Spectrum Issue 7



Spectrum

HORACE MANN'S PREMIER SCIENCE PUBLICATION • MAY 2013



NOTE from the EDITOR

Dear Readers,

Earlier this year, in an article on Discovery News, I read about how Francis Schwarze, a researcher at the Swiss Federal Laboratories for Materials Testing and Research, made an unexpected discovery while using sound waves to check up on the health of certain trees. Schwarze found that wood-attacking fungi give wood certain interesting acoustic properties. Along with a violin maker, Michael Rhonheimer, Schwarze decided to build a violin from fungus-infected wood to see how it sounded. They used spruce for the top plate and sycamore or maple for the bottom plate, submerged each type of wood in water, and allowed fungi to grow on it. After letting the wood become sufficiently affected by the fungi, they created a few violins from it. When the instruments were tested against a Stradivarius, some listeners identified the fungus-treated violins as the Strad. Maybe in the future, violin makers will start to use fungus to create instruments!

Science is truly everywhere. As a violinist myself, I love learning more about music, especially from a scientific viewpoint. This project also demonstrates how science can take you anywhere: Schwarze started out checking the health of trees and wound up in violin making! The articles in this issue explore any topic the writers chose, showcasing the range of interests Horace Mann students have in science. In addition, we feature the research or independent projects of three seniors, including Roya Moussapour (12), who built an electric violin!

As we get ready to pass the magazine on to the juniors, I have been reflecting on all Spectrum has meant to me these past few years. I have learned so much from reading and editing articles on topics I would never have encountered otherwise! I know the juniors will do a great job with the magazine next year; we could not have done this year without their help, and this issue was primarily their work. I'll miss everyone from Spectrum, and I wish the juniors luck for next year!

Deepti Raghavan
Editor-in-Chief

Spectrum is a student publication. Its contents are the views and work of the students and do not necessarily represent those of the faculty or administration of the Horace Mann School. The Horace Mann School is not responsible for the accuracy and contents of Spectrum, and is not liable for any claims based on the contents or views expressed therein. The opinions represented are those of the writers and do not necessarily represent those of the editorial board. The editorial represents the opinion of the majority of the Editorial Board. All photos not credited are from creativecommons.org. All editorial decisions regarding grammar, content, and layout are made by the Editorial Board. All queries and complaints should be directed to the Editor-in-Chief. Please address these comments by e-mail to hmspectrum@gmail.com. Spectrum recognizes an ethical responsibility to correct all its factual errors, large and small (even misspellings of names), promptly and in a prominent reserved space in the magazine. A complaint from any source should be relayed to a responsible editor and will be investigated quickly. If a correction is warranted, it will follow immediately.


Deepti Raghavan Editor-in-Chief

Jay Moon

Production Director

Jay Palekar Justin Bleuel Executive Editors

Michael Herschorn Managing Editor

Juliet Zou

Business Manager

James Apfel Senior Columnist

David Zask News Editor

Joanna Cho Yang Fei Ricardo Fernandez Jennifer Heon Mihka Kapoor Henry Luo Teddy Reiss Amanda Zhou Brenda Zhou Junior Editors

Dr. Jeff Weitz Faculty Advisor


SECTION 1 • PAGE 5

BIOLOGY & HEALTH

Mediterranean Diet • Jenna Karp • 5
Sugar • Grant Ackerman • 6
Phobia • Rebecca Okin • 7
A Breakdown of the Immune System • Kundan Guha • 8
Stem Cell Transplants • James Kon • 9
Robots in Surgery • Isabelle Friesner • 10
Genome Sequencing • Lauren Hooda • 11
The Human Genome Unveiled • Sam Lurye • 12
Turning Living Cells into Computers • Ethan Gelfer • 14
Parallel Processing • Ajay Shyam • 15
Brain Activity Map • Elizabeth Xiong • 16
The Brain and Music • Eliza Christman-Cohen • 18

SECTION 2 • PAGE 19

TECHNOLOGY

A Digital Microscope • Lily McCarthy • 19
Military Technologies • Will Ellison • 20
Google Glass • Jason Ginsberg • 22


Thorium • 23
Hydroelectric Power • 24
Chelyabinsk Meteor • 26
Simultaneity and Synchronicity • 25
Planck Satellite • 28
Space Debris • 29

Aditya Ram, Veer Sobti, Kasia Kalinowska, Abigail Zuckerman, Dan Yahalomi, Jeffrey Weiner

SECTION 3 • PAGE 31

FEATURES

Oxygen Liberation • 30
Model City • 32
The Big Leap • 34
The Mona Lisa • 36

Ricardo Fernandez, Lauren Futter, Josh Siegel, Sam Stern

SECTION 4 • PAGE 37

RESEARCH

Paige Burris • 37
Roya Moussapour • 38
Michael Herschorn • 39

Our Mission: To encourage students to find topics in science that interest them and move them to explore these sparks. We believe that science is exciting, interesting, and an integral part of our futures. By diving into science we can only come out more knowledgeable.



Francis Jimenez Meca

The Mediterranean Diet

By Jenna Karp

Common foods in Mediterranean Diet.

People have wondered for years about the cardiovascular benefits of the Mediterranean diet, a nutritional plan that became popular in America during the 1990s. The Mediterranean diet, rich in olive oil, nuts, beans, fish, and produce, is inspired by the traditional foods consumed by southern Italians, Greeks, and Spaniards. Consumption of dairy products, red meat, and processed foods, though, is limited. For the first time, nutritional researchers have shown that this diet has cardiovascular benefits, perhaps as much benefit as medication. The Mediterranean diet's impact on cardiovascular disease had been unclear until recently. According to Gina Kolata, who studied biology at MIT and writes for the New York Times: "Until now [February 2013], evidence that the Mediterranean diet reduced the risk of heart disease was weak, based mostly on studies showing that people from Mediterranean countries seemed to have lower rates of heart disease — a pattern that could have been attributed to factors other than diet." A study that

tracked patients at risk of cardiovascular disease and contained a control group was necessary. That crucial study was finally initiated in 2003 by Spanish researchers, who observed 7,447 people over five years. The participants in this study included men aged 55 to 80 and women aged 60 to 80. Although these subjects had no cardiovascular disease at enrollment, they were at high risk of developing it. Many participants smoked, were overweight, or had a family history of premature coronary heart disease. The participants were split into three groups. A control group consumed a traditional low fat diet. Another group ate a Mediterranean diet that included at least 4 tablespoons a day of extra-virgin olive oil. The last group ate a Mediterranean diet and had one additional ounce total each day of hazelnuts, almonds, and walnuts, which are rich in the omega-3 fatty acid alpha-linolenic acid. After five years, the researchers found a 30 percent reduction in the rate of heart-related diseases, especially strokes, among the Med-

iterranean diet eaters versus those who followed a regular low fat diet. The control group had a total of 109 heart attacks, strokes or deaths from heart disease during the study. However, there were just 83 instances of heart-related disease in the Mediterranean group that consumed extra nuts, and 96 in the Mediterranean group that consumed additional olive oil. Besides its nutritional benefits, there may be another reason why the Mediterranean diet is successful: it seems easier to maintain than a more restrictive low fat regimen. Participants in the Spanish study assigned to the low fat diet had difficulty sticking to it and required additional counseling to do so, whereas those assigned to the Mediterranean diet had an easier time abiding by it. Perhaps the inclusion of "good fat" in the Mediterranean diet keeps adherents satisfied but healthy at the same time. The user-friendly Mediterranean diet therefore offers a powerful weapon in the fight against cardiovascular disease.



Lollipops are a type of candy. Candy is a main source of sugar for ingestion, especially for children.

sugar

Although sugar has been extracted from sugarcane in the Far East since ancient times, it was not available to the public until the 18th century. Since then, sugar has become so widely obtainable that it quickly developed into a necessity, even though it is so unhealthy. From an evolutionary standpoint, humans generally love sugar because the primates they originated from consumed sugar to gain a large quantity of quick energy. For those primates, sugar was hard to find; it was impossible for them to consume too much. Today, however, people have greater access to sugar, so it is easy to overindulge. Sugar loosely refers to carbohydrates, including monosaccharides and disaccharides. Almost all simple sugars have the molecular formula CnH2nOn, where n can range from 3 to 7. All told, there are over 50 different types of sugar, including glucose, sucrose, and fructose. High fructose corn syrup (HFCS) is often vilified as being worse than regular table sugar (sucrose), but this accusation is false. HFCS and sucrose have almost the same chemical makeup; sucrose contains 50 percent fructose, while HFCS has approximately 50 percent fructose as well. The reason that HFCS is used in abundance is that it is usually significantly cheaper than sugar. Recently, scientists have been discovering that sugar has greater effects than just extra calories. Dr. Robert Lustig, a pediatric endocrinologist at the University of California, San Francisco, argues that sugar is a toxin. He claims that sugar causes diseases such as obesity, type 2 diabetes, hypertension, and heart disease. The suspected culprits behind these illnesses are not only table sugar, honey, and sugary drinks but also almost every processed food imaginable. According to Lustig, sugar is often hidden in sauces, bread, peanut butter, and more. Americans now consume about 130 pounds of sugar annually per person, or about a third of a pound daily. Dr. Kimber Stanhope, a nutritional biologist at the University of California, Davis, also studies the effects of sugar on health. In one of her studies, she first made her test subjects eat controlled diets low in added sugar to determine a baseline for her experiment. She then replaced 25 percent of their calories with sweetened drinks and observed their bodies' reaction to the change. She found that the people who had the


dhammza, Flickr Photo Sharing

by Grant Ackerman

increase in sugar in their diets also had substantially increased blood levels of LDL cholesterol and other risk factors linked to heart disease within just two weeks. This is because when the bloodstream is overburdened with sugar, the liver converts some of the sugar to fat, which can end up in the bloodstream and generate LDL cholesterol. LDL particles can lodge in blood vessel walls and form the plaque that is associated with heart attacks. Dr. Lewis Cantley, a professor in the Departments of Systems Biology and Medicine at Harvard Medical School and Director of Cancer Research at the Beth Israel Deaconess Medical Center, believes that increased sugar intake raises the risk of developing cancer. About a third of common cancers, including breast and colon cancer, have insulin receptors on their cell surfaces. Insulin signals tumor cells to start consuming glucose, and this increased ingestion causes tumors to grow larger. Dr. Eric Stice, a neuroscientist at the Oregon Research Institute, is finding that sugar activates the brain in a way similar to how drugs like cocaine affect the brain. Using MRI scanners, Stice noticed that when a subject takes a sip of soda, the reward region of the brain responds by releasing dopamine. This evidence suggests that sugar could be as addictive as drugs. In addition, Stice has learned through his studies that when people consume sugar frequently, their reward region responds less to the sugar. People who consume a lot of sugar, therefore, can build up a tolerance, similar to the tolerance drug abusers build up. In order to feel the same satisfaction, they have to eat more sugar than before, and doing so increases the risk of disease, resulting in a vicious cycle. Should we stop eating added sugar altogether? Cantley has started doing so himself, but most people cannot totally eliminate sugar from their diets. A more realistic approach is to cut down on sugar intake. Lustig suggests that men consume no more than 150 calories from added sugars a day, and women no more than 100. Even this seems unrealistic, but extremes may be necessary. According to Lustig, sugar belongs in the same boat as tobacco and alcohol.
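To make the general formula mentioned above concrete, here is a small illustration in Python (a sketch added for this article, covering only the simple sugars the formula describes):

# Simple sugars follow the general formula CnH2nOn for n = 3 through 7.
for n in range(3, 8):
    print(f"n = {n}: C{n}H{2 * n}O{n}")
# n = 6 prints C6H12O6, the formula shared by glucose and fructose;
# sucrose, a disaccharide of the two, is C12H22O11 instead.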


D Sharon Pruitt, Wikipedia Commons

PHOBIA By Rebecca Okin

Achluophobics, such as the child pictured above, fear the possible dangers hiding in the dark more than darkness itself.

From the common altophobia (fear of heights) to the ironically titled sesquipedalophobia (fear of long words), phobias permeate society, sometimes driving individuals to behave abnormally. What is a phobia? A phobia is a severe fear of a specific activity, object, or situation. In order to properly diagnose a patient with a phobia, doctors ask the patient a series of questions about the subject. For example, a doctor diagnosing a patient with agoraphobia, the fear of open spaces, would ask the patient about his or her behavior within open areas such as farmland and plazas. In some circumstances, patients test themselves by using what Anxiety UK, a British charity, calls a "Do-It-Yourself diagnosis," a list of various scenarios that may or may not make the test-taker anxious. Interestingly, researchers have found that first-degree relatives of someone with a phobia are three times more likely to suffer from a phobia than relatives of someone without a phobia. However, scientists are still unable to isolate a gene directly associated with this type of anxiety disorder. Meanwhile, new research has found that brain chemistry may be an alternate cause of phobia. According to some published studies, patients with phobias tend to have abnormally high or low serotonin levels.

Life experiences are also theorized to be possible origins of phobias. Many people develop their fears through memorable negative experiences stored in the lateral nucleus of the amygdala, in the brain's temporal lobe. On the other hand, other patients claim never to have had an interaction with their phobic stimuli. As a result, psychologists have developed a theory stating that some phobias evolved as a survival instinct for human beings. People who harbored a fear of dangerous objects and situations, such as spiders, heights, and lightning, were more likely to survive and pass on the trait. This theory may explain the development of achluophobia, the common fear of the dark. This phobia encompasses the fear of the possible dangers that darkness hides rather than fear of darkness itself. Long ago, early humans venturing out at night were always vulnerable to animal attacks. Thus, whether or not a person had fallen victim to an attack, he or she feared the possibility of being attacked. Over time, these individual fears were enough to develop into an innate phobia of darkness for the whole population. Today, there are many methods of treatment available, such as the use of selective serotonin reuptake inhibitors, a class of antidepressants. As individuals start to develop new fears, who knows what phobias may develop in the future?



A Breakdown of the Immune System

Eneas de Troya, Wikimedia Commons

By Kundan Guha

The flu shot, as pictured above, is administered to millions of people in preparation for the flu outbreak that occurs annually. It is crucial in providing resistance to the various strains of influenza but is still a work in progress itself.

Every winter, people fall ill with certain infections, often suffering from very similar symptoms. That is because these infections are all due to the same virus: influenza, or "the flu." Despite our best efforts to protect ourselves from influenza, it infects millions of people annually and causes between 200,000 and 500,000 deaths each year. To combat the virus, scientists have created the trivalent influenza vaccine, more commonly known as the flu shot. Yet even with the flu shot, influenza remains a major threat. To understand why, we have to look at how vaccines and the immune system work. When any disease-causing microorganism, or pathogen, enters the human body, it triggers an immune response. This response can be activated in various ways. A helper T-lymphocyte finds a pathogen and recognizes it as a foreign substance, or antigen. Helper T-cells act as the primers of the immune system, or the generals of the immune army. Helper T-cells are presented antigens by the expression of class II Major Histocompatibility Complexes (MHC) on cell surfaces. The helper T-cells then proceed to call in the army by priming killer T-cells and B cells. In the


case of a virus such as influenza, the B cells start producing antibodies tailored to the virus. Antibodies are the body's equivalent of cruise missiles, seeking out a specific target on the antigen and binding to it with the protein on the tip of the antibody. This tip is a variable protein, created by the activated B cell, specifically designed for whatever antigen caused the response. However, this is a lengthy process and takes time to fully activate. In this time, the antigen can wreak havoc upon the body and spread further and further. The body's solution is to speed up future responses. Upon being activated, a B cell splits into plasmocytes and memory B cells. Plasmocytes are the antibody production factories; they initially produce IgM, a large antibody whose size gives it many binding sites with which to catch antigens. Memory B cells, as the name suggests, store the memory of the antigen and will propagate throughout the body even after the infection is dealt with. In the event that the body comes into contact with the same antigen again at a later time, memory B cells can bypass the T-cell priming and immediately start to divide into plasmocytes. This time, the plasmocytes will produce mainly Immunoglobulin G (IgG) instead of IgM. The faster response time ultimately

will lead to a much shorter and less severe illness, or no illness at all. Vaccines serve to create those memory B cells without actually infecting the recipient. This way, in the event that a person is infected with influenza, his or her body will be able to react to the infection and prime faster. However, a vaccine will only work on the specific strain of the virus it was designed for. Therefore, if a vaccinated person were to encounter a completely different strain of influenza, his or her memory B cells would not recognize it, and naïve B cells would have to differentiate into new memory B cells and plasmocytes for this infection. It is precisely for this reason that the most common flu shot includes antigens for three strains of influenza, as opposed to just one. Influenza mutates rapidly, and it will have mutated enough to be unrecognizable to old memory B cells by the next seasonal epidemic. Therefore, a new vaccine is required each year. Without a means of immunizing humans for longer periods of time against fast-mutating viruses, it will be impossible to completely protect ourselves from influenza. No matter how advanced our technology is, there is no beating our own body and the many pathogens out there.


Expanding Your Options

Tareq Salahuddin, Flickr Photo Sharing

New Stem Cell Transplants Provide an Alternative Treatment

This photo depicts a person working on stem cell research in a lab.

By James Kon

A patient is about to undergo treatment for leukemia, a disease that causes the bone marrow to produce abnormal white blood cells. These defects can cause anemia, bleeding, and infection. As a result, the patient needs an Autologous Hematopoietic Stem Cell Transplant, a type of bone marrow transplant in which the patient's hematopoietic stem cells are collected before he or she undergoes treatment intended to eliminate the malignant cells. Afterward, those cells are inserted back into the patient's body. Georges Mathé was the first to perform a bone marrow transplant, but E. Donnall Thomas, at the Fred Hutchinson Cancer Research Center, was the first widely credited with the procedure when he led a team in the 1970s. The basic idea of the procedure is to collect stem cells from the body and insert them into the patient's bloodstream with an intravenous line, which would ideally increase the patient's blood count and restore production of healthy blood cells. There are two different Hematopoietic Stem Cell Transplants that a patient can receive. The first is the Autologous Hematopoietic Stem Cell Transplant, as mentioned above, in which the stem cells come from the patient. There are three different locations

from which the stem cells can be taken: the bone marrow, the bloodstream, or the umbilical cord. The second is the Allogeneic Hematopoietic Stem Cell Transplant, in which the cells are harvested from a related family member or an unrelated donor. According to the World Health Organization, there is a 30% chance that a sibling will match another sibling, while roughly 1 out of 500,000 unrelated individuals will be able to transfer their cells to a given patient. Before the cells are given to the patient, they go through Human Leukocyte Antigen (HLA) testing, which identifies a unique protein group located on the surface of the cell membrane. This transplant procedure is used for many different cancers, but primarily for leukemia and lymphoma, and it is most effective when the cancer is in remission. Healthy stem cells are inserted into the bone marrow, allowing them to reproduce into healthy cells that the person would not otherwise be able to produce. Although this is a viable option, most patients with cancer undergo chemotherapy or radiation, forms of therapy that target defective cells that divide rapidly. Consequently, the bone marrow gets severely weakened or destroyed, inhibiting oxygen flow in the body and making

the immune system weak. However, with an Autologous Hematopoietic Stem Cell Transplant, high doses of chemotherapy can eliminate cancer cells and then be followed by stem cell rescue. In addition, an Autologous Hematopoietic Stem Cell Transplant can be used for immune therapy to help control cancer after the transplant. Finally, it can replace sick or damaged bone marrow that results from disease or therapy. As a result, people will have a better chance of survival, and more of them will live to fight another day. However, there are a few complications that could arise with this procedure. The inserted stem cells produce white blood cells that could attack the host's cells, leading to damage to the skin, intestines, and liver. Furthermore, according to St. Jude's Hospital, 83% of children who undergo this process will develop one or more additional conditions, while 13% of children will not have any lasting complications. Even though this procedure can be risky, it is a gamble worth taking: the Autologous Hematopoietic Stem Cell Transplant gives patients an opportunity to live.



stilldavid, Flickr Photo Sharing

The Use of Robots to Help Perform Surgery

by Isabel Friesner

DaVinci Robot Control Console

When people imagine robots, they primarily think of creations like WALL-E or Iron Man. To most people, such digital creations made by screenwriters seem fictitious and inconceivable. In reality, surgeons use robots to help with operations. Dr. Moll, the founder of a prominent surgical robot company, Intuitive Surgical, developed a technology in which the surgeon sits and looks into an ergonomically designed console displaying a 3D image of the inside of the patient. Nearby, the patient lies on a cart while four interactive robot arms make small, precise incisions. A camera is inserted into an incision so the surgeon can see what the robots are doing. The surgeon is still the one performing the surgery, just at one remove: he or she holds two handles attached to the console that control the robotic hands. Every time the surgeon needs to change tools, he or she extracts the robot arms and switches the tool before re-entering the patient. Robotic arms are able to work in a more compact area and turn at sharper angles to get into just the right place. As a result, the surgeon has much more control over where the instruments are moving. Also, the system is able to reduce the hand tremors or nervousness that the surgeon might have. Furthermore, the surgeon does not have to stand for hours on end. According to the NIH's MedlinePlus encyclopedia, surgeries that use robots are also minimally invasive for the patients. This can result in a reduced hospital stay, less pain or blood loss, and a lower risk of infection. However, there are some disadvantages to using robots in surgeries. Using the machinery requires additional training for doctors, and lack of experience can slow down the procedures. Additionally, the doctor cannot tell how hard the robots are pulling, cutting, or stitching, and the doctor is unaware of exactly how the incisions are affecting the patient. If an unexpected complication occurred, the sur-

geon would have less control. Many patients have been asking for robotic surgery under the impression that robots are more efficient and safer, but it is not yet known whether robotic technology is more beneficial than traditional surgery. Robotic surgery has the potential to produce breakthroughs in the field of medicine and will continue to develop as technology improves.

DaVinci, the surgical robot referenced in the article.

Wikimedia Commons, as Printed in Popular Science



By Silky M, as published on Wikimedia Commons

Genome Sequencing

According to the Human Genome Project, it would take about 9.5 years to read the 3 billion bases in a human genome continuously at a rate of 10 bases per second, or 315,360,000 bases per year. The human genome has baffled the greatest scientists of the 21st century, embodying unsolved mysteries of evolution and biology. Understanding it will reveal the genetic triggers that instigate disorders and diseases caused by mutations. Until recently, scientists used computer analysis and biochemical tests to elucidate the biological functions of the 3 billion chemical bases. Researchers discovered that some DNA base pairs serve as landing spots for proteins that regulate gene activity. Others are converted into strands of RNA, performing functions themselves. In addition, many chemical bases are simply locations where chemical modifications work to silence stretches of the chromosomes. However, researchers could only examine a few thousand nucleotides at a time. Now, next-generation genome sequencing tools have provided raw data on over 95 percent of the bases in an individual's DNA. They have ushered in a new era of medicine that allows doctors to detect disease, make accurate diagnoses, and customize medical treatments to fit an individual's own base pair sequence. Sequencing technologies have developed tremendously within the last decade. In 2003, the U.S. government's Human Genome Project mapped the complete human genome, requiring 13 years and costing about $3 billion. The project identified roughly 21,000 genes in human DNA and recorded the sequence of the three billion base pairs that make it up. Today, according to an article in the journal Nature, sequencing can be done in a stand-alone laboratory, in one day, for several thousand dollars. Scientists can detect every deletion or duplication of an entire chromosome or a single base pair, identifying genes for several inherited diseases like Huntington's and cystic fibrosis. However, the ultimate success of these endeavors depends on geneticists' ability to analyze and

By Lauren Hooda

translate what is discovered into better diagnosis, treatment, and prevention of disorders and diseases. The mutational profiles of cancer genomes, especially those of leukemia, are a particular area that has advanced due to next-generation sequencing technologies. Sequencing can now recognize nearly every genetic event present in an individual tumor. Machines have the ability to re-sequence, analyze, and compare normal and cancerous genomes, whose chromosomal structures are frequently transformed by amplification, deletion, translocation, and inversion of chromosomal segments. Researchers at Oxford University have pioneered meticulous genome-wide characterizations of structural variation in tumor genomes using re-sequencing and variant detection methods. Using this knowledge, leukemia researchers at Washington University sequenced RNA, a close chemical cousin of DNA, for clues about the cancerous genes' effects, as reported by the New York Times. For several leukemia patients, such research found the culprit: a normal gene was supplying excess amounts of a protein that appeared to be inciting the cancer's growth. Under this new sequencing approach, researchers expect that treatment will be tailored to the individual mutations of the tumor, with drugs that shut down several key aberrant genes at once. A genomic era of biological and medical studies is developing rapidly, fueled by the emergence of next-generation sequencing technologies. However, the possibility for anyone, healthy or not, to have his or her genome sequenced and analyzed raises the question of whether there is a compelling reason to know if one's genes make one susceptible to a specific disease, as the Wall Street Journal has reported. On one side are geneticists and doctors who see sequencing as a preventive diagnostic test. On the other side, individuals like Robert Green, director of the MedSeq project and a medical geneticist, warn about potentially imprecise results. For now, genome sequencing remains in its infancy and is dauntingly complex.
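The reading-time figure quoted at the start of this article can be checked with quick arithmetic; the short Python sketch below (an illustration added here, not part of the Human Genome Project's materials) reproduces the 9.5-year estimate from the stated rate of 10 bases per second.

seconds_per_year = 60 * 60 * 24 * 365       # 31,536,000 seconds in a year
rate = 10                                   # bases read per second
bases_per_year = rate * seconds_per_year    # 315,360,000 bases per year
genome_size = 3_000_000_000                 # roughly 3 billion bases
print(genome_size / bases_per_year)         # about 9.5 years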



The Human Genome Unveiled

By Sam Lurye

Jamlos, Flickr Photo Sharing

Throughout the course of human history, DNA, and genes in general, remained a mystery, a piece of inaccessible knowledge. But what if humans were able to change and modify their genes, increasing their overall health and ability to resist infection, or even curing genetic diseases such as cancer altogether? With the first, and surprisingly recent, sequencing of a full human genome, researchers have begun to do just that. Since its discovery, DNA


has been linked to many severe disorders such as cancer, cystic fibrosis, Parkinson’s disease, Huntington’s disease, and countless other conditions. Researchers realized that if the disorder can be eliminated at its source, a defective gene, it can be stopped. Gene therapy, a method through which disease-causing genes can be corrected and replaced, now makes the concept of changing one’s genes a reality. In order to accomplish this, a function-

ing copy of the defective gene is inserted into a specialized virus, called a vector, which carries the gene to its target in the cell. In 2011, as reported by Nathan Seppa in Science News, researchers successfully improved the motor capabilities of individuals with advanced Parkinson's disease. This specific gene therapy focuses on a region of the brain called the subthalamic nucleus. Individuals with Parkinson's

This photo shows an example of gene therapy using an adenovirus vector.


disease have a shortage of the substance dopamine, which in turn causes a shortage of the neurotransmitter GABA (a chemical that transports information between neurons in the brain). Without GABA, there is an increase in activity in that region, and this increased activity actually inhibits signals that help to regulate muscle movement. Gene therapy, however, supplies a gene that produces an enzyme that stimulates GABA production in the subthalamic nucleus. The production of GABA helps stabilize the signals that control muscle movement in the brain. Thanks to this therapy, 16 individuals with Parkinson's had a physical

331 pten, Wikimedia Commons

This photo shows a close up of Adenovirus-mediated gene therapy. The adapted DNA enters the nucleus and is added to the current DNA.

“By modifying the vector that is used, researchers have been able to exponentially increase the effectiveness of the virus at delivering the working CFTR gene to its target.”

movement score increase of 23.1%, a figure determined by a standard scoring method used to measure the movement of a patient with Parkinson's disease. Unfortunately, the process of gene therapy is a dangerous one. Because the virus acting as a vector must actually infect the body's cells so as to allow

functioning DNA to enter the cell, there is the risk of a severe and potentially deadly immune response by the body. In 1999, during early experiments with gene therapy, a teenager named Jesse Gelsinger died after participating in a trial of the process when an overreaction by his immune system triggered multiple major organ failures. This danger means that researchers are faced with the challenge of developing a vector that is not only safe for human use but also an effective treatment. As of now, researchers at the University of California, Berkeley, and the University of Iowa have cured human lung tissue affected by a disorder called cystic fibrosis, which is caused by a mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) gene. Cystic fibrosis affects the body's mucous membranes, especially in the lungs, resulting in difficulty breathing and usually death before the age of 40. By modifying the vector that is used, researchers have been able to exponentially increase the effectiveness of the virus at delivering the working CFTR gene to its target. Even with these advances, gene therapy is not yet practical enough to be a widespread and easily accessible treatment. However, it clearly has great potential and could be the answer to curing even some of the worst diseases imaginable.



Turn Living Cells into Computers by Ethan Gelfer

The picture below shows coding for computer programs.

At the very core, logic operations run computers. "Yes" and "no" answers are represented by 0's and 1's. In order to run programs, the computer processor simply sends electrical impulses that turn certain "switches" on or off. Researchers at MIT decided to apply these logic operations to the field of biology, designing DNA molecules that run logic operations inside living cells. The research involves E. coli and circular strings of primitive DNA called plasmids. According to an article titled "How to Turn Living Cells into Computers," by Roland Pease, in the journal Nature, the researchers created 16 plasmids, corresponding to each of the 16 binary logic functions in computation. According to Pease, "each variant comprises promoter and terminator DNA sequences, which start or halt gene transcription, and an 'output gene' that encodes a green fluorescent protein." This system utilizes recombinase enzymes, which switch promoter and terminator DNA sequences on and off by reordering them. Data has already been rewritten into a DNA code using this technology. The latest work at MIT takes the recombinase work a step further, not only working with transcribed DNA and RNA, but also altering the DNA code itself to reflect the results of the operations. Timothy Lu, a researcher involved in the project, gives an explanation: "If the DNA that you alter is a regulatory element, like a promoter

sbengineer, Flickr Photo sharing

sequence or a terminator, then that gives you the ability to control something inside the cell. And it's that control that gives you the logic." Using DNA sequences as a computer's hard drive instead of current silicon drives is the next step, building on another innovation in the field of synthetic biology. Combining these two properties could lead to computers with much larger data storage capacities and longer drive lifetimes. Even within a lineage of bacterial cells, the plasmids will still be passed down through at least 90 cell generations before mutations render them unusable, according to the researchers' findings. DNA itself is a long-lasting organic compound, and it does not require any energy to store information. Currently, magnetic tape is the closest man-made equivalent to DNA as a storage medium; however, it lasts for only about 10 years before breaking down and losing data. DNA can store 99.97% of its data for at least 100,000 years. After all, when woolly mammoths and Neanderthal humans were discovered, it was not their cells that remained; there were just bones, and DNA with most of the information still intact. So in 20 years, the new MacBook Pro might be selling with the slogan "New E. coli Inside: all new, 300 petabytes in a drive the size of a penny."
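For readers curious why the MIT team needed exactly 16 plasmids, the count comes from the number of possible two-input logic functions. The Python sketch below (a plain computational illustration, not the researchers' genetic implementation) enumerates them: a gate is defined by its output on the four possible input pairs, giving two to the fourth power, or 16, functions.

from itertools import product

input_pairs = list(product([0, 1], repeat=2))   # (A, B): (0,0), (0,1), (1,0), (1,1)

# Each two-input gate is fully described by its outputs on those four pairs,
# so there are 2**4 = 16 possible gates -- one per engineered plasmid variant.
gates = list(product([0, 1], repeat=4))
print(len(gates))                               # -> 16
for outputs in gates:
    print(dict(zip(input_pairs, outputs)))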

Background Photo: This picture portrays many series of binary code.


mb3rd, Flickr Photo Sharing


Parallel Processing Why the Brain is Faster than Super Computers by Ajay Shyam

Mdd, Wikimedia Commons

Brain Parallel Processing. The picture above is a “neural circuit” in the brain, composed of parallel fibers.

Many people throughout the years have compared the brain to a supercomputer, but this argument has often been dismissed as complete balderdash. The brain is actually capable of doing far more than a supercomputer, even though people associate the brain with the simplest of activities. But why is this? How can the brain, a collection of flesh with electricity running through it, be on par with a monstrous, data-crunching, equation-smashing machine? All these questions can be answered by a nifty little concept known as parallel processing. Jonathan J. Nassi and Edward M. Callaway, both researchers at the Salk Institute for Biological Studies, describe parallel processing as the brain's ability to interpret and process incoming stimuli of different types (touch, sound, etc.) at the same time. This idea is displayed in the tasks the brain performs. One important function of the brain is interpreting the outside world and environment for a person, allowing him or her to do everyday tasks dismissed as elementary. It allows people to implement logic in decision making. Another role of the brain is simultaneously registering stimuli gathered from the five senses. If these stimuli were registered only one at a time, the brain would allow a person only to feel an object first, and then perceive it; afterward, when the person's other senses had been shut off, he or she would be able to hear the sound it makes, and so on. However, this is obviously not the case. A person can not only see an object, but also register its texture, the sound it makes when dropped, etc., as well

Hic et nunc, Wikimedia Commons

as register smells, sights, and other background information at the same time. In the brain, the stimuli received from receptors undergo an interesting process. These stimuli travel along special channels developed to carry inputs corresponding to a certain sense. The signals then pass into parallel streams in order to give the incoming stimuli an efficient input quality. They are then further analyzed by the cortex so that a person may identify what he or she is viewing. In theory, a vast number of stimuli could pass through the spinal cord and into the brain, eventually being registered and stored as memories, making the brain more efficient than a supercomputer. This efficient stream can then be further elaborated upon using two methods, bottom-up and top-down processing. According to Polly Peterson, Ph.D., bottom-up processing is essentially where the brain puts together a "whole" image from all the individual parts given by the stimuli, thus creating a mental representation that can then be perceived by the person. Top-down processing is where a person's beliefs and expectations influence the "whole" image produced. With all of these techniques and others working together, the brain can be regarded as faster than even the best supercomputer. Parallel processing is being applied to computers as well, though none have yielded the same speed as the human brain. For now, the human brain, a remarkable organ that sets humans apart from other species and objects, is still the fastest processor of information.
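The computing analogy in this article can be made concrete with a small sketch. The Python snippet below is an illustration of parallel versus serial processing on a computer, not a model of actual neural circuitry; the stimulus names and the 0.1-second "processing time" are invented for the example.

from concurrent.futures import ThreadPoolExecutor
import time

def process(stimulus):
    time.sleep(0.1)                      # stand-in for analyzing one sensory channel
    return stimulus + " registered"

stimuli = ["sight", "sound", "touch", "smell", "taste"]

# Handled serially, the five channels would take about 0.5 seconds in total;
# handled in parallel, they finish together in roughly 0.1 seconds.
with ThreadPoolExecutor() as pool:
    print(list(pool.map(process, stimuli)))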

Neural Network in the Brain.



A connectome is a comprehensive map of neural connections in the brain, with different firing pathways marked by different colors. foto_mesh, Flickr Photo Sharing

Brain Activity Map By Elizabeth Xiong

The front page of The New York Times announced Obama's Brain Activity Map (BAM) proposal in February to much fanfare and speculation. Vaguely hinting at support for the initiative in his State of the Union address, Obama alluded to scientists "mapping the human brain to unlock the answers to Alzheimer's," reaching "a level of research and development not seen since the height of the Space Race." While the project's specific details have not yet been released, Obama is expected to publish the budget request for BAM soon, with estimated figures of at least $300 million per year for ten years. The original white paper that BAM was based on called for the need "to record every action potential from every neuron within a circuit." Published in last summer's issue of the journal Neuron, this paper indicates that a total of around 100 billion neurons would have to be observed and their action potentials recorded to form a comprehensive brain map. The authors of the article, who include molecular geneticist George M. Church of Harvard University and biologist Rafael Yuste of Columbia University, put forward a plan for a 15-year internationally collaborative effort funded by both federal and private organizations. BAM's success seems preordained by the similar Human Genome Project, a successful program that, for every dollar invested into it, "returned $140 to our economy," notes Obama. BAM faces several obstacles, however, including the lack of a precise direction and the possibility of diverting funding from other key neuroscience research programs. While BAM will not be a panacea for neuroscience research, it will become a key program for our fundamental understanding of the human brain and deserves both the consideration and funding Obama seems prepared to give it. Networks of networks of billions of neurons are the wellsprings of our very perception and action. "Humans are nothing but our brains," says Yuste, stressing the importance of a map of human brain activity. "Our whole culture, our personality are a result of activity in the brain." BAM would essentially be


a technical development project aimed at devising techniques that would both measure and stimulate neurons with exquisite spatial specificity, with an end goal of exploring every signal sent by every cell and tracking how the resulting data flows through neural networks and is ultimately translated into thoughts, feelings and action. Neural circuit function is emergent: it arises from complex interactions among its constituents, the brain's billions of neurons, but neuroscientists have traditionally relied on electrodes that sample brain activity only very sparsely—from one neuron to a few of them within a given region. With current technology one can probe molecular and biophysical aspects of individual neurons and also view the human brain in action with magnetic resonance imaging (MRI), but the mechanisms of perception, cognition, and action remain mysterious because those emerge from the real-time interactions of large sets of neurons in densely interconnected, widespread neural circuits. An MRI is like looking at "a page" of a magazine from "six feet away," says Brown neuroscientist John Donoghue, another core scientist involved in BAM. Meanwhile, simply dissecting or manipulating single cells or studying several of them interacting at a time is "like looking through a microscope and seeing every ink imperfection in a 'T.' Maybe you don't want to do that if you want to understand what a paragraph says." What's missing, says Donoghue, is "that middle level of analysis," understanding how the brain transforms thought into action. BAM would provide that level of analysis, in part by developing new tools needed to study neural networks to such a detailed extent. BAM would help decrease the prices of neuroscience machinery, bringing down the cost and increasing the quality of the technology, according to George Church, one of the key minds behind BAM and also one of the earliest scientists working for the Human Genome Project (HGP). In fact, HGP brought down the cost of genome sequencing a million-fold, according to Church, and many BAM advocates say that BAM will do for neuroscience what HGP


did for genetics. But some scientists argue against the comparison drawn between BAM and HGP. Princeton genomicist Leonid Kruglyak says that whereas HGP had a definite goal—defining three billion base pairs—with obvious applications, from molecular medicine to DNA forensics, BAM has no clear real-world significance. Moreover, before HGP, researchers were already searching for disease genes and sequencing the genome piecemeal; HGP merely centralized the process efficiently, reducing costs and accelerating progress towards a precise target. Neither protest, however, holds much ground. Understanding the physical representatives of our behaviors, motivators, competencies, acumen and emotional quotients is incredibly useful—even Kruglyak acknowledges the "value" of BAM. Although BAM does not at the moment have any palpable applications, it adds to our knowledge of the brain and moves us closer to understanding mental illnesses and other neurological disorders. It will also yield cheaper treatment and cures for those mental illnesses. Additionally, many scientists are already working towards a brain map, says John Donoghue. In 2006, Microsoft co-founder Paul Allen funded the first complete mapping of the mouse brain. And this March, researchers from Howard Hughes Medical Institute's Janelia Farm Research Campus mapped the brain of a zebrafish larva, capturing at least 80 percent of the baby fish's neurons in 1.3 seconds.

Another obvious problem arises: where will the money come from? As unlikely as it is, Yuste and other BAM scientists hope for new money to be put into the project, a total of over $3 billion over the course of ten years, rather than reapportioning money from other programs. Yuste cites HGP as evidence of the possibility, since Washington sequenced the human genome. With the recent sequester cuts, Yuste's hope seems improbable, but any money used to fund BAM will be offset in the long run by the decrease in spending on medicine and technology for the brain. For example, as BAM strengthens our grasp of Alzheimer's, autism, and a host of other brain conditions, it decreases the social cost of caring for the people afflicted with those diseases. Alzheimer's alone will cost the nation $203 billion this year; by 2050 that price will rise to $1.2 trillion, according to the Alzheimer's Association. BAM would also reduce the amount of care the elderly need. One out of three seniors dies with Alzheimer's or another form of dementia, highlighting the importance of understanding how these illnesses work. By bringing down the price of Medicare, BAM would be paying for itself.

It is a ripe time for human brain mapping; BAM follows on the heels of the Human Connectome Project and the Human Brain Project. The goal of the Human Connectome Project is to build a "network map" that will shed light on the anatomical and functional connectivity within the human brain, as well as to produce a body of data that will facilitate research into brain disorders such as dyslexia, autism, Alzheimer's disease, and schizophrenia. It released its first set of data, the structural and functional images of the brains of 68 volunteers, two terabytes of data in total, this March; despite the hype surrounding the release, little has come of it so far because no one has yet figured out what to do with all the data. The similarity in the goals of the two projects should alert the scientists behind BAM not to make the same mistake and blindly gather data. Luckily, BAM differs from the Human Connectome Project: in seeking to measure the activity of billions of individual neurons simultaneously, BAM is vastly more fine-grained than the Human Connectome Project, which looks at the interconnections between roughly 500 brain areas, and it is accordingly vastly more useful. Gary Marcus, a research psychologist at New York University, says BAM is "entirely unprecedented" and "absolutely necessary." The Human Brain Project is a neuroscience research program dedicated to using supercomputers to simulate the brain and better understand its functions, funded by the European Union for a billion euros over the next decade. Without investment in BAM, the U.S. will lag behind other countries, which are quickly realizing the importance of neuroscience research, and the U.S.'s competitiveness in technology and innovation will decline because of its comparatively small investment relative to other countries.

BAM is an essential program for America's growth as a nation. Economically, it will offset its cost by reducing the price of technology and medicine overall. Scientifically, it will yield crucial results. While at the moment it has no significance to a specific real-world problem, BAM works towards amplifying the very foundation of our grasp of neuroscience and is immensely advantageous as such. Despite its setbacks, BAM is a worthwhile endeavor for the White House to undertake.

Magnetic Resonance Imaging (MRI) uses a magnetic field and pulses of radio wave energy to make pictures of structures inside the body. Even though MRI provides a detailed representation of the activity and structure of a brain, a more specific approach is needed to be able to map individual neurons.

onlinedocturs, Flickr Photo Sharing



The Brain and Music By Eliza Christman-Cohen

At the Stanford University School of Medicine, Dr. Vinod Menon, associate professor of psychiatry and behavioral sciences and neuroscience, and his team of researchers, as well as researchers at Northwestern University, have discovered new information about how the brain perceives the world through experiments using music. These scientists have been able to show that music stimulates the regions of the brain that are associated with keeping track of information and thinking ahead. Dr. Menon and his Stanford colleagues used functional magnetic resonance imaging (fMRI) to produce images that show which parts of the brain are employed in different activities. Music was used to observe the brain's effort to make sense of a constant stream of information being sent to it, which is called event segmentation. The human brain divides information into segments by selecting details about gaps between distinct events. The researchers had ten men and eight women wear noise-cancelling headphones while having their brains scanned in an MRI machine. The subjects were told to listen passively to symphonies composed by William Boyce. Boyce's music was selected for the study for two main reasons. First, it has a style the subjects would be acquainted with, baroque, but it is not well known, as it was important that the participants not be able to anticipate the music beyond what they were hearing. Second, Boyce's music has clearly defined transitions between movements that are reasonably short. It has been noted that the apex of brain activity occurs during the short periods of silence between musical movements.

Ali Eminon, Flickr Photo Sharing


During the experiment, researchers paid close attention to the ten-second period before and after each transition between movements in the music. They recognized two different neural networks, in two separate areas of the brain, that were involved in processing the transition between movements. Additionally, the group noticed that the right side of the brain was dramatically more active than the left side during the transitions. The science behind these observations is that the first network, the ventral fronto-temporal network, is activated when one movement ends and the next one starts. When attention is brought to this change, the second network, the dorsal fronto-parietal network, activates and notifies one's memory. Dr. Nina Kraus of Northwestern University, along with her partners, made another discovery related to music and the brain. They found that the neural connections made during musical training "prime" the brain for human communication. For this reason, musicians are more successful than non-musicians at learning to incorporate sound patterns and changes in pitch, and they tend to have better vocabulary and reading ability. Music training also develops the same neural processes that are frequently lacking in people who have dyslexia or who have difficulty discerning words from background noise. This research concerning music and its effects on the brain suggests that music should be a more integral part of schools' curricula. In addition to being a creative outlet, music plays a central role in building the brain's auditory competence and is thus an important aid in learning.


Hao-Yu Wu et al., Phys.org.

The physical overview and mechanism of the Eulerian Video Magnification process is pictured to the right.

A Digital Microscope: Making the Invisible Visible

Granuflo Lawsuit, Flickr Photo Sharing

By Lily McCarthy

Have you ever wondered what a human pulse might look like if it were visible to the human eye? Recently, the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT developed a novel software program that detects seemingly imperceptible motions (such as those made by the heart) using a mechanism known as Eulerian Video Magnification (EVM). Originally intended to amplify color changes, the algorithm surpassed prior expectations by recording small movements when the method of filtration was adjusted. Employing the technique of motion magnification, the algorithm essentially combines motion and graphics to enlarge the changes in individual pixels over the course of time within a video sequence. The program uses spatial decomposition, or the separation of collected visual components into frequency bands, and temporal filtering. The image is eventually reconstructed in a way that highlights the selected differences in color or motion. Users can further customize this technique to their needs by adjusting the respective experimental variables. For example, to observe the human pulse, researchers choose a limited band of temporal frequencies near the expected heart rate and follow a process of magnification to determine the heart rate. Individuals can then identify the pattern of blood flow throughout the entire body. This technique has the potential to be used as a diagnostic tool for a variety of disorders regarding the heart. Researchers believe that this program may have many other beneficial ramifica-

tions for a myriad of different intellectual fields. In fact, the program was initially intended to ascertain the vital signs of infants in neonatal intensive care units, such as breathing and sufficient blood circulation. In these care units, medical professionals avoid touching the patients to prevent injury and to eliminate the possibility of spreading disease, thereby requiring sensor-free technology. Inspired by this new mechanism, individuals within the healthcare sector are currently investigating the potential benefits the "contactless monitoring" concept may hold for older patients, according to Michael Rubenstein, a graduate student involved in the project. Concurrently, medical researchers hope that EVM will ultimately be implemented as a tool to assist in the laparoscopic visualization of organs within the body. EVM is also able to detect other subtle motions, such as the swaying of a crane at a construction site, the tiny movements of the human eye, and the quivering of a bolt in a building. Thus, EVM has an immense capacity and may be used to prevent accidents in the engineering and manufacturing fields. It has the potential to create better surveillance and lie-detection systems for security organizations such as the FBI. A paradoxically microscopic and macroscopic piece of technology, EVM will continue to impact not only individuals in certain fields, but also the human population as a whole. Individuals will be able to see the world from a new perspective—from the bottom up.
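To give a rough sense of how Eulerian Video Magnification works in code, the sketch below is a heavily simplified, hypothetical Python version of the idea described above: it band-pass filters each pixel's brightness over time and adds the amplified band back to the video. The real CSAIL system also decomposes each frame into spatial frequency bands before filtering; that step is omitted here, and the function name and parameters are illustrative only.

import numpy as np
from scipy.signal import butter, filtfilt

def magnify(frames, fps, low_hz, high_hz, alpha):
    # frames: grayscale video as an array of shape (time, height, width)
    # Temporal band-pass around the frequencies of interest (e.g. ~1 Hz for a resting pulse).
    b, a = butter(2, [low_hz, high_hz], btype="band", fs=fps)
    band = filtfilt(b, a, frames, axis=0)     # filter each pixel's time series
    return frames + alpha * band              # amplify the band and add it back

# Example: exaggerate changes near 60 beats per minute in a 30 fps clip.
# magnified = magnify(video, fps=30, low_hz=0.8, high_hz=1.2, alpha=50)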

The heart pulses illustrated by the graph are the same heart pulses detected and magnified by the EVM mechanism to allow human observation of microscopic movement.
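To make the temporal-filtering step more concrete, here is a minimal sketch in Python. It is not MIT's actual EVM code, and the frame rate, frequency band, amplification factor, and synthetic data are illustrative assumptions: the sketch bandpass-filters the average brightness of a patch of skin around typical resting heart-rate frequencies, amplifies the variation, and reads off a pulse estimate from the dominant frequency.

import numpy as np
from scipy.signal import butter, filtfilt

def magnify_pulse(intensity, fps, low_hz=0.8, high_hz=3.0, gain=50.0):
    # Temporal bandpass filter isolating frequencies typical of a resting pulse.
    b, a = butter(2, [low_hz, high_hz], btype="bandpass", fs=fps)
    variation = filtfilt(b, a, intensity)
    # The dominant frequency inside the band gives the heart-rate estimate.
    spectrum = np.abs(np.fft.rfft(variation))
    freqs = np.fft.rfftfreq(len(variation), d=1.0 / fps)
    return gain * variation, 60.0 * freqs[np.argmax(spectrum)]

# Synthetic demonstration: a faint 1.2 Hz (72 beats-per-minute) flicker buried in noise.
rng = np.random.default_rng(0)
fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
trace = 100 + 0.2 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.2, t.size)
amplified, bpm = magnify_pulse(trace, fps)
print(f"Estimated pulse: {bpm:.0f} beats per minute")

In the real system the same idea is applied to every pixel (or image band) of the video rather than to a single averaged trace.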

19


Defense Advanced Research Projects Agency, as printed in Popular Science

MILITARY TECHNOLOGIES By Will Ellison

Picture of aircraft that hovers like a helicopter but flies like a plane.

Innovative and creative military technologies that may seem far-out and more applicable to science fiction than to the battlefields of the real world are being researched and developed by DARPA, the United States Defense Advanced Research Projects Agency. The fact that DARPA, an esteemed institution whose investments produced technologies ranging from the Internet to the Global Positioning System (GPS), is willing to research these technologies indicates that they are at least possible, if not plausible, for military use. One of the most innovative areas that DARPA is looking into is brain technology: it is participating in a project called the Cognitive Technology Threat Warning System, nicknamed "Luke's Binoculars," an allusion to Luke Skywalker from Star Wars. If the system detects certain patterns in brain waves that indicate the brain has subconsciously recognized a threat, it will alert the soldier immediately instead of waiting for his or her conscious mind to finish analyzing the entire scene. William Schneider, chairman of the Defense Science Board, a panel that counsels the Pentagon's senior leadership,

20

explicated that neuroscience can provide a window into the minds of terrorists. He illuminated the point, stating, "By being able to collect and process a lot of information about individuals that can be leveraged with understanding how the brain operates, there may be things we can do that had not heretofore been possible." Another fascinating program that DARPA is pursuing, in tandem with the Defense Threat Reduction Agency and the Department of Homeland Security, is its Integrated Crisis Early Warning System, which, through computer models, intends to predict political instability and detect weapons-of-mass-destruction networks in foreign nations. Although this is by far one of the most ambitious and questionable enterprises, it is undoubtedly admirable. Many of DARPA's projects involve constructing extraordinary vehicles for military use. The Falcon Hypersonic Technology Vehicle 2 (Falcon HTV-2) program is designed to craft an unmanned rocket-launched aircraft that could fulfill an astonishing goal: fly anywhere in the world in less than an hour. Such a machine would


need to travel at Mach 20, about 13,000 miles per hour, and endure temperatures in excess of 3,500° Fahrenheit. The HTV-2 program has met some success, as DARPA has managed to field and fly two HTV-2s, in 2010 and 2011 respectively. However, both HTV-2s crashed after minutes of flight. Project manager Major Chris Schulz articulated the challenges still hindering the progress of the task. "We do not yet know how to achieve the desired control during the aerodynamic phase of flight," he verbalized. "It's vexing. I'm confident there is a solution. We have to find it." DARPA, along with six other contractors (including Lockheed Martin and Carnegie Mellon University (CMU)), is also working on the Transformer program, which, remarkably, anticipates constructing a vertical take-off-and-landing vehicle that could carry four people. Sanjiv Singh, a CMU research robotics professor, elucidated, "In practical terms…the vehicle will need to be able to fly itself, or to fly with only minimal input from the operator. And this means that the vehicle has to be continuously aware of its environment and be able to automatically react in response to what it perceives." Another vehicle under development, a joint effort of DARPA and Boeing, is the Disc-Rotor Compound Helicopter, a flying machine that aims to amalgamate the best attributes of a helicopter and an airplane. The Disc-Rotor program would enable the machine to alternate effortlessly between hovering like a helicopter and flying like a plane. The rotor would be composed of a central disc with rotor blades extending from it, permitting the Compound Helicopter to hover. These blades could then retract into the disc, allowing the helicopter to fly like a plane using engines housed behind each wing. Besides vehicles, DARPA is pursuing ventures

aimed at developing small robots with myriad functions. The ChemBots program, a collaboration between DARPA and technology company iRobot, is working on constructing soft, flexible robots that could warp their bodies in order to move through minuscule openings and perform covert missions. They would be composed of transitional or "jamming" material, with the properties of both a solid and a liquid, producing the desired elasticity. In addition, the two organizations are attempting to develop Local Area Network Droids (LANdroids). LANdroids would be one-pound, pocket-sized, highly mobile robots capable of providing soldiers credible communications in urban areas by acting as nodes in a wireless communications network. They would be equipped with flipper mechanisms for climbing obstacles. DARPA also has a Nano Air Vehicle (NAV) program, striving to produce miniature, ultra-light air vehicles for military missions. The program has already fabricated a notable prototype, a hummingbird-like robot that can hover and fly. Furthermore, Boston Dynamics and DARPA are working on engineering a "pack mule" robot that would carry gear, such as heavy backpacks, weapons, food, and ammunition, that normally slows down ground forces. The mule would accompany infantry, negotiate obstacles such as rocks and divots without assistance from soldiers, and interpret verbal and visual commands. DARPA's Phoenix Program intends to design small "tender" satellites that could repair damaged satellites in space and extract valuable components from them for recycling. Currently, since communications satellites orbit more than 20,000 miles above Earth, a broken satellite usually has to be replaced by launching a new one, wasting the many parts of the damaged satellite that may still be functional, such as antennae and solar arrays. The Phoenix Program could remedy that. However, before tender satellites can become feasible, new robotics, remote-imaging systems, and robotic tools must be created. In addition, DARPA, in coordination with aerospace company AeroVironment, is working on developing Shrike, a vertical take-off-and-landing unmanned aircraft "small enough to be carried in a backpack." The Shrike, which would be employed for military surveillance, would have four rotors and a high-resolution camera that broadcasts real-time video. These projects aim to protect US national security by aiding the military in maintaining technological superiority. Hopefully, those technologies that are ultimately successful will generate enduring, revolutionary change.

21


Google Glass has a revolutionary lightweight design and comes in many different colors, including blue.

Google

Stuck in Customs, Flickr Photo Sharing

Gadget-tech.org

Google Glass is composed of a tiny computing mechanism attached to the lens of the glasses.

By Jason Ginsberg

Ars Electronica, Flickr Photo Sharing

Head-mounted displays (HMDs) were first made famous in James Cameron's The Terminator, in which the eponymous character tracked down California resident Sarah Connor by scanning crowds with a computer connected to his vision. Fans of The Terminator and science fiction alike are now rejoicing, as Google's X Lab, which specializes in conceptual technologies, has announced Google Glass. Worn like normal glasses, Google Glass, an eyeglass computer that features a heads-up display, adds graphics such as maps for navigation and video feeds for video conferencing to the user's normal vision. Users can also send text messages, take photos and videos, check the weather, receive flight details, translate text, and perform Google searches through Glass. These features are accessed by voice commands, such as "okay, Glass, record a video," spoken while the user tilts his or her head up or presses a special button on Glass' frame. The device can also connect to the Internet through Wi-Fi or the 3G connection in a smartphone. Steve Lee, product director of Glass, explains the reasoning behind Google Glass: "A big problem right now is the distractions that technology causes. If you're a parent — let's say your child's performance, watching them do a soccer game or a musical. Often friends will be holding a camera to capture that moment. Guess what? It's gone. You just missed that amazing game." Google Glass answers this problem by seamlessly merging the physical and digital worlds, bringing technology closer to our senses and keeping the user in the

22

moment. The most recent pictures of Glass depict a design that is undeniably bizarre. Though this design may turn away the more fashion-conscious, demand for the product is already high, marked by more than 20 million views on the promotional YouTube video for Google Glass. So far Google has officially announced five colors for Glass: shale (gray), tangerine (orange), charcoal (black), cotton (white), and sky (blue). According to the New York Times, Google is also negotiating a deal with eyeglass retailer Warby Parker to sell their frames and lenses with Google Glass attached to them. For the prescription-glasses wearers out there, Google Glass can be taken apart, a feature that allows different lenses to be attached to its frame. Currently, Glass is still in beta testing, and for $1,500 the public can obtain a pair through Glass' Explorer Program. Even though no official release date for Glass has been given, it will be manufactured in the United States and is expected to reach the masses by late 2013 or early 2014. Google has not mentioned a consumer price for Glass, but technology and eyeglass experts are guessing it will fall in the range of $200 to $600. Glass is more than just another consumer product; it is the beginning of a technological revolution. While smartphones make the Internet a distraction from the world, Glass makes the Internet a part of the world, enriching human interaction and experience.


Thorium: The Element that Will Save Humanity

By Aditya Ram

By Spacecaveman, Flickr Photo Sharing

We are in the middle of an energy crisis. Our usage of gasoline spews pollutants into the atmosphere, causing catastrophic global warming. However, we have not yet found a solution. "Alternative energy" technologies, often heralded as this solution, are overly expensive. According to Renewable Green Energy Power, an average solar panel costs $1750–$2500 per kilowatt. According to Solar Energy USA, the profits from the average set of panels only start rolling in after 8 to 10 years of use. Solar panels are relatively inefficient compared to gas, producing roughly 200 watts versus 2.7 million watts, respectively. The same problems plague wind, hydro, and geothermal energies: high costs and low efficiency. Nuclear power, however, produces a lot of energy. According to the Energy Information Administration, the Palo Verde plant in Arizona has three reactors with a combined output of about 3,937 megawatts. The problem with nuclear power is that its fuels, uranium and plutonium, are highly dangerous and radioactive. The tragedies of Chernobyl and Fukushima are testaments to the harm nuclear power plants may cause. In the Fukushima Daiichi disaster, 488,000 people were evacuated from the area surrounding the plant. The costs of recovery were massive. What humanity needs is a way to satisfy our massive energy needs with an effective, safe, and efficient

Science and Technology Facilities Council-UK, as printed in Popular Science

This is a picture of a particle accelerator that produces nuclear energy based on thorium, which is remarkably safer and more plentiful than other radioactive elements.

source. Enter thorium. According to the World Nuclear Association, thorium is three times more abundant than uranium, and approximately 5,385,000 tons of it exist across 27 known isotopes. Due to thorium's more complex structure, it is possible to design a breeder reactor, which will continue to produce energy theoretically forever, as it breeds more fissile material than it consumes. While many existing types of reactors can be adapted for thorium, few have been successful and long-lasting. A High-Temperature Gas-Cooled Reactor (HTR) in Peach Bottom, USA, used thorium in concert with highly enriched uranium to produce energy from 1967 to 1974. In Fort St. Vrain, Colorado, another HTR mixed 25 tons of thorium with highly enriched uranium. The plant ran from 1976 to 1989. The problem with both of these reactors was that they were not breeder reactors, so they ran out of fissile material. Another reactor, in Shippingport, USA, used thorium in breeder cells, funneling the fissile material into the reactor core. However, it was a light water reactor, and because highly enriched uranium was not used, the plant could only produce a bare minimum of electricity. It was shut down in 1982. The problem with all the existing thorium reactors was that they used thorium in concert with uranium, so the risk factor was the same as with a conventional nuclear reactor. Pure thorium emits only minor amounts of radiation, not even enough to penetrate skin, and even highly concentrated amounts of thorium do not cause massive damage. The concentration of thorium used in a nuclear reactor would not be enough to lead to increased chances of cancer, and the thorium could be dissipated into the atmosphere with little environmental damage, thus yielding a very low risk factor. Thorium is an ideal source of clean energy in today's world. Harnessing it may become man's next major scientific advance, granting us the ability to innovate without worrying about energy supplies, global warming, or pollution. The possibilities are nearly endless.

23


Hydroelectric Power By Veer Sobti

epSos.de, Flickr Photo Sharing

The colorful wheel generators above, located in wind turbine farms, are painted in distinct colors for each major company. Windmills and wind turbines are often constructed alongside hydroelectric systems for the transfer of energy. Wind power is free and can be incredibly profitable. In the background are many rows of solar panels on top of the Stadium Parking Structure at ASU.

Using water to generate power and make tasks less arduous is an ancient process. It dates back to some of the earliest civilizations on the planet, where it was used for agricultural purposes and to grind wheat. However, in more recent times, hydroelectric power has been used for so much more. Hydroelectric power works by placing generators near a water source like a dam, river, or lake. Turbine blades that are connected to these generators are placed inside the water. The water's flow causes these turbine blades to spin, generating electricity that is collected by the generator. This energy is then sent to buildings to power lights and appliances and to factories to power machines. Despite the simplicity of this process, hydropower constitutes only about 7% of the total power produced by the US, trailing fossil fuel and nuclear power by a large margin. It is for this reason that scientists are working very hard to improve hydroelectric technology. Power from fossil fuels and nuclear power uses valuable and limited natural resources, creates a lot of pollution, and causes waste-disposal problems. Therefore an increase in hydroelectric power could become a promising eco-friendly movement. The advantages of hydroelectric power are that the water needed to generate the electricity is provided free by nature, the maintenance of the energy plants is cheap, the technology is reliable, and the power

source is renewable due to the rain. Most importantly, no fuel is burned, thereby reducing greenhouse gas emissions. Another advantage of hydropower is that it is very cheap, costing about 3 to 5 cents per kilowatt-hour. Therefore it can also be viewed as a way to save money. Despite these advantages, there are also some drawbacks to hydroelectric plants. Hydroelectric plants require a lot of funding to get them started, they are entirely dependent on precipitation to keep the water level high, they can interfere with fish populations, they may change water quality, and they may require the relocation of local inhabitants. Although these drawbacks can be serious, they generally do not occur. In the future, the US, along with many other countries, is looking to increase the use of hydroelectricity in an attempt to lower greenhouse gas emissions. It is predicted that production of hydroelectric power will increase 3.1% each year for the next 25 years. The plan is to expand the use of hydroelectric power, reducing the demand for fossil fuel. For example, using hydroelectric energy to power cars, charge phones, and run large cities and towns are all innovations for the future. These are only a few of the wonderful possibilities hydroelectric power offers while creating a greener environment.

Jackdog2508, Flickr Photo Sharing
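As a rough, back-of-the-envelope illustration of the turbine-and-generator process described above (the flow rate, drop height, and efficiency in this Python sketch are invented for the example, not figures from the article), the electrical power of a hydroelectric plant can be estimated from how much water falls and how far it falls:

def hydro_power_watts(flow_m3_per_s, head_m, efficiency=0.9):
    # P = efficiency * water density * gravity * flow rate * height of the drop
    rho = 1000.0   # density of water, kg per cubic meter
    g = 9.81       # gravitational acceleration, m per second squared
    return efficiency * rho * g * flow_m3_per_s * head_m

# Example: 100 cubic meters per second falling 50 meters at 90% efficiency
print(hydro_power_watts(100, 50) / 1e6, "megawatts")   # roughly 44 MW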

24


toneynetone, Flickr Photo Sharing

Chelyabinsk Meteor

by Kasia Kalinowska

The photo above displays the meteor that streaked through the atmosphere above the Chelyabinsk region in Russia.

It was an average Friday morning for the citizens of Chelyabinsk, Russia on February 15, 2013. Children were settling into their classrooms while adults began their workday throughout the city. Suddenly, at 9:20 a.m., a bright fireball streaked across the sky, blinding pedestrians with its light. Then, the windows blew out. The energy released was estimated to be equivalent to that of 500 kilotons of TNT, or about 2.1 petajoules, making the explosion 20 to 30 times more powerful than the atomic explosions at Hiroshima and Nagasaki. Fortunately, no deaths were reported. However, about 1,200 people, including 200 children, were injured by flying glass shards. A 10-ton burning meteor was the culprit of the cataclysm. It was 10 feet in diameter, and it had been hurtling towards Earth at a speed of about 10 to 12 miles per second, according to the Russian Academy of Sciences. Upon entering the Earth's lower atmosphere at an altitude of 20 to 30 miles, the meteor exploded into meteorites, smaller fragments that were scattered throughout the Chelyabinsk region. The result of the strike was a shock wave that rippled across the area, producing structural damage. According to NASA, a meteor of this size strikes Earth about every hundred years. Although researchers were not surprised by the strike, they were shocked upon finding that the body was made up of dense iron. Researchers are currently comparing the impact of the Chelyabinsk meteor to that of the biggest

meteor impact ever recorded, the "Tunguska Event" of 1908, which also occurred in Siberia. The force of that meteor, which burned up about 5 miles above the Earth's surface, leveled about 80 million trees over an area of approximately 800 square miles. The Tunguska explosion was estimated to be as powerful as a medium-sized hydrogen bomb and several hundred times more powerful than the atomic bombings at Hiroshima and Nagasaki. The Chelyabinsk strike sparked a debate not only in the scientific community, but also in the political community. Some politicians are now vowing to take drastic measures to prevent or adequately prepare for such events in the future. For example, the Russian Federation Council, comprising officials from Russia's space agency, nuclear agency, and astronomical institute, is now pushing for an investment in programs that would produce nuclear weapons designed to deflect or destroy future approaching asteroids. Despite NASA's assessment that the Apophis asteroid will not be a threat, the Federation Council has used Apophis' predicted near miss of the Earth to advocate for the establishment of preventive measures by 2018. Meanwhile, to prepare for future impacts, NASA is developing ATLAS, the Asteroid Terrestrial-Impact Last Alert System, which would provide about a day's warning and an impact location estimate for events similar to the Chelyabinsk strike. With these new developments, future cataclysmic events will hopefully be avoided, preventing unnecessary damage and loss of life.

Paul Pescov, Flickr Photo Sharing

The explosion caused the windows of the buildings to shatter.

25


Cassandi, Flickr Photo Sharing

Simultaneity & Synchronicity

By Abigail Zuckerman

The position of a moving person relative to the position of a still parking meter illustrates the concept of relativity of simultaneity.

The notion of "relativity" may seem complex and unrelated to everyday experiences, but in reality it can be broken down and understood. As people move throughout the day, they change their frames of reference constantly as they change their speed, thereby altering their perceptions of their surroundings. Galileo was one of the first to formulate the idea of relative motion. Simply described, spatial relativity, also referred to as Galilean relativity, occurs when an observer in motion perceives events and timing differently than a stationary observer does because they are in different spatial coordinate systems, or "frames of reference." Spatial relativity is often confused with Einstein's similar theories of relativity, which differ from classical relativity because of Einstein's definition of time. Previously, people believed time to be absolute, but it was discovered that the speed of light could not be constant if that were the case. Einstein solved this dilemma in 1905 when he published his paper, On the Electrodynamics of Moving Bodies, in which he developed his Special Theory of Relativity. Special relativity is, surprisingly, based on only two postulates: the laws of physics are the same in all inertial (non-accelerating) reference frames, and the speed of light in free space is constant. If a car crash happens in Manhattan and another crash happens in the Bronx, is it possible to know whether they occurred at the same time? Using the concept of the relativity of simultaneity, Einstein might say that the two crashes occur simultaneously in certain frames of reference. However, in other frames, the car crash in the Bronx might occur first, or the car crash in Manhattan might occur first. The relativity of simultaneity in physics is the concept that simultaneity – whether two events take place at the same time – is not definite and instead depends on the observer's frame of reference. To begin an explanation of simultaneity and synchronicity, the concept of time must first be understood. Rate is described as a function of distance over

26

time, but this description has no value until Einstein's description of time can clearly be understood. All of our judgments involving time are, in fact, judgments of events that are simultaneous in our frame of reference. To better depict this idea, Einstein gave the example that if someone were to say "[the] train arrives here at 7 o'clock," what would really be meant was: "The pointing of the small hand of my watch to 7 and the arrival of the train are simultaneous events." It might appear, at first glance, that substituting "time" for the position of the hands of a watch would remedy any problems concerning the definition of time, but this will not always be adequate. If a measurement of time in the immediate area surrounding the watch is all that is required, then this definition is indeed satisfactory; however, it will not suffice if time is to be measured between two places, or at any location separated from the watch. Using the speed of light, Einstein determined the definition of synchronization between two clocks in order to establish a common time. According to On the Electrodynamics of Moving Bodies, if a clock is stationed at a point A in space, an observer at A can determine the time in the immediate proximity of A. If another clock is stationed at point B in space, then the same will be true for an observer stationed at B. Assume that between A and B there is only empty space. Thus far, a time for A and a time for B have been established, but no common or shared time between the two points has been defined. If a beam of light originates at point A at a "time A" measured on the A clock (t_A), it will arrive at point B at a "time B" measured on the B clock (t_B). As the beam of light arrives at point B, it is reflected back to point A and will arrive at another time measured on the A clock (t′_A). According to Einstein, clocks A and B are defined as synchronized if t_B − t_A = t′_A − t_B. From this, the following definition for the speed of light can be determined: 2·AB/(t′_A − t_A) = c. The easiest way of explaining simultaneity's complexities uses a theoretical example. This exact


example is similar to an example Einstein himself created: A speeding train car is passing through a station. One observer is standing in the middle of the train, while another is standing on the platform. At the moment that the two observers pass one another, a flash of light is set off in the center of the train. Assuming, for the sake of the example, that the two observers can see the exact instant at which the light comes into contact with the front and back of the train, to the observer on the train, both ends of the train car are equidistant from the source of light, so the light will reach the front and back of the train car at the same time, or “simultaneously.” To the observer on the train, the back of the train will appear to “catch up” with the source of the light, so the light will appear to reach the back of the train before it reaches the front. Thus, in different frames of reference, i.e. the two different rates at which the observers are moving, the moment of contact between the light and the train car may or may not appear to occur simultaneously. These concepts are only an initial window into Einstein’s Special Theory of Relativity. He also created space-time by adding time as a component to the three-dimensional coordinate plane, thereby creating infinite four-dimensional space-time reference frames. This concept is somewhat similar to the infinite three-dimensional reference frames in Galilean relativity. Made palpable by his tremendous contribution to physics, Einstein had ideas that were truly revolutionary.
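To restate the synchronization condition above with concrete numbers (the separation distance in this short Python sketch is invented for illustration; only the logic comes from Einstein's definition), suppose the two clocks sit exactly one light-second apart. A pulse leaves A, reflects at B, and returns to A; the clocks count as synchronized when the outbound and return legs take the same amount of time.

c = 299_792_458.0         # speed of light in meters per second
AB = c * 1.0              # assumed separation: exactly one light-second

t_a = 0.0                 # pulse leaves A (reading on the A clock)
t_b = t_a + AB / c        # pulse arrives at B (reading on the B clock)
t_a_prime = t_b + AB / c  # pulse returns to A (reading on the A clock)

# Einstein's condition: t_B - t_A must equal t'_A - t_B
print(t_b - t_a == t_a_prime - t_b)   # True -> the clocks are synchronized
print(2 * AB / (t_a_prime - t_a))     # recovers c, matching the definition above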

Army 1987, Wikimedia Commons

The concept of relativity of simultaneity can be organized in mathematical terms, as pictured above.

A spiral staircase is sometimes used to illustrate a related relativistic effect, gravitational time dilation: a clock higher up on the staircase runs very slightly faster than a clock below it.

Zero One, Flickr Photo Sharing

27


Planck Satellite

By Dan Yahalomi

Europeanspaceagency

Planck has discovered a bridge of hot gas that connects galaxy clusters Abell 399 (bottom) and Abell 401 (top). The galaxies are about a billion light-years from Earth and the gas bridge extends approximately 10 million light years.

Hovering about 930,000 miles (1.5 million kilometers) from Earth from 2009 to 2013, the Planck Satellite made many key discoveries about the history of our universe. The satellite was able to find formerly undiscovered cosmic realities because it operated at longer wavelengths than previous models. Planck has opened up a new chunk of frequency space – a new window that has never been explored. Opening up a new frequency window is roughly the equivalent of finding a new continent, and the kinds of objects found there can, in principle, be radically different. Think for a moment about the discovery of Australia: who would've expected kangaroos? One key discovery of Planck was its exact measurement of the temperature fluctuations throughout the visible universe in the relic radiation often called the cosmic microwave background, or CMB. According to the Big Bang Theory, the Universe was once incredibly dense and hot. After 13.7 billion years of expansion and cooling, remnants of that radiation still remain in the form of microwaves, which appear today as a nearly uniform background at about 2.7 Kelvin. The existence

28

of the CMB, first observed by Arno Penzias and Robert Wilson in 1964, is key to scientists' belief in the Big Bang Theory because the Big Bang Theory is the only theory of the origin of the universe that accounts for this excess radiation. In addition, using the data from the Planck Satellite, cosmologists have compiled the first almost-complete sky image of the distribution of dark matter in the universe. In order to do so, the satellite observed the slight change in the path of photons due to the gravitational potential of cosmic structures, which are composed largely of dark matter. Planck has also been able to discover certain stellar nurseries that had previously been too cold to be seen by other satellites. Because it is able to work at longer wavelengths in the infrared, Planck is seeing this very cold matter. There are literally thousands of these stellar nurseries visible in Planck data, one of the most interesting discoveries of the early-release catalogue. The universe is truly an incredible, changing, and expanding object, and amazing projects like the Planck Satellite are opening the door to its history and reality.
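As a quick, added check on why this relic radiation shows up as microwaves (a standard textbook calculation, not something taken from the Planck data themselves), Wien's displacement law gives the peak wavelength of a 2.7 Kelvin blackbody:

# Wien's displacement law: peak wavelength = b / T
b = 2.898e-3    # Wien's displacement constant, meter-kelvins
T = 2.7         # temperature of the cosmic microwave background, kelvins
print(b / T)    # about 1.1e-3 meters, i.e. roughly a millimeter -- microwave territory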


PolicyMic

SPACE DEBRIS By Jeffrey Weiner

Since ancient times, the beauty and infinite vastness of space have inspired people to explore its mysteries. More recently, scientific innovations have given humans opportunities to enter space and study it from within. In 1957, Sputnik I became the world's first artificial satellite. Only twelve years later, Neil Armstrong landed on the moon. Since Sputnik I, an additional 2,500 satellites have been launched into space. However, the very objects that humans have sent into space have the potential to set back space exploration for multiple generations. As the number of objects in space has increased, so has the amount of space debris, useless objects in orbit around Earth. Since the orbits of space junk cross those of spacecraft, there is a risk that the debris may collide with the spacecraft. Therefore, as more objects are sent into space, the

possibility of a collision increases. There exists a threshold density at which debris is being created more quickly than it can be removed. Beyond this critical density, an out-of-control chain reaction can occur in which debris destroys all the objects in orbit, including important satellites. Recent events have magnified the space debris threat and increased the risk of space debris reaching its critical density. A Chinese anti-satellite test in 2007 resulted in a massive debris cloud of approximately 2,500 fragments. The cloud, 2,292 miles wide, covers all of low Earth orbit and "will be very long-lived" according to NASA's Orbital Debris Program Office. In 2009, Russia's Cosmos satellite collided with the Iridium communications spacecraft, creating over 500 pieces of space junk. The incident has been referred to as "unprecedented" because never before had two intact spacecraft smashed into each other. Many groups work hard to lessen the threat of space debris. The United Nations has developed a space debris mitigation policy. Horace Mann student Adam Zachar (12) has written a paper titled "A Study of the Most Efficient Altitude and Mass of Space Debris to Target to Maximize Mass Removed from Operational Orbit," in which he discusses which mass and altitude debris to target for optimal efficiency of mitigation. On March 20, a Senate Committee met to discuss possible ways to mitigate threats of collision from satellites near Earth. Space debris is certainly a serious and dangerous threat which, if not dealt with, can prevent space exploration for years. However, mitigation efforts can still effectively handle the problem.

29


Brief History of Life Diversification and Oxygen Liberation on Earth

By Ricardo Fernandez

Nat Tarbox, Flickr Photo Sharing

The picture above captures a mix of moss and cyanobacteria, a blue-green algae.

The evolution of complex eukaryotic cell life after the Late Heavy Bombardment (LHB) is remarkable when one considers the extreme depletion of microbial life that the LHB caused on the earth's surface. The LHB is a hypothetical event in which multiple cataclysmic meteors struck the Moon, Earth, Mercury, and Venus about 4.1 to 3.8 billion years ago. A large amount of microbial life would have been wiped off the earth due to sterilization from the high temperatures of the meteor strikes. Also, complex eukaryotic cell life would have been almost unsustainable considering the out-gassing, or release, of toxic volcanic gas into an atmosphere that already had little to no oxygen. Some process must have occurred among the surviving microbial life forms to allow for the re-oxygenation of earth, which would have led to the evolution of more complex eukaryotic life. Undoubtedly, some thermophilic microbes survived the great heat of the meteors and were able to incubate using an energy source. These thermophilic microbes were located underwater and were provided the energy and resources they

30

needed through the carbon dioxide and various metals provided by the hydrothermal vents underwater after the LHB. However, with the emergence of early photosynthesis, these microbes were liberated from the hydrothermal environment and began to make use of not only carbon dioxide but also the sun, an extremely abundant source of energy. Eventually, from the combination of carbon dioxide, water, and sunlight, oxygen-producing photosynthesis became a significant function in these microbes. As a result, free oxygen began to slowly spread around the earth. The microbes, in colonies and multi-layered sheets, formed the various types of microbial mats that slowly became more complicated and allowed for the more efficient and abundant re-oxygenation of earth. Stromatolites slowly evolved from the compacted layers of sea sediment and microbial mats. Bacteria located inside these stromatolites are prokaryotic cyanobacteria. The stromatolites were cyanobacterial structures that allowed for the production of oxygen through photosynthesis in the cyanobacteria. In the present, cyanobacteria account


Cyano CBF, Flickr Photo Sharing

The picture above contains filamentous cyanobacteria, the blue-green algae, under a microscope.

for about 20-30% of the photosynthetic activity on Earth. Clearly their presence was necessary for various life processes; however, there were still many complications before the re-oxygenation process was complete. One of the problems encountered was the premature microbes: when these microbes are exposed to oxygen, they suffer toxicity or death. The sudden abundance of free oxygen, ironically, caused the death of many microbes. For oxygen-tolerating microbes, however, metabolic rates were greatly increased by the oxygen proliferation. To put the importance of oxygen in perspective, anaerobic respiration (in the absence of free oxygen) produces a net yield of only approximately 2 ATP per molecule of glucose, while aerobic respiration (in the presence of free oxygen) produces a net yield of approximately 30-32 ATP per molecule of glucose. The introduction of oxygen to the atmosphere was therefore necessary for the efficiency and abundance of many biological processes. The increase in ATP production of aerobic organisms also allowed for the support and evolution of much larger and more complex eukaryotic organisms. More evidence of the diversification of life on earth comes from the Cambrian Substrate Revolution. Many bottom-dwelling animals and

microbes simply grazed on the microbial mats that lined the floor of the sea. After oxygen was liberated from the microbial mats, there were two layers of sea floor separated by the mats: the top, oxygen-rich layer, and the bottom, anoxic, hydrogen sulfide-rich layer. Therefore, as organisms began to burrow vertically under the microbial mat, oxygen was exposed to the sulfate-reducing bacteria that released hydrogen sulfide. The oxygen killed most of these sulfate-reducing bacteria, forcing the bacteria and their hydrogen sulfide emissions to the lower layers of the seas. Oxygen therefore occupied a deeper and larger part of the sea, which allowed for the inhabitation of a wider range of organisms in the now oxygen-rich ocean. This revolution was essentially one of the last steps in the diversification of life on the recently oxygenated earth and growing sea floor. The introduction of oxygen into the atmosphere of the earth was essential for the development and evolution of more complex eukaryotic organisms after the LHB, which could potentially have wiped out all life. One thing is for sure: there seems to be a remarkably coincidental, almost linear path of evolution and growth running through the thermophiles, microbial mats, cyanobacteria, photosynthesis, and the oxygenation of earth.

31


The Model City By Lauren Futter

Isaac.borrego, Flickr Photo Sharing

The skyline of Tucson, Arizona at dusk shows the vast movement of population toward cities and the electrical cost of powering one of those cities.

As people all over the world become more connected, it feels as though the world revolves around economic epicenters such as New York City, London, and Tokyo. However, this feeling of change is occurring physically as well as through internet and phone connections. Over half the world's population lives in cities, and by 2050, over two-thirds of the world's population is expected to live in an urban environment. While the migration of more people to cities has led to increases in GDP and advancements in technology, it has also reawakened traditional problems of urbanization such as pollution and poor water supply. As society continues to move towards becoming faster and more efficient, these changes have often worsened such environmental issues. In order to counteract these problems, urban planners have started to create designs for eco-cities, or cities that are ecologically sustainable. Through efforts to create these eco-friendly cities, urban planners can reduce problems such as poor water supply and pollution. With the increase of floods, tsunamis, and hurricanes in the past couple of years, it seems impossible that there could be a water shortage. Increasingly, however, cities such as Las Vegas and Los Angeles have faced detrimental droughts that have compromised the cities' ability to pipe water to their residents. Droughts are very difficult to prevent and nearly impossible to stop through manmade means. However, through the

32

use of eco-friendly urban planning, their effects can be ameliorated. Eco-city planners have proposed several measures to mitigate some of the effects of drought, such as low-flow fixtures and rainwater harvesting. These fixtures can be used in shower heads and toilets to reduce the amount of water used while keeping the water pressure constant. In fact, compared to a standard toilet, which uses 3.5 gallons of water per flush, a low-flow fixture uses only 1.6 gallons per use, allowing cities to save water for drinking and for planting. In addition to these fixtures, countries such as India have begun to build rain-harvesting systems. Rain harvesting is the process of capturing rainwater and pumping it into an adjacent building instead of allowing it to flood the streets and eventually become groundwater. To make this system work, city planners can suggest buildings with slanted roofs that cause rainwater to run off into a gutter attached to the side of the building. The water then goes to an underground containment center, where it is purified and pumped into the building. Rainwater harvesting further prevents wasted water, especially in the event of a drought, and can allow more people access to water, particularly in developing urban environments. In cities such as Dongtan, China and Masdar City,


Abu Dhabi, there is little to no carbon footprint. These municipalities exist by relying solely on solar, wind, and biomass power. Although all metropolitan areas should aspire to be just as green, in most cases they are too reliant on carbon-based fuels such as fossil fuels to change now. In order to reduce that carbon footprint, urban planners have suggested the creation of urban forests. Urban forests are concentrated areas of plant life around a metropolitan environment. Through the creation of these forests, cities can reduce the amount of smog because trees use carbon dioxide as part of the photosynthesis process, which has the potential to solve a large part of the carbon emission problem. Buildings account for 45% of carbon emissions in cities, according to the Ashden Awards, an institution which gives awards to innovative scientists. In order to alleviate this problem, scientists have suggested insulating

buildings more heavily in order to reduce the use of heaters. Adding insulation is a simple yet effective way to reduce carbon emissions. In addition, scientists suggest adding solar panels to buildings in cities. While they are not as efficient as using gasoline to generate energy, they are one more step towards increasing urban energy efficiency. By using these two solutions, cities can greatly reduce their carbon impact. Our current model of cities is outdated. In order to accommodate a growing population, we must act to ensure that cities are sustainable and can last. Through insulation, solar panels, rain harvesting, and low-flow fixtures, we can ensure that there is a better tomorrow for people all across the planet.
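As a rough illustration of the rainwater-harvesting idea described above (the roof size, rainfall, and runoff coefficient in this Python sketch are invented for the example), the volume a building can capture is approximately roof area times rainfall times a runoff coefficient:

def harvestable_liters(roof_area_m2, annual_rainfall_mm, runoff_coefficient=0.8):
    # One millimeter of rain falling on one square meter of roof is one liter of water.
    return roof_area_m2 * annual_rainfall_mm * runoff_coefficient

# Example: a 200 square-meter roof in a city that receives 600 mm of rain per year
print(harvestable_liters(200, 600), "liters per year")   # 96,000 liters, about 25,000 gallons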

imredubai, Flickr Photo Sharing

The construction of Masdar City shows rows upon rows of solar paneling being laid out.

33


Daneil1977, Flickr Photo Sharing

The Big...

Leap

The original Giant Leap, which must be recreated for the modern scientific revolution.

By Josh Siegel

Close to half a century ago, President John F. Kennedy declared, “We choose to go to the moon in this decade.” In order to put a man on the moon, mankind had to have a scientific revolution. We had one. Scientists heavily funded research and emphasized teaching evidence-based

science in schools. However, nearly fifty years later, the Space Race is over and that scientific revolution has ended. But does it ever need to end? Scientific innovation has driven our country, from the light bulb to the Internet. We split the atom and

"Science funding will be cut by at least 50 billion dollars over the next five years, according to the AAAS."

34

made the airplane. However, if we want to continue making cutting-edge scientific discoveries, we must have a new scientific revolution. We need a second Giant Leap for humankind, a leap that can only be launched by a new investment in science. America must have a cultural shift; more people must accept evidence-based science. According to the American Association for the Advancement of Science (AAAS), in "the past two years, federal nondefense [research and development] has declined by 5 percent, after a largely static decade." These grim statistics


will look even worse if the upcoming budget sequester happens, cutting science funding by at least 50 billion dollars over the next five years, according to the AAAS. Instead of slashing research funding, the American government needs to make a massive new investment in science. President Barack Obama said in his State of the Union address, "Now is the time to reach a level of research and development not seen since the height of the Space Race." The student movement for a second Giant Leap is calling for one trillion dollars in funding over the next decade, and it will not be stopped. Some question whether it is fiscally responsible to spend one trillion dollars on science while the American working class struggles to pay off existing debt. Funding science is not only fiscally responsible but also a moral imperative. As President Obama noted in the State of the Union, every dollar that was spent on the Human Genome Project has created $140 for our economy. Scholars estimate the return on investment in scientific innovation to range from 30% to over 100%. When Congress crafts a budget, it should recognize this return on scientific investment. In addition to increased funding, America's view of science must undergo a revolution. Louisiana and Tennessee are teaching unscientific "alternatives" to evolution, the origin of the Earth, and climate change, a practice allowed by state law. Other states may soon follow suit. Even politicians like Rep.

Paul Broun, a member of the U.S. House of Representatives' Science Committee, recently called evolution, embryology, and the Big Bang Theory "lies straight from the pit of hell." This American perception of science must change fundamentally, and legislation promoting science denial must end. Obama agrees, calling on Americans to "believe in the overwhelming judgment of science." The American people will face unprecedented challenges in the coming years, from climate change to meteors like the one that recently exploded over Russia. Science is what will allow mankind to overcome these challenges and create a brighter future of new technology being harnessed on a massive scale.

Widdowquinn, Flickr Photo Sharing

The Book of Life: the mapped human genome can show us the benefits of a scientific revolution.

35


The Mona Lisa

Werwil, Wikimedia Commons

By Sam Stern

The Mona Lisa is a complex, sophisticated, and inscrutable painting. Numerous art historians have attempted to analyze the portrait, but none have ever discovered its secrets. Despite all the mystery found in it, one question is always asked among lay people and experts alike—who is this mysterious person?

Unlike da Vinci's other portraits, which always contained some clue as to who the subject was, the Mona Lisa is unidentifiable. For example, in a portrait of Cecilia Gallerani, she is seated with an ermine, which was her husband's emblem. In the Mona Lisa, however, da Vinci offers no clues whatsoever.

While there are a few theories on who the subject of the famed painting is, the most widely accepted candidate is Lisa Gherardini, the wife of a wealthy silk merchant, as discovered by researchers at the University of Heidelberg. Historians believe the Mona Lisa is a portrait of Gherardini because da Vinci referred to the painting as la Gioconda; La Gioconda was the married name for Gherardini, lending credence to this theory. A team of archaeologists, led by Silvano Vinceti, has been trying to excavate her remains from under the ruins of an old Franciscan convent. They unearthed what they believe to be del Giocondo, as reported in an article from Fox News. While the remains have not been confirmed yet, a series of tests will hopefully corroborate the archaeologists' claims.

The Mona Lisa case will be examined and verified using forensic techniques dealing with skeletal excavations. As opposed to the established study of forensic pathology, which determines the cause of death by analyzing a human corpse, there are three principal methods used in this type of forensic investigation, an inquiry that combines archaeologists, sculptors, and scientists. These methods include DNA matching, forensic facial reconstruction, and isotope analysis.

There are two main goals of the Vinceti investigation. Firstly, scientists will try to identify the remains of the body by matching the DNA of Giocondo to relatives, thus determining if the skeleton is in fact hers. If it is, they will then try to recreate her face using a method known as forensic facial reconstruction. This face will be matched up to that of the Mona Lisa, and since da Vinci's paintings were anatomically correct, the reconstructed model of Giocondo should be very similar to the portrait's subject. As Vinceti said, "Once we identify the remains, we can reconstruct the face, with a margin of error of 2 to 8 percent." The hope is that this computer-generated, or sculpted, face will yield a visage similar to that of the Mona Lisa. This method of forensic facial reconstruction is based on the structure of the bones. For example, the corners of our lips are where the canine teeth are. Given this information, mouth width can be determined. Similarly, the wearing down of the enamel of the teeth can indicate the thickness of the lips. The location of the eyes, the shape of the nose, and the broadness of the forehead are other features that can also be reconstructed.

However, according to an article in the Huffington Post, "Researchers have used medical scanning technology to create a three-dimensional image of a living person's skull, but not even people's own family members are able to reliably recognize them. We are more likely to identify one another through features that forensic anthropologists haven't yet figured out how to dependably reconstruct: skin color, eye color, hair, wrinkles, and characteristic facial expressions." In addition, the article stated that Giocondo may not have some of her teeth; therefore her mouth, the most easily recognizable feature in the painting, would be unable to be reconstructed.

Isotope analysis is yet another technique used in association with skeletal remains. The technique searches for isotopes, forms of the same element with different numbers of neutrons, and attempts to link the amount of certain isotopes with a person's lifestyle. According to the Encyclopedia of Food and Culture, the amount and type of food that a person ate and the region he or she is from can be deduced by analyzing trace isotopes in his or her skeletal remains. For example, the carbon signature of C4 plants indicates a diet of domesticated plants such as grains and sugarcane, while that of C3 plants in the bones signals the consumption of plants grown in temperate regions. Additionally, nitrogen-15 is an indicator of protein coming from animals. Isotopes can therefore be used to find the region someone is from and even identify social class, as higher classes tended to have more access to protein than lower-class people, resulting in a higher concentration of nitrogen-15 in wealthy individuals' bones.

While the field of forensic pathology is rather well-established, skeletal forensic investigations are new and evolving. These investigations continue to develop, with the potential to serve as a helpful tool for criminal investigations and research experiments like the search for the Mona Lisa's identity.

Cantus, Wikimedia Commons

The Mona Lisa's face, pictured above, is crucial for the process of identifying her through methods like facial reconstruction. This image shows her face in great detail.

36


RESEARCH:

PAIGE BURRIS

Last summer, I was an intern in a breast cancer research lab at Memorial Sloan Kettering Cancer Center (MSKCC) in the Human Oncology and Pathogenesis Intern Program. Breast cancer is a type of cancer that originates in breast tissue and is the most common invasive cancer in women. Several cell lines have been developed to facilitate research on breast cancer. These cell lines are categorized into many different sub-types defined by the expression of different surface markers. Broadly, we can categorize breast cancer cell lines as being estrogen receptor positive (ER+) or estrogen receptor negative (ER-). Estrogen receptors are hormone receptors expressed by breast cells. Estrogen, a steroid-based hormone, binds to active estrogen receptors, leading to downstream signals that stimulate growth. Estrogen is necessary for normal development and growth of breasts but may play a role in causing a cell to become cancerous by over-stimulating growth. Drugs that interfere with the estrogen pathway may potentially be used as a treatment to hinder tumor growth. Tamoxifen is an estrogen analog that works by binding to estrogen receptors and competitively blocking estrogen from binding, thus preventing signaling. Fulvestrant is an estrogen receptor antagonist that causes the receptor to degrade. Understanding the effects of these drugs is important for future clinical application in patients with breast cancer. With that known, I conducted many proliferation assays and growth curves to investigate the dependence of growth on estrogen in various breast cancer cell lines and how drugs that interfere with the estrogen pathway, such as Tamoxifen and Fulvestrant, can be used to hinder tumor growth. It was fascinating to apply the concepts from my science courses to actual research. In addition to performing innovative experiments, I met regularly with my team to discuss my methods and findings, and attended lectures given by Principal Investigators. At the culmination of my internship, I presented my data at a formal poster session attended by MSKCC researchers and lab personnel. It is rewarding to know that my work has the potential to have a positive effect on people's lives. My research bridged my desire to help others with my passion for science. Using the skills I have acquired, I hope to continue to participate in scientific research.
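For readers curious what a growth-curve analysis can look like in practice, here is a generic Python sketch with made-up cell counts (it is not data or code from the internship): an exponential model is fit to counts from an untreated and a drug-treated culture, and the resulting doubling times are compared.

import numpy as np
from scipy.optimize import curve_fit

def exponential(t, n0, rate):
    # Simple exponential growth model: N(t) = N0 * exp(rate * t)
    return n0 * np.exp(rate * t)

days = np.array([0, 1, 2, 3, 4, 5], dtype=float)
control = np.array([10000, 16000, 26000, 41000, 66000, 105000], dtype=float)
treated = np.array([10000, 12000, 15000, 18000, 22000, 27000], dtype=float)

for label, counts in [("untreated", control), ("drug-treated", treated)]:
    (n0, rate), _ = curve_fit(exponential, days, counts, p0=(10000.0, 0.5))
    print(f"{label}: doubling time of about {np.log(2) / rate:.1f} days")

A slower doubling time in the treated culture is the kind of signal such assays look for when testing whether a drug hinders growth.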

37


RESEARCH:

ROYA MOUSSAPOUR

I decided to build an electric violin this year through the Independent Study program in order to bridge two interests of mine, physics and music. I’ve been playing violin since I was six and more recently have found a strong interest in physics, so I decided that combining the two would serve as a great topic for a project. I started out by studying the sound production of both acoustic and electric violins during first trimester. I began by studying the vibrations of strings; to do this, I set up a single string in the physics lab connected to a wave driver. By varying the frequency at which this string vibrated, I was able to see the way that strings produce harmonics. I then went on to compare four different “instruments” and their sound outputs. In the physics lab, I recorded my acoustic violin, an electric violin I already owned, a flute, and a tuning fork. I compared their wave patterns as well as their frequency spectrums to see which instruments had the most harmonics and in which ranges these harmonics were. Through this research, I was able to compile a strong foundation for understanding the sound production of a violin that would then help me in the construction of my own electric violin. Throughout second trimester, I worked on a combination of the construction of the violin and research into electrical pickups. I was very lucky that Mr. Laucharoen not only allowed me to use his woodshop for the construction, but also was willing

38

to help me at any point if I didn't understand how to use a tool or create a specific piece. Within the basic construction, I had many decisions to make. The biggest decisions were which material to use for the instrument, what the violin would look like, and which pickups to use. I ended up choosing wood for the instrument, as the wood would help with the resonance of the instrument and would be easiest to use for the actual physical construction. I was able to get very creative with the design of the instrument because an electric violin's sound does not depend on the shape of the instrument as much as an acoustic violin's does. I originally wanted to use magnetic pickups with my violin, but I wasn't able to find magnetic pickups that would fit on an electric violin (as they are usually used with guitars). At the beginning of third trimester, I finished the violin and began using it for tests in the physics lab. I'm now doing many cross tests with the finished electric violin, my other electric violin, and my acoustic violin. I'm looking to find differences in sound output that could be based on the material of the instrument, the strings on the instrument, or the method of amplification. Regardless of whether my research this trimester leads me to any interesting discoveries about electric violins, I'm so glad that I undertook this project, as it's taught me so much about the construction, acoustics, and sound production of violins.
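A small worked example of the string harmonics studied in the first trimester: for an ideal stretched string, the harmonic frequencies are integer multiples of the fundamental, f_n = (n / 2L) · sqrt(T / mu). The string length, tension, and the resulting 440 Hz fundamental in this Python sketch are illustrative assumptions, not measurements from the project.

import math

L = 0.325              # vibrating string length in meters (assumed)
tension = 60.0         # string tension in newtons (assumed)
# Pick the linear density so that the fundamental lands on A440.
mu = tension / (2 * L * 440.0) ** 2

for n in range(1, 5):
    f_n = (n / (2 * L)) * math.sqrt(tension / mu)
    print(f"harmonic {n}: {f_n:.0f} Hz")   # 440, 880, 1320, 1760 Hz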


RESEARCH:

MICHAEL HERSCHORN

Last year in AP Biology we discussed a research paper regarding artificially induced evolution of multicellularity in microscopic yeast. I mentioned it offhandedly to my Physics teacher, who immediately encouraged me to undertake a replication of the research. With his support, I set out to find the resources to begin such an experiment. I petitioned the school to acquire new equipment, supplies, and yeast samples while devising a unique project. My original research is an attempt to mate the yeast after they become multicellular. If the different mating types are no longer sexually compatible, then they are considered a new species. With my AP Biology teacher as a mentor, I began an experiment selecting larger and heavier yeast cells after mixing the liquid samples and letting the cells settle. By transferring small volumes of the samples from the bottom of the test tubes, I chose just the best sinkers. Through this process, I hope to cause multicellularity similar to that of the original experiment, in which the yeast became snowflake-like growths of cells. I have continued this research through my senior year, and I am in the process of finishing. Next are time-lapse videos of cell growth and a qualitative mating assay.

One of the most significant evolutionary adaptations to have occurred on earth was the development of multicellular organisms, a change allowing cell-type specialization and greater biological complexity. Recent research has shown that subjecting the unicellular eukaryote Saccharomyces cerevisiae to gravitational selection over a relatively small number of generations results in the evolution of multicellular clusters with apparently differentiated cells (Ratcliff et al. 2011). An outstanding question is whether this represents evolution of a new species. According to the biological species concept, individuals that produce viable, fertile offspring are of the same species (Campbell, 2008). Saccharomyces cerevisiae is an ideal model for testing evolution of a new biological species, as it can reproduce either asexually or sexually, through mating of the two haploid types, "a" and "alpha." My study aims to determine if, after gravitational selection over many generations, haploid multicellular organisms will evolve that are incapable of mating with haploid parent strains. Such a change would, according to the biological species concept,

constitute a new species. To select for multicellularity, each unicellular yeast sample was allowed to settle after brief vortexing. Small volumes from the bottom of each test tube were transferred to new test tubes, and this process was repeated. To test for a reproductive change, a qualitative mating assay in an adenine-tryptophan dropout medium of two genetically modified strains, HA1, the adenine-requiring "a" type, and HBT, the tryptophan-requiring "alpha" type, will be performed. If the mating assay shows growth, the cells will have been able to mate, as the two genotypes would complement each other. If not, the cells will no longer have been capable of sexual reproduction and will therefore be a new species. At present, 89 rounds of selection transfers have been performed, and 40x optical images of the samples show signs of adaptation – the cells are larger and clumped together. Fully asexual descendants are evidence of the appearance of a new species. This result has strong implications for the simplicity with which new species can arise, while increased genetic diversity can lead to novel characteristics and uses.

39


H O R A C E M A N N ’ S P R E M I E R S C I E N C E P U B L I C AT I O N • M A Y 2 0 1 3

40

