Spectrum
Horace Mann’s Premier Science Publication • April 2012
Editor’s Note: Appreciating Technology
An artist laid out speakers of all sizes and volumes in an auditorium. On top of the speakers he placed a thin platform to protect them from the experiment at hand, and along the platform he placed puddles of paint in dozens of colors. He then connected an electric guitar to the speakers and let it roar; the sound waves it generated made the puddles of paint splatter viciously, as seen above. He went on to play other kinds of music (rock ’n’ roll, classical, jazz, loud, soft, psychedelic) to experiment with the “speaker art” that came about.
Dear Spectrumites! Kings had taste testers who would sample their food before the kings themselves did, in case it had been poisoned. While most of us don’t have the luxury of a taste tester, we all take many precautions when eating, because food poisoning is not out of the ordinary. E. coli, one of the worst food and water contaminants, can easily find its way into our meals and has led to many mass food-poisoning scares, including one at Taco Bell and a recent national cantaloupe contamination. Wouldn’t it be great to scan your food before you eat it, to catch possible E. coli contamination? Scientists at the University of California, Los Angeles (UCLA) have developed a cellphone attachment that scans your food and uses fluorescence imaging to check for E. coli.
In the past decade, technology has come to surround us, leaving us helpless without it. While many may think that we should take a step back from technology and try to appreciate a simpler life, it is also important to acknowledge the numerous ways in which it has changed society for the better. In this issue, you will find articles on crowdsourcing technology, which has been used to save lives in national disasters; on a bead that can be placed in the body to track tumors in cancer patients; and even on the technology behind systems like GPS and cell phones, which we use every day. Stop and think about your life without technology. For me, it’s pretty hard to imagine. Appreciate how technology has influenced your life, and think of how we can use it to help others.
Ambika Acharya
Editor-in-Chief
NEUTRINO
It’s been in the news a lot recently, but what exactly is it? By Kundan Guha PAGE 8
THE HIGGS BOSON
By Amit Chowdhury PAGE 10
ROBOTICS
AUTONOMOUS CARS
Imagine a world where cars drive by themselves. By Victor Wang PAGE 14
ROBOTS TO FIND LIFE ON MARS
New robots are going to Mars to help us find life there. By Teddy Reiss PAGE 16
LCR: LOGIC CIRCUITS & ROBOTICS
A personal reflection on the school’s robotics course. By Yang Fei PAGE 18
WHERE TO NEXT? THE TECHNOLOGY BEHIND GPS
By Juliet Zou PAGE 20
LEDS TO LIGHT UP THE WORLD
By Michael Herschorn PAGE 22
TOUCH SCREEN TECHNOLOGY
How does the screen on your iPhone work? By Deepti Raghavan PAGE 34
ACCELEROMETERS
By Jay Moon PAGE 35
Robots are the future of our society, whether we like it or not. Whether they clean our houses, navigate through space, or, as now seems possible, drive our cars, they are everywhere. Movies and the media often depict them in a dark light, eventually resulting in the breakdown of society. In reality, robots simply make life easier for us.
Our Mission: To encourage students to find topics in science that interest them and move them to explore these sparks. We believe that science is exciting, interesting and an integral part of our futures. By diving into science we can only come out more knowledgeable.
CROWDSOURCING By Teddy Reiss PAGE 6
NEUROMORPHIC TECH By James Apfel PAGE 26
TRACKING TUMORS By Amanda Zhou PAGE 28
ACUPUNCTURE By Joanna Cho PAGE 29
ELECTROMYOGRAPHY By Sam Ginsberg PAGE 30
BIOMEDICAL ENGINEERING By Deepti Raghavan PAGE 32
INDUCTIVE CHARGING By Jay Palekar PAGE 33
Horace Mann
SPECTRUM
Ambika Acharya
Editor-in-Chief
Tessa Bellone
Aramael Pena-Alcantara Production Director
Jay Moon
Junior Layout Editor
Olivia El-Sadr Davis Copy Editor
Justin Bleuel Michael Herschorn Jay Palekar Deepti Raghavan David Zask Juliet Zou Junior Editors
James Apfel Joanna Cho Amit Chowdhury Yang Fei Lauren Futter Sam Ginsberg Kundan Guha Mihika Kapoor Alex Kissilenko Teddy Reiss Victor Wang Amanda Zhou Staff Writers
Dr. Jeff Weitz Faculty Advisor
Spectrum is a student publication. Its contents are the views and work of the students and do not necessarily represent those of the faculty or administration of the Horace Mann School. The Horace Mann School is not responsible for the accuracy and contents of Spectrum, and is not liable for any claims based on the contents or views expressed therein. The opinions represented are those of the writers and do not necessarily represent those of the editorial board. The editorial represents the opinion of the majority of the Editorial Board. All photos not credited are from creativecommons.org. All editorial decisions regarding grammar, content, and layout are made by the Editorial Board. All queries and complaints should be directed to the Editor-in-Chief; please address these comments by e-mail to hmspectrum@gmail.com. Spectrum recognizes an ethical responsibility to correct all its factual errors, large and small, promptly and in a prominent reserved space in the magazine. A complaint from any source should be relayed to a responsible editor and will be investigated quickly. If a correction is warranted, it will follow immediately.
By Teddy Reiss
Crowdsourcing
Disaster-response technology has been very useful in speeding up responses to the recent disasters in Japan and Haiti. To respond to disasters, organizations have created software that takes the task of collecting data and outsources it to a crowd. Among other things, this has allowed maps of damaged areas to be created electronically in a short time.
Crowdsourcing is a way of outsourcing a task to the general public. In Japan and Haiti, crowdsourcing was used to collect data, some of it gathered by survivors inside the disaster area. Because this data can be collected by anyone, the more people who work on the task, the more data can be gathered. The data is then used for disaster mapping and for coordinating the relief effort, helping agencies figure out which areas need what kind of help. Crowdsourcing also allows for the collection of much more data than the agencies could gather by themselves.
Crowdsourcing was used in Haiti after the 2010 earthquake. According to the United States Institute of Peace, the traditional system of responding to disasters was unable to use data from other sources, like the people who were trapped; in other words, relief efforts had to proceed without the trapped survivors being able to describe the conditions they were in. According to O’Reilly Radar, though there was massive damage to buildings, the cell phone network was mostly intact. A special text-message service known as a short code, with the number 4636, was set up in Haiti so that individuals could text for help. The short code was spread throughout the disaster area, and sending a text message to 4636 gave an almost complete guarantee that your request for help would be received.
Ushahidi is a Kenya-based company that creates crowdsourcing software to help in disasters. In Haiti, Ushahidi’s software was given the information sent by text message to 4636. Most of the incoming messages were in Creole, so translators were needed before the messages could be understood. If more information was needed to provide help, it was possible to reply to the sender and request it. Response teams could use information in a text message to find coordinates that work with GPS units to pinpoint the exact location of the situation. Ushahidi’s software also generated maps that could be used by many organizations. According to Web Pro News, OpenStreetMap was another mapping team that quickly created an accurate map of the populated areas in Haiti after the earthquake.
6 Horace Mann Spectrum ■ April 2012
Flickr
Haiti experienced one of the most tragic and devastating earthquakes in 2010, which killed close to 316,000 people. A year later, Japan experienced a devastating earthquake as well, leaving the country in pieces. Crowdsourcing methods using technology proved useful in both disasters, allowing rescue teams to save more lives and respond faster.
Wikipedia
Wikipedia
Similar technology has been used in other major disasters. According to NPR’s Health Blog, crowdsourcing was used to help with the relief effort during the recent disaster in Japan. The website RDTN.org advises people to use radiation detectors to measure the radioactivity in an area and submit the data to its website, which plots it on a Google map. There are, however, a few problems with making this work as intended. First, radiation detectors need to be properly calibrated before they can be used to gather data. Also, there are natural sources of radioactivity, which can skew readings, and radiation detectors can become contaminated easily. Despite these problems, if enough data is collected, crowdsourced radiation detectors have the potential to be useful in discovering where radiation levels are high; the approach would become more common if the detectors were cheaper and more reliable. Similar to RDTN.org, JapanStatus.org also helps map radiation levels, except that it requires sources and verifies the information.
Crowdsourced information is used by two major groups of people during disasters. The first group is made up of people affected by disasters, who use their phones to report what they see. The other group is made up of the agencies that are part of the relief effort, which use this information to better target relief. Some of these agencies include “the Red Cross, Plan International, charity:water, U.S. State Department, International Medical Corps, AIDG, USAID, FEMA, U.S. Coast Guard Task Force, World Food Program, SOUTHCOM, OFDA and UNDP,” according to O’Reilly Radar. These agencies use the crowdsourced data to find out which areas need specific help. They can also use it to find people and count how many are alive or dead.
These crowdsourcing technologies have been very helpful in Japan and Haiti and will continue to be used in future disasters. These disasters, while tragic, help us evaluate these technologies so we can make them even more efficient in the future.
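The mapping idea at the heart of these systems can be sketched in a few lines of Python. This is a hypothetical illustration, not Ushahidi’s or RDTN.org’s actual code: geotagged reports are snapped onto grid cells, and the cells with the most reports surface as the areas most in need of help.

```python
# Hypothetical sketch of crowdsourced disaster mapping:
# bin geotagged text reports into grid cells and rank the cells.

from collections import Counter

def cell(lat, lon, size=0.1):
    """Snap a coordinate to a grid cell roughly 11 km on a side."""
    return (round(lat // size * size, 4), round(lon // size * size, 4))

def hotspots(reports, size=0.1):
    """Count reports per grid cell, most-reported cells first."""
    counts = Counter(cell(lat, lon, size) for lat, lon, _msg in reports)
    return counts.most_common()

# Example: three reports clustered near one city, one elsewhere.
reports = [
    (18.54, -72.34, "trapped under rubble"),
    (18.55, -72.33, "need water"),
    (18.51, -72.31, "medical help"),
    (19.76, -72.20, "road blocked"),
]
for grid_cell, count in hotspots(reports):
    print(grid_cell, count)
```

Ranking cells rather than plotting raw points is what lets a small relief team see at a glance where requests are concentrated, even when thousands of messages arrive.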
Neutrinos
By Kundan Guha
In late September of 2011, the European Organization for Nuclear Research (CERN) published the results of a test designed to measure neutrino oscillation and to measure neutrino velocity to a greater accuracy. Neutrinos are neutral subatomic particles with very small mass, even for subatomic particles. Surprisingly, the test returned a remarkable result: the neutrinos had been measured traveling faster than the speed of light. This result violates a cornerstone of physics, the theory of special relativity. Proposed by Einstein in 1905, special relativity is used, among other things, to derive mass-energy equivalence, E = mc². The theory establishes the speed of light in a vacuum as the maximum velocity of a massless particle. For an object with mass, special relativity imposes consequences for near-light-speed travel, namely time, length and mass distortions. For an object to pass the speed of light, the equations give nonsensical answers: its length would become an imaginary number and time would run backwards. All of these effects go against logical reasoning and seem ridiculously impossible.
The CERN researchers were not even looking to challenge the speed of light; they were instead interested in the properties of neutrinos. An almost massless particle that is electrically neutral, the neutrino is still a mystery to scientists and could hold many surprises. Because the neutrino is electrically neutral, it is essentially unaffected by forces other than the “weak” subatomic force, which has a much shorter range than the electromagnetic force. This property lets the neutrino pass through matter for incredibly long distances without being affected by it, the property that allowed researchers to conduct the experiment in the first place. CERN conducted the experiments by channelling neutrinos in a straight line towards the Laboratori Nazionali del Gran Sasso (LNGS) in Gran Sasso, Italy, more than 730 kilometers away. However, when the researchers used equipment designed to calculate the flight time of the neutrinos between the two laboratories, the neutrinos were found to arrive at LNGS slightly earlier than light would have arrived traveling the same distance in a vacuum. Understandably, this was quite shocking, as nothing is supposed to be able to accelerate past the speed of light. Even the researchers at CERN, who had tested the result for months before publishing, doubted the discovery, saying in the published results that there was most likely an error within it that they could not account for.
Even more worrisome than this “speed limit” being broken are the implications the result holds for physics if it is proven true. If a particle can travel faster than the speed of light, then the theory of relativity would be contradicted, and a good portion of the physics advances of the last century, which rely on the theory of relativity, would have to be reviewed and possibly even declared false. This doesn’t mean, however, that physicists resent these results; many are attempting to reproduce them, since the finding also opens up many other possibilities for the future. In early February 2012, however, CERN found that a fiber-optic cable attached to the atomic clock used to measure the times may have been loose. Researchers are currently re-running the experiment, ensuring that all wires are working properly. The episode stands as a testament to the very nature of science: a constant quest for knowledge, forever adapting to the times, and in this case turning a potentially negative outcome into a positive one.
Flickr
8 Horace Mann Spectrum ■ April 2012
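The scale of the claimed effect is easy to check with a back-of-the-envelope calculation. The Python sketch below assumes the roughly 60-nanosecond early arrival reported in news coverage of the experiment; that figure does not appear in this article, so treat it purely as an illustration of how tiny the anomaly was compared with the total flight time.

```python
# Light's travel time over the CERN-to-Gran Sasso baseline, and the
# apparent speedup implied by an early arrival. The 60 ns figure is
# the value reported in news coverage, used here only for illustration.

C = 299_792_458.0      # speed of light in a vacuum, m/s
DISTANCE = 730_000.0   # CERN to LNGS baseline, roughly 730 km, in m

light_time = DISTANCE / C          # seconds light needs in a vacuum
early = 60e-9                      # assumed early arrival: 60 ns

neutrino_time = light_time - early
speed_ratio = light_time / neutrino_time   # apparent v / c

print(f"light needs about {light_time * 1e3:.3f} ms")
print(f"apparent v/c: {speed_ratio:.7f}")
```

The flight takes about 2.4 milliseconds, so a 60-nanosecond discrepancy amounts to only a few parts in a hundred thousand, which is exactly why a single loose fiber-optic cable in the timing chain could plausibly explain the whole result.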
Flickr
The Higgs Boson
By Amit Chowdhury
All of the work that physicists have completed over the last century to explain the way our universe works is compiled into a single theory known as the Standard Model. The Standard Model explains the existence of twelve fundamental particles, which make up all matter and which cannot be broken down any further. These fundamental particles are the building blocks for everything in the universe, and they interact with each other through four fundamental interactions. The particles described in this theory make up everything we can think of, including atoms, protons, and neutrons. Similarly, the interactions in the Standard Model can explain all forces in the universe, such as friction, gravity, and magnetism.
There are two types of matter particles, known as quarks and leptons. Each group contains six of the twelve fundamental particles. According to CERN, each particle also belongs to one of three generations. The first generation has the lightest and most stable particles; the second and third generations are made up of heavier, less stable particles. All particles from the second and third generations decay rapidly into particles from the first generation, which is why most of the matter in the universe is made up of first-generation particles. The six quarks are “up” and “down” in the first generation, “charm” and “strange” in the second generation, and “top” and “bottom” in the third generation. The six leptons are the “electron” and “electron-neutrino” in the first generation, the “muon” and “muon-neutrino” in the second generation, and the “tau” and “tau-neutrino” in the third generation.
The four fundamental interactions in the Standard Model are gravity, electromagnetism, strong, and weak. Each interaction has its own force carrier particle, a particle which represents the force, and the interactions can be explained as the exchange of these force carrier particles between objects. Gravity, a force we all feel every day, attracts objects together in relation to their mass. Electromagnetism makes particles with the same charge repel and particles with different charges attract, with the photon as its force carrier particle; photons travel at the speed of light in a vacuum. The strong force holds the nuclei of atoms together, and the weak force causes the decay of second- and third-generation leptons and quarks.
While the Standard Model has explained a lot about the way our universe works on a particle level, there are still many mysteries scientists do not have the answer to. For example, the force carrier particle responsible for gravity, called the graviton, has not yet been discovered; it has only been predicted to exist. But one of the mysteries scientists are closer to solving has been getting a lot of media attention lately: the Higgs boson.
The Higgs boson is a theoretical particle that, if it exists, will explain why particles have mass. According to Berkeley Lab, the current theory is that a Higgs field encompasses the entire universe. The more a particle interacts with the Higgs field, the more mass it will have; a particle that does not interact with the Higgs field at all, like the photon, has no mass. The Higgs field came into existence only after the Big Bang; the field then gave all particles that interacted with it mass through the Higgs boson.
The Higgs boson has not been observed yet, but physicists have been looking for it for a long time. Very recently, in December of 2011, the European Organization for Nuclear Research, or CERN, released information about the status of the hunt for the Higgs boson. According to the BBC, CERN has been conducting tests for the Higgs boson at the Large Hadron Collider particle accelerator for a while, and in December it saw consistent spikes in its mass data that may be explained by the Higgs boson. CERN asserted that by the end of 2012, they will know definitively whether the Higgs boson exists. If it does, it would validate a lot of the Standard Model; if not, scientists will have to come up with better theories for explaining mass in the universe.
10 Horace Mann Spectrum ■ April 2012
Dobrochan
Flickr
Robotics
Robots are the future of our society, whether we like it or not. Whether they clean our houses, navigate through space, or, as now seems possible, drive our cars, they are everywhere. Movies and the media often depict them in a dark light, eventually resulting in the breakdown of society. In reality, robots simply make life easier for us.
Blogspot
Autonomous Cars
Imagine a world where cars drive themselves, no human needed. Scientists all over the country are designing this innovative car of the future. By Victor Wang
Cars are such an enormous part of our lives that often we forget how dangerous they can be. Anything from a missed turn signal to a momentary distraction can have disastrous consequences. It should come as no surprise that the Centers for Disease Control and Prevention report motor vehicle injuries to be the leading cause of death for people under 30 in the US. For a technology that forms such an integral part of our society, automobiles are still frighteningly primitive.
The solution? Autonomous cars. An overwhelming majority of automobile accidents are caused by human error. The obvious fix would be to get rid of the human. Driverless cars have been envisioned for decades, but it is not until recently that they have become effective or realistic.
“An overwhelming majority of automobile accidents are caused by human error. The obvious fix would be to get rid of the human.”
In 2004, the Defense Advanced Research Projects Agency hosted the DARPA Grand Challenge, a competition with a one-million-dollar prize for whoever could produce a vehicle able to independently navigate a 150-mile route through the Mojave Desert. Despite the fifteen competitors, no team came close to claiming that prize, the most successful entry belonging to Carnegie Mellon University’s Red Team. Nonetheless, a year later DARPA hosted a similar competition. This time, five teams finished the entire course, with first place going to the Stanford Racing Team, whose vehicle “Stanley” won the two-million-dollar award.
This new technology has attracted corporate attention. Search giant Google is developing its own autonomous cars in a project headed by the director of Stanford’s Artificial Intelligence Laboratory, Sebastian Thrun. The vehicles navigate roads through a combination of information from Google Street View and data gathered from an array of sensors. They utilize laser optical sensors atop the vehicle, radar sensors in the front, position sensors in the back, and video cameras within. The company’s seven test cars have driven a total of 1,000 miles with no human intervention, and over 140,000 miles with minimal human control, including traversing San Francisco’s notoriously serpentine Lombard Street.
However, one of the biggest challenges this technology faces is legal restriction. Currently, Nevada is the only state that allows fully autonomous vehicles to be driven on public roads, due to legislation passed only after heavy lobbying by Google. Nearly all traffic code is written with the assumption that a person is sitting behind the wheel, and correcting this oversight could prove to be no small feat.
Still, autonomous vehicles could potentially provide enormous benefits in addition to reduced accidents. A computer program can drive more efficiently than any human, reducing fuel consumption and decreasing traffic. Cars could be summoned and dismissed remotely, even given the ability to find parking on their own. And of course, robotic cars would eliminate the need for an attentive driver, allowing the disabled or the distracted to travel in safety.
Autonomous vehicles are still years away from common consumer use, but there is no doubt that in the future, cars will be driven by machines and not people, and that one day, human-operated automobiles will be seen as yet another hopelessly ignorant characteristic of a generation long gone.
14 Horace Mann Spectrum ■ April 2012
Flickr
Robots to Find Life on Mars
By Teddy Reiss
NASA scientists hope to find the building blocks of life on Mars. Last November, the United States sent Curiosity, its largest rover ever, to Mars. Rovers are vehicular robots that perform a variety of tasks, often scientific ones; the name is used in particular for the vehicles that travel to Mars. While on Mars, the rovers analyze soil samples, take photographs, and report discoveries.
The earliest American rover on Mars was Sojourner, part of the Pathfinder program in 1997. The program’s goal was to demonstrate an inexpensive method of bringing scientific equipment and a rover to Mars. It also had to “demonstrate that small rovers can actually operate on Mars” and measure things such as the rover’s ability to communicate with Earth, which would be considered in the construction of future rovers.
Sojourner was a six-wheeled vehicle that could move up to 1.9 ft/min, a relatively slow speed, but fast enough for the rover’s needs. Its wheels were on a spring-less suspension system. Mars’ dangerous environment influenced scientists to add a hazard-avoidance system to keep the rover safe, one of the goals of the program. The program sent the rover with a few scientific instruments, including a stereoscopic camera and an APXS, an “Alpha Proton X-ray Spectrometer.” The APXS could be “used to analyze the components of the rocks and soil.” Its experiments included mapping parts of Mars from the surface and learning about the “compactness and density” of the soil. The rover had to communicate with Earth so that NASA could learn about its discoveries and control the rover. This communication was done through Ultra High
Flickr
16 Horace Mann Spectrum ■ April 2012
Frequencies, or UHF. Landing on Mars was accomplished with a parachute and an airbag system that enveloped the rover as it bounced on the surface of the planet.
In 2003, as part of the Mars Exploration Rover program, NASA sent two more rovers, Spirit and Opportunity. Their objectives were to search for water, find geologic points of interest, analyze the contents of the area around each landing site, confirm data from the Mars Reconnaissance Orbiter, look for iron, and determine whether the environment could potentially support life. In flight, they used the positions of the stars and the sun to determine their location, using small amounts of fuel to reach Mars successfully. Moving on Mars was also tricky, so to be safe, the operators would run tests on Earth first. Like Sojourner, these rovers carried scientific instruments, including a “Panoramic Camera”; a “Mini-TES,” which looked into the composition of the soil and how it was formed; an “MB” to investigate iron; an “APXS” “for close-up analysis of the abundances of elements that make up rocks and solids”; magnets; an “MI” for getting “close-up, high resolution images of rocks and soils”; and a “RAT” that took away the upper “dusty and weathered” surface so the rover could get access to the inner surfaces. Like Sojourner, these rovers carried a UHF antenna. They communicated with orbiters, which relayed the signal back to Earth. These rovers took a similar approach to Sojourner when landing: they used a parachute for some of the descent, then slowed themselves down using retrorockets, rockets that fire in the opposite direction. Next, airbags inflated and the rover bounced on the surface. Between the rover’s airbags and the rover itself was a protective shell. Spirit and Opportunity were originally supposed to last for 90 Martian days but lasted for several years longer. On November 26th, 2011, America launched another rover to Mars.
This rover, called Curiosity, has a few more objectives than the previous rovers had. Its objectives include figuring out whether Mars could have had life in the past, learning about the climate and geology, helping scientists make plans for a human mission, and looking to see if the building blocks of life are present on Mars. It was also sent to learn about how the ground had been formed, what has happened to the atmosphere, and the radiation on the surface. This rover is much larger than its predecessors: according to NASA, it is “about the size of a small SUV,” while Sojourner was “about the size of a child’s small wagon.” It also
NASA
has more cameras to help it navigate and investigate, and a long arm to gather samples. At the end of the arm are a number of devices, including an APXS, a Mars Hand Lens Imager (MAHLI), and devices to collect samples and prepare them for analysis.
The Mars Science Laboratory rover Curiosity will take a different approach to landing on Mars. Protected by a heat shield, it will first enter the atmosphere. Next, it will deploy a parachute to slow down and jettison its heat shield. Eventually, the rover will leave its stowed position and prepare to be lowered. Tethered between the parachute and the rover will be a platform with active propulsion pointed downward, allowing the rover to slow down and eventually touch the surface. The rover will then disconnect completely from the platform, which will fly away.
The Mars Science Laboratory also has a faster processor with more RAM, and it is powered by heat generated from the decay of plutonium. The rover communicates with Earth through its two antennas and the Earth-based Deep Space Network, an international network of antennas that track and communicate with spacecraft. Curiosity is planned to stay on Mars for about one Martian year, roughly 687 Earth days.
LCR:
Logic Circuits & Robotics
By Yang Fei
To embrace the world of technology, one must take that first, foundational step. In analyzing what makes up a computer, a robot, or a machine, there is no other choice but to start at the beginning and start small. This is where the Logic Circuits and Robotics course at Horace Mann comes into play. Working as both a best friend and a mentor, Ms. Smith will guide you from square one, whether you are completely inept at technology or moderately well-informed. The first toy the class is given to play with is binary numbers, which are essentially the atoms of a computer. Getting them right brings a sense of accomplishment combined with a slight ego boost. There is no other feeling like it.
18 Horace Mann Spectrum ■ April 2012
Once you are feeling slightly more prepared, you are thrown into the first major subject of the year: logic. Learning how to implement Boolean algebra pushes the class forward another step, and in the weeks to come you also become familiar with the use of gates and lines. Then you finally get hands-on and transform the multitudes of diagrams you have drawn on paper into an actual 3-D circuit. Each student is given a chunky box and a handful of wires and bulbs. (What is in fact on the board you are given is a computer that has been shrunk hundreds of times and welded.) Using red and blue wires and hours of hard work, you will eventually learn to create a small light that can flash on and off.
As if you’re not already immersed in the world of technology by this point, the required reading for this course, Tracy Kidder’s The Soul of a New Machine, brings you even deeper. In eloquent yet often blunt language, the author describes what it takes to become a programmer. Though it lists both the ups and downs of the job, this non-fiction book still reads like a fairy tale. Kidder is wonderful at bringing the job to life through beautiful quotes, imperfect but memorable characters, and descriptions of the love and war involved in the pursuit of this profession.
Be it the first period in the morning or the last period after a long day, this class ensures that you are with people you like, enjoying what you are doing. I personally enter my LCR class exhausted after all my other classes, but what we do doesn’t feel like work at all. Our small group of five intelligent minds gathers, joking, and we always have a great time. We squeal with glee when we get something right, and through all the trial and error we are surprisingly learning tons. Even though you may occasionally fail miserably or miss due dates, it is undoubtedly a joy to work with this technology, regardless of whether you leave the room with something blown up in your face or a small light successfully turned on.
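The Boolean algebra at the center of the course can be sketched in software as well as in wires. Here is a hypothetical Python illustration (not part of the course materials): the basic gates written as functions, combined into a half-adder, the little circuit that adds two binary digits.

```python
# Basic logic gates as functions, then a half-adder built from them:
# the same idea as wiring gates on the course's circuit boards.

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

def XOR(a, b):
    # XOR composed only from AND, OR, and NOT
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """Add two one-bit binary numbers; returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

# Print the truth table.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Chaining two half-adders (plus an OR gate for the carries) gives a full adder, and stringing full adders together is exactly how the binary numbers from the first week of class get added in real hardware.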
Photos courtesy of Janet Smith
Where to Next?
Flickr
The Technology Behind the GPS
Today, the Global Positioning System, or GPS, has become a common sight in our society, whether it is being used as a way to survive in the wild or to find your way to the nearest restaurant. But where exactly did it come from? After the launch of the Russian satellite Sputnik in 1957, American scientists figured out that they could track the satellite’s orbit by listening to changes in its radio frequency. Building on this idea, more than 10 satellites were eventually launched to provide more accuracy. Simultaneously, two engineers, Ivan Getting and Bradford Parkinson, began a project to provide continuous navigation information, which led to the development of NAVSTAR GPS in 1973. This is the origin of
20 Horace Mann Spectrum ■ April 2012
By Juliet Zou the device we now call the GPS. In 1978, the U.S. military launched the first GPS satellite, and in 1995, the system was completed. According to a Time Magazine article, GPS uses a “constellation” of 24 2,000-pound satellites orbiting 12,000 miles high, which each circling the globe every 12 hours. Each satellite carries an atomic clock and broadcasts radio signals back down to Earth with information about their location and the exact time the signal was transmitted. By calculating the difference between radio signals received from four or more satellites, GPS receivers on the ground can determine their own location, speed, and elevation very accurately. Of course, today the GPS is most known for finding driving directions. In fact, accord-
DPL Surveillance Equipment
ing to the same Time article, civilian demand for GPS products surged in 2000, after the military ended its practice of intentionally fuzzing the satellite’s signals for security purposes. According to Harris Interactive, 17% of adults in the USA currently own or use a GPS location device or service. Today, a GPS function is also growing increasingly common in devices such as phones, wristwatches, and even dog collars., with the most widely used GPS devices on small handheld systems (34%) and portable car-mounted GPS systems (33%). It is predicted that the worldwide GPS market will total $75 billion by 2013. Additionally, GPSs are also used in freight hauling, the commercial fishing industry, meteorology, and geology. They have also played an important role in American military combat, having guided missiles and bombs to destinations in Iraq and Afghanistan, among
other places. The GPS has certainly come a long way since its initial development, and it has aided us in many aspects of life. For example, those who seek adventure can play a game called geocaching, a satellite-based treasure hunt with more than 800,000 caches waiting to be found, spread all across the globe. Scientists study earthquakes using GPS receivers placed along fault lines, and technicians use the satellite signals' precise timing to synchronize computer networks for everything from power grids to financial networks. As GPS technology continues to become more precise, people are already predicting what uses it can be put to next. Some possible future uses of the GPS include tracking, anti-theft devices, and monitoring children. And to think that all of this came from observing a Russian satellite.
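The position fix described above, in which distances inferred from signal travel times are combined to pin down a location, can be sketched in a few lines of Python. This is a deliberately simplified two-dimensional version with perfectly synchronized clocks (real receivers also solve for their own clock error, which is why a fourth satellite is needed); the satellite positions and units are made-up numbers for illustration.

```python
import numpy as np

# Speed of light in km/s; ranges come from signal travel times.
C = 299_792.458

def trilaterate_2d(sats, times):
    """Estimate a 2-D receiver position (km) from satellite positions
    and one-way signal travel times, assuming synchronized clocks."""
    d = C * np.asarray(times)            # range to each satellite
    (x1, y1), (x2, y2), (x3, y3) = sats
    # Subtracting the circle equations pairwise cancels the quadratic
    # terms, leaving a linear system A @ [x, y] = b.
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d[0]**2 - d[1]**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d[0]**2 - d[2]**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Three hypothetical satellites and a known "true" position to test with.
sats = [(0.0, 20200.0), (15000.0, 20200.0), (-15000.0, 18000.0)]
true_pos = np.array([1000.0, 2000.0])
times = [np.linalg.norm(true_pos - np.array(s)) / C for s in sats]
print(trilaterate_2d(sats, times))   # recovers approximately [1000. 2000.]
```

With exact travel times the linear system recovers the position exactly; in practice, receiver clock error and signal noise are what make the extra satellites and least-squares fitting necessary.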
LEDs To Light Up the World
By Michael Herschorn

A light-emitting diode, or LED, is an extremely practical device, now used in a variety of devices including TVs, cell phones and billboards. An LED is made of a chip of semiconducting material doped with impurities to create a p-n junction. The diode is made of two halves: the p-side, or anode, and the n-side, or cathode. When a current is passed through the material, it moves from the anode to the cathode, but not in the opposite direction. Electrons and electron holes (a hole is the conceptual opposite of an electron: the absence of an electron where one could exist in an atom) flow into the junction from the semiconductor materials at different voltages. When an electron meets a hole, it falls into a lower energy state, and the difference in energy is given off in the form of a photon, a packet of light. This reaction is called electroluminescence. Depending on the energy gap of the semiconductor material, the amount of energy required to free an outer shell electron, a different wavelength of light is given off; the energy gap determines the color of the light. Using different semiconducting materials, new colors of LED were produced. LEDs are not often used in the same capacity as incandescent and fluorescent lights, which commonly light rooms in commercial and residential spaces. LEDs could be used this way, but they are more expensive. LEDs do have some advantages, though. They give off more light per watt than incandescent lights, and their efficiency is not affected by the shape or size of the bulb, whereas that of fluorescent and tube-shaped lights is. LEDs do not use filters to give off different colored light. They can be far smaller than incandescent or fluorescent lights. They light up much faster, give off very little heat, have a longer lifespan than other lights, and are shock-resistant. Although LEDs have many advantages, there are also some disadvantages. Their efficiency is easily affected by changing temperatures, they require proper electric polarity whereas incandescent lights do not, their efficiency tends to decrease as the current is increased, and there is a concern that the light given off by blue and cool-white LEDs can be harmful, possibly leading to early onset of macular degeneration. Depending on the use, LEDs can be more effective in certain situations than other lights. LEDs have many uses. They are used in seven-segment displays, the numerical display devices seen on most alarm clocks, televisions, radios, telephones, calculators, and watches. LEDs are also used in traffic lights, remote controls, and DVD players. The invention and development of high-power white LEDs used for illumination has led to the increasing replacement of incandescent and fluorescent lights. Shuji Nakamura of Nichia Corporation demonstrated the first high-brightness blue LED in 1993, which led to the invention of white LEDs. Continued development and research has caused the efficiency of LEDs and their light
output to increase exponentially, with a doubling every 36 months since the 1960s. This trend is called Haitz's Law, after Dr. Roland Haitz. In 2008, 300 lumens of light were emitted using nanocrystals in the device. As development continues, new uses may be found. The light-emitting diode is a revolutionary device as it continues to change and enhance our way of life. Despite its practicality and revolutionary uses, the LED comes from humble beginnings. In 1907, a British experimenter, H. J. Round of Marconi Labs, used a crystal of silicon carbide and a cat's-whisker detector, an antique electronic component composed of a thin wire lightly touching a crystal of semiconducting material, to create a crude contact-junction rectifier, a device that converts alternating current to direct current. This apparatus was electroluminescent: the material emitted light in response to the passage of an electric current through it. In 1927, a Russian, Oleg Vladimirovich Losev, created the first LED. His research was distributed in scientific journals in Russia, Germany, and Britain, but it would be several decades before his invention had any practical uses. In 1955, Rubin Braunstein of RCA reported on infrared emissions from simple diode structures using gallium arsenide and other semiconductor alloys at room temperature and at 77 Kelvin. By 1961, Americans Robert Biard and Gary Pittman, working at Texas Instruments, found that GaAs emitted infrared radiation when an electric current was passed through it. They patented this device as the infrared
LED. One year later, Nick Holonyak Jr. at General Electric Company developed the first practical visible-spectrum LED, which gave off red light. He is considered the "father of the light-emitting diode." Several years later, a former graduate student of Holonyak's, M. George Craford, invented the first yellow LED and improved the brightness of red and red-orange LEDs by a factor of ten. In 1976, T. P. Pearsall created the first high-brightness, high-efficiency LEDs for optical fiber telecommunications by creating new semiconductor materials adapted to the wavelengths of optical fiber transmissions. Until 1968, visible-spectrum and infrared LEDs were very expensive, at around two hundred dollars per unit, and there was little practical use for them. In 1968, the Monsanto Company mass-produced visible-spectrum LEDs to be used as indicator lights, using gallium arsenide phosphide as the semiconducting material. Hewlett-Packard began to use LEDs supplied by Monsanto in alphanumeric displays and in their early handheld calculators. In the 1970s, Fairchild Optoelectronics began producing very cost-effective LEDs at less than five cents apiece. Fairchild's use of compound semiconductor chips made with the planar process, a fabrication technique invented by Dr. Jean Hoerni, made LEDs inexpensive. The process by which Fairchild Optoelectronics made its LEDs is still used today.
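The link between a semiconductor's energy gap and the color of its light follows from the photon relation E = hc/λ: with energy in electron-volts and wavelength in nanometers, hc is about 1239.84 eV·nm. A small sketch, using approximate textbook values for the energy gaps of a few LED materials:

```python
# Photon wavelength from a semiconductor's energy gap: E = h*c / lambda.
# With E in electron-volts and lambda in nanometers, h*c ≈ 1239.84 eV·nm.
HC_EV_NM = 1239.84

def emission_wavelength_nm(energy_gap_ev):
    """Wavelength of light emitted when an electron crosses the gap."""
    return HC_EV_NM / energy_gap_ev

# Approximate energy gaps (eV) for some common LED materials:
materials = {
    "GaAs (infrared)": 1.42,
    "GaAsP (red)": 1.9,
    "InGaN (blue)": 2.7,
}
for name, gap in materials.items():
    print(f"{name}: ~{emission_wavelength_nm(gap):.0f} nm")
```

A larger gap means a more energetic photon and therefore a shorter wavelength, which is why blue LEDs required new wide-gap materials and arrived decades after red ones.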
Technology in Medicine

Scientists have transformed the way we look at the human body, using technology in ways it has never been used before.
Whether it's via robotic surgery, using modules to track tumors in the body, or engineering radioactive therapy, technology has become an essential part of the medical world. So much so, in fact, that the new field of biomedical engineering has become widely popular, ensuring that the future of medicine holds a strong connection to technology.
Neuromorphic Technology By James Apfel
Engineers can learn a great deal from the human body. Refined by nature over millions of years of evolution, the processes of the human body are now being studied in depth by researchers seeking more efficient solutions to modern problems. Take, for instance, energy usage: the human brain consumes only 20-24 W, one fifth of your body's total energy. By comparison, Watson, a computer developed by IBM that defeated Ken Jennings and Brad Rutter in Jeopardy!, used more than 350 kW. Although computers, thanks to larger memory and higher clock speeds, are better suited to certain tasks, like calculating movement in 3-D space or multiplying large strings of numbers, humans are better at far more. Processing language, recognizing patterns, and vision, all problems the human brain finds easy, are examples of tasks that scientists have been working, to no avail, to implement in computers for decades. And it's not just the brain: when it comes to sensing, nature's designs are better than anything scientists have come up with on their own. For these reasons, neuromorphic engineering has grown rapidly over the last couple of decades. The term neuromorphic engineering, meaning engineering in the form of neurons, was coined by California Institute of Technology Professor Carver Mead, and the field has already transformed many things. Many household goods in use today were developed through this methodology: for instance, the optical sensors in some of the best digital cameras, the touchpad built into a laptop, and the touch screens commonly used in smart phones. Current research, however, extends far beyond household electronic devices. Researchers are working on innovative vision, auditory, and olfactory systems. Vision chips being built in the field of computer vision could
enable robots not only to see but also to process their surroundings, or be used in the creation of implants that could enable the blind to see. Today's cochlear implants are bulky and inefficient, yet research at MIT to mimic the human ear has already produced prototypes that are both small and able to run for decades on a single battery. One of the major targets for neuromorphic engineers has been computing. While computers are very good at arithmetic, in all other senses they're incapable and inflexible. Currently, artificial intelligence can only be achieved through the creation of specific algorithms, and when the context within which these algorithms operate is changed, they fail. The natural language processor Siri was only achieved through complex algorithms that took years to develop. In addition, there are distinct flaws in the architecture used by current computers. Von Neumann architecture, the schematic of almost all modern computers, entails that the processor and memory be separate. The memory stores both
the instructions and the data, yet the processor can access only one at a time. The severe slowdown caused by this phenomenon is called the Von Neumann bottleneck. To tackle this, researchers inspired by the human brain, such as Dharmendra Modha at IBM, are now developing processors consisting of artificial neurons capable of forming new connections with each other, a process known as synaptic plasticity, which is key to learning. In this brain-inspired architecture, processors and memory are integrated, and rather than working through a problem iteratively, computations are event-driven: activity on the part of a single neuron stimulates surrounding neurons until a solution is reached. Ultimately, the goal is to combine the two architectures to create an ideal computer, excellent at solving problems across multiple domains. Yet this research is still in its very early stages; the chips developed by Modha's team contain 256 neurons and 300,000 synapses, far fewer than the 100 billion neurons and 100 trillion synapses in the human brain.
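The event-driven style described above can be illustrated with a toy "leaky integrate-and-fire" neuron, a standard simplified neuron model (a teaching sketch, not Modha's actual chip design): it accumulates input, slowly leaks charge, and only "fires" an event when its potential crosses a threshold.

```python
# A minimal leaky integrate-and-fire neuron. Unlike a processor stepping
# through instructions, the neuron does nothing visible until its
# accumulated input crosses a threshold, at which point it spikes.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron fires a spike."""
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current   # integrate with leak
        if potential >= threshold:               # event: fire and reset
            spikes.append(t)
            potential = 0.0
    return spikes

# Weak steady input builds up to one spike; a strong pulse fires at once.
print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 1.2]))  # prints [3, 5]
```

In a real neuromorphic chip, each spike would in turn feed the inputs of neighboring neurons, and the connection strengths themselves would change over time, which is the synaptic plasticity the article describes.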
On the Hunt:
Doctors Use New Technology to Track Tumors
By Amanda Zhou

We've all heard of tumors and how dangerous they can be. A tumor is an irregular growth of body tissue, which can multiply and grow until it becomes dangerous. Tumors can be caused by many factors, including problems in the immune system, excessive alcohol consumption, tobacco, chemicals, toxins, genetic problems, obesity, radiation, viruses such as hepatitis B, and excessive sunlight exposure. The reason tumors are dangerous is that their cells multiply at an incredible rate. By the time a tumor has reached a later stage and is finally found, it sometimes may be too late to stop it, since the multiplying cells may have already reached an important organ. The way to remove a tumor is to cut it out; if it is caught before it damages an important organ, the tumor remains just a minor problem. There are symptoms that can indicate whether you have a tumor or not, though they depend on the location of the tumor. It is easier to find a tumor in some places than in others. For example, if you
Above, the tumor tracker is placed next to a penny for size comparison. (Photo: Popular Science)
get a tumor that is visible in outward appearance, such as bone cancer in the foot, then it is more obvious that there is a bump and a potential tumor that should be checked out as soon as possible. Not every tumor is so noticeable, however. Pancreatic cancer, in which a tumor forms in the pancreas, is not visible to the human eye. Tumors are most often identified when they begin to affect the person's health. There are symptoms that appear in the later stages of tumors, including chills, fatigue, fever, lack of appetite, night sweats, and unexplained weight loss. A new invention, which has been researched and engineered for a long time now, aims to find tumors during the early stages. According to Popular Science, this implantable tumor tracker, made up of nanoparticles and antibodies, is the size of a small piece of candy. The antibodies in the device, which is inserted into a person's bloodstream, bind to specific molecules in the blood that have been created by a tumor. This allows doctors or scientists to observe whether there are any problems, such as tumors that may be receding or increasing in size. It is also possible to use the tumor tracker to detect early-stage defects, such as enlargement. So far, these tumor trackers are only available to check certain types of cancer, including prostate cancer, for which a tumor tracker was made in a lab in Washington to check whether there are any existing tumors around the male reproductive system. The main reason these tumor trackers are produced is to minimize the amount of time needed to find tumors and defects in the human body. Once found, the tumor cells are removed by surgery, radiation or chemotherapy. The tumor tracker is the future of medicine and can potentially be used to save people from these shortcomings before it is too late.
Acupuncture By Joanna Cho
Acupuncture. Just the sound of the word creeps some people out. All they can think of is needles and pain, needles and pain, needles and pain. But there's more to acupuncture than needles and pain. Otherwise, how could needles and pain relieve someone of stress and pain? According to WebMD, acupuncture is "pain management therapy." Acupuncture was first used by the Chinese, who thought that balanced energy flow was a sign of good health. In acupuncture, doctors insert thin needles at certain points, called acupoints, which stimulate energy flow throughout the body. (There are about 2,000 acupoints on your body.) Acupuncture also causes endorphins, protein molecules that regulate continuous pain and stress, to be produced by the cells of the nervous system, thereby relieving the patient of stress and pain.
While the insertion of thin needles at acupoints sounds painful, it really isn't. The insertion merely feels like a tiny pointed tap on one's arm by a fingernail. Moreover, acupuncture by a certified acupuncturist is safe. It relieves all sorts of pain in patients who have various conditions. Acupuncture has been found to effectively treat chronic symptoms or illnesses, post-surgery pain, headaches and tennis elbow. The therapy also helps stroke patients with rehabilitation. It has also been used to treat patients with more serious conditions, including fibromyalgia (pain of muscle and soft tissue), myofascial pain (pain from muscle spasm), osteoarthritis (wearing out of joint-protecting cartilage), and carpal tunnel syndrome. Therefore, acupuncture isn't as bad as one would think. It's not painful, it gets rid of pain and stress, and it effectively treats the side effects of diseases and harmful health conditions.
Electromyography
By Sam Ginsberg
Technology is invented to make life easier and simpler. Instead of getting up to turn dials on a TV set to find the right channel, we invented the remote control. Instead of struggling through months on the open sea to travel to other countries, we invented the airplane. However, as technology progresses, more components are added, making that technology much more confusing and complex to handle, driving it away from its genuine purpose.
Steve Jobs and Steve Wozniak wanted to build their products as extensions of the mind, so that using them would be instinctive. Motion gestures proved just that: flipping a page on an iPad is as simple as swiping the device, giving the user an instant attraction to its simplicity and familiarity. However, the products they designed still required many components, again taking the simplistic idea further from its goal. Electromyography (EMG), the study of muscle movement in the human body, may be the future of human-to-machine interfaces. Electrodes placed directly on the skin can detect the vibrations emitted by muscles as they use energy, and these vibrations are then analyzed. So far, there is only limited research on the subject. Most EMG work is in medical science, where it is used to diagnose muscle diseases by discovering eccentric muscle movements. Microsoft is also studying this technology: a division of their research deals with Muscle-Computer Interfaces, for which they have built an armband that can read any hand or finger movement. They were able to use it in many ways, for instance playing Guitar Hero without a guitar. Imagine the possibilities developing this technology would create. Improving human-computer interfaces would almost entirely eliminate the need for elaborate mechanisms and would speed up frustrating and tedious processes. The problem is that mass production of this technology is most likely further in the future than sending a man to Mars. The electronics associated with this technology are extremely complex, delicate, and expensive. It uses special electrodes placed on the skin that literally hear muscle movements. As muscles contract, they produce vibrations, which are also sound. Electrodes are tiny sensors that pick up these vibrations and interpret them
as muscle contractions. This technology is obviously out of reach for most, and introducing it to the public at this point would do nothing. It is not impossible to introduce, but it would take a lot of time and effort. At this point in its life, EMG has only been produced as a medical device that reads the intensity of muscle contractions. To begin using it as a day-to-day device, software must be built to serve the desired purposes. If you wanted to use EMG to control your television, you'd need an armband with electrodes, but you'd also need software to interpret the messages that the electrodes are picking up. A program that turns signal patterns into specific commands would need to be developed. For instance, moving the middle finger on your right hand would initially be interpreted as a pattern: specific readings picked up by the electrodes. To make that gesture adjust the volume, a software program would need to recognize the pattern and send a corresponding command to the actual television. I believe that if we elaborate on this aspect of technology, we can change fundamental aspects of our daily lives. Companies like IBM or Microsoft could adapt this technology into easy interfaces and eventually implant it into our daily lives. We have created the technology to develop one of the easiest possible interfaces in electronics, but we are as yet unable to introduce it into the modern public electronics market. Software used to translate muscle movements into commands is only being developed in private research, and the equipment used to read these movements is very expensive. Although this research is very small-scale and very far from mass production, electromyography might be the future of electronics.
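The pattern-to-command software described above might, in spirit, look something like this sketch. The electrode names, thresholds, and commands are all invented for illustration; real EMG classification works on much richer signal features than a simple threshold.

```python
# Hypothetical sketch: mapping per-electrode EMG activation levels to
# commands. Channel names and thresholds are invented for illustration.

def classify_gesture(channels, threshold=0.5):
    """Map electrode activation levels (0.0-1.0) to a command string."""
    # Which electrodes are reading above the activation threshold?
    active = {name for name, level in channels.items() if level > threshold}
    if active == {"forearm_top"}:
        return "volume_up"
    if active == {"forearm_bottom"}:
        return "volume_down"
    if {"forearm_top", "forearm_bottom"} <= active:
        return "mute"
    return "no_command"

# One strong reading on the top electrode maps to "volume up".
print(classify_gesture({"forearm_top": 0.8, "forearm_bottom": 0.1}))
```

The hard part, as the article notes, is everything around this mapping: reading clean signals from inexpensive electrodes, and learning which raw patterns correspond to which intended movements for each individual user.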
Biomedical Engineering By Deepti Raghavan
Medicine and engineering used to be two completely distinct fields. Engineering uses skills from math and physics to solve technical problems, while medicine seeks to cure mankind of diseases. However, there is a meeting point that joins the two. Biomedical engineering combines the two fields to help tackle problems with roots in both areas. According to the Biomedical Engineering Society website, a "biomedical engineer uses traditional engineering expertise to analyze and solve problems in biology and medicine, providing an overall enhancement of healthcare." The field has many aspects, including using technical skills to solve problems in medicine and designing the machines that doctors work with. Some of the well-known areas include bioinstrumentation, which creates the machines that doctors use to treat diseases. Biomaterials is the study of the selection
“For a person who loves biology and physics, biomedical engineering is a perfect fit.”
of appropriate materials to use for transplant surgery. Biomechanics uses Newtonian mechanics to study the processes of the human body from a new angle, including observing motion within the body. Genetic, cellular, and tissue engineers study medical problems in the human body at a microscopic level. For example, this aspect of the field has led to "miniature devices [that] deliver compounds that can stimulate or inhibit cellular processes at precise target locations to promote healing or inhibit disease formation and progression." Though there are many more parts to the field, biomedical engineers mainly use their design skills to create devices that are essential to health care today. There is also a part of biomedical engineering that deals with software to organize the healthcare system. Universities offer degrees in biomedical engineering at the undergraduate and graduate levels, and biomedical engineers, because of the versatility of their skills, can be employed in many places. According to "What Is Biomedical Engineering," some important achievements of the field include "prosthesis made of biomedical components, clinical equipments like micro – implants, magnetic resonant imaging or MRI and regeneration of tissue."
Inductive Charging: A Wireless Future?
By Jay Palekar

The defining characteristic of modern-day technology is the slow but steady move away from wiring towards more portable devices. Every day, people cut the cords, moving from desktops to laptops, and from laptops to mobile phones and tablets, yet each of these is still intrinsically limited by its thirst for energy. While battery technology keeps improving, holding more power in less space with reasonably short charging times, batteries themselves won't free us from the need to charge our devices. In short, we're all bound to the same power outlets that line our walls and that our grandparents used. That might be changing, however, due to new means of over-the-air charging. Unlike conventional "wire-cutting" technologies, over-the-air charging is fundamentally different: it's not just transferring a message, but the energy to do quadrillions of operations on that message. Currently, there are two major ways in which the technology is being explored, the second being based off the first. One is to take advantage of an object's resonance, its vibration when hit by waves of a certain kind. The second is to transfer the energy as radio waves and have a receiver then change that into DC current. Both means, however, suffer from similar problems: low efficiency, incapacity for large-scale transfers, and lack of feasibility due to the huge parts necessary for their function. The first method, MIT's resonance method, takes advantage of one of the most often seen phenomena in physics: the tendency of an object to oscillate with much greater amplitude when driven at certain frequencies. Imagine the proverbial fat lady singing and a shatterproof glass about 7 feet away: when she sings certain notes, the glass starts to vibrate, yet past a certain point, no matter how much higher the notes get, the glass will no longer vibrate as strongly as it did at its peak. In essence, the resonance method adapts this concept to charging. On one side of the room there's a large transmitter creating electromagnetic waves, and on the other side there's a large receiver vibrating due to these waves and transforming them into current. The method, however, suffers from inefficiency, picking up only 40% of the energy it transmits, not to mention being made up of parts that range up to 2 feet in diameter. The other method, pioneered by tech start-up Powercast, works in a similar manner, except its main form of energy transfer is radio waves, a specific type of electromagnetic wave. Powercast has patented a receiver, smaller than a human hand, which allows it to harvest far more power from radio waves than a general antenna. Along with over-the-air charging, there has been much research into inductive charging. Inductive charging is commonly used today for a variety of purposes, including powering stovetops and charging toothbrushes. Yet more recently it has found its way into mobile phone technology. Inductive charging requires the two objects to be close to each other and can be scaled up to much higher levels than over-the-air technologies, but it still suffers from efficiency concerns. The concept is basic: you have one coil through which you run an electric current, creating a magnetic field. That magnetic field then runs through a secondary coil, in which the field turns back into electricity. Cell phone brand Palm has already implemented this technology in its phones, and more recently several other companies such as LG have begun experimenting with it.
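The resonance that both wireless methods rely on is set by the transmitter's electrical properties. For a simple coil-and-capacitor (LC) circuit, the resonant frequency is f = 1/(2π√(LC)). The component values below are illustrative only, not taken from any real charger:

```python
import math

# Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C)).
def resonant_frequency_hz(inductance_h, capacitance_f):
    """Frequency (Hz) at which the coil-capacitor pair resonates."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: a 10-microhenry coil with a 100-nanofarad capacitor.
f = resonant_frequency_hz(10e-6, 100e-9)
print(f"{f / 1000:.1f} kHz")   # prints 159.2 kHz
```

Tuning the receiving coil to the same resonant frequency as the transmitter is what lets energy transfer efficiently at that frequency while mostly ignoring everything else, just as the glass in the example responds to some notes and not others.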
Touch Screen Technology By Deepti Raghavan
How exactly does your iPhone respond to your fingertips? Today, touch screen phones, which evolved from Sam Hurst's "Elograph," are becoming increasingly popular. In 1971, Hurst was an instructor at the University of Kentucky Research Foundation and, due to the high volume of research papers he had to read during the graduation exam period, invented the "Elograph," the first sensor that responded to touch, to help him input data faster. Unlike today's touch screens it was not transparent, but it was the beginning. Shortly after, Dr. Hurst founded Elographics, which is today known as Elo TouchSystems. In 1974, Hurst released a transparent touch screen, and in 1977 the company created the 5-wire resistive method, currently the most widespread touch screen technology. Today, there are three widely used systems in phones and other touch screen devices: the resistive, the capacitive, and the surface acoustic wave systems, each of which uses a different method to recognize input. In general, the resistive system has three layers, all placed over a normal glass panel: a conductive layer, a resistive layer made of metal, and a scratch-resistant layer. Between the conductive and metallic layers there is an electrical current; the slight pressure of a touch makes the layers contact, disrupting the current. At that point, the computer calculates the coordinates through a special type
of driver which converts the user's touch into information that the operating system can make sense of. In the second type, the capacitive system, there is a glass panel with an electrically charged layer over it; when the user touches the screen, some of the charge is transferred to the user, causing a decrease in the charge reaching four circuits located at the corners of the device's monitor. By calculating the change of charge at each corner, the driver software is able to figure out where the user touched the screen and give that information to the operating system. Capacitive touch screens also have the advantage of a clearer display and higher accuracy. The third system, the surface acoustic wave system, is made up of two transducers, one receiving and one sending, located along the X and Y axes of the device's glass panel, along with reflectors which bounce the signals between the transducers. When the screen is touched, the receiving transducer is immediately able to recognize the disruption and locate it accordingly. Touch screen technology is becoming increasingly popular: according to venturebeat.com, 15.8% of the phones sold in 2008 were touch screen, including a number of popular smart phones such as the iPhone, many Android phones, and the Palm Pre. Along with this, many other electronic devices, including some Nooks, the iPad, and the iPod, are harnessing the power of these technologies to respond to your fingertips!
Accelerometer By Jay Moon
In just the last few years, you may have noticed a change in the way people interact with their portable technology: touching, swiping, tilting, shaking. In a pre-iPhone world, one could distinguish smartphones by their rows of keys and buttons. Along with the gradual trend towards monolithic hardware designs, fewer buttons, and prominence of the display, a new method of communicating with devices has emerged and shaken up the smartphone world: accelerometers. An accelerometer is the mechanism by which a phone detects device orientation and movement. The accelerometer allows, almost magically, a smartphone to adjust its user interface from portrait to landscape. App developers have taken advantage of this feature, now standard in almost every modern touchscreen smartphone, to make new styles of games that use the motion and orientation of the device as the primary means of control. No longer do portable gaming consoles need d-pads or a plenitude of buttons. An arguably more engaging, more active means of entertainment than a Nintendo DS or Sony PSP sits snugly in your pocket, and it happens to be your phone. How do accelerometers work? The most basic accelerometers measure acceleration forces, which may be either static, such as when gravity pulls at your feet, or dynamic, such as when the accelerometer moves or vibrates. They measure acceleration relative to a free-fall inertial reference frame using a mechanism similar to a damped mass on a spring attached to an outer casing. The casing accelerates along with the device, while the mass lags behind, stretching the spring; the displacement of the mass gives the acceleration.
Of course, there isn't a spring in your phone, just a chip that replicates the mechanism at a microscopic scale, converting it into an electrical signal that software can then use to affect what you see on your screen. Accelerometers that output a digital measurement can also be made in a number of different ways, the most common of which uses piezoelectricity, measuring the stresses on tiny crystal structures to generate a signal. Multiple accelerometers can measure the change of acceleration along multiple axes, allowing a finer degree of measurement and wider capabilities in the final software. Accelerometers have had many other useful applications in consumer electronics. Laptop manufacturers include accelerometers that detect when a device is falling and shut off the hard drive to prevent the head from scratching the platter. Cars use accelerometers to detect an imminent collision and take measures to minimize the impact of a crash. Accelerometers are not a recent discovery; they were used in spacecraft before their more common uses today. But with the help of ingenuity, people are finding innovative new applications for this rather elementary mechanism, whether it's as simple as detecting device orientation or as complex as controlling a racing game, changing the way we interact with our devices in this modern age.
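The portrait-to-landscape trick mentioned above boils down to reading the static gravity vector and checking its angle in the screen plane. A minimal sketch, with the axis directions and orientation labels chosen arbitrarily for illustration (real phone platforms each define their own conventions):

```python
import math

# When the phone is held still, the accelerometer reads only gravity.
# The direction of that static vector in the screen plane tells the
# software which way is "down", and therefore how to rotate the UI.

def orientation(ax, ay):
    """Classify orientation from gravity components along the screen's
    x and y axes (any consistent unit, e.g. m/s^2)."""
    angle = math.degrees(math.atan2(ax, ay))  # 0 deg = gravity along +y
    if -45 <= angle <= 45:
        return "portrait"
    if 45 < angle <= 135:
        return "landscape_left"
    if -135 <= angle < -45:
        return "landscape_right"
    return "portrait_upside_down"

print(orientation(0.0, 9.81))   # gravity along +y: portrait
print(orientation(9.81, 0.0))   # gravity along +x: landscape_left
```

Real systems also low-pass filter the readings and add hysteresis near the 45-degree boundaries so the screen doesn't flicker between orientations when the phone is held at an angle.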