XeVeX Fall 2018


Volume X Issue 1

Fall 2018

XeVeX Faculty Adviser Dr. Fabio Nironi

Editors David Adler Zach Buller Aliza Freilich Josephine Schizer

Contributors Elan Agus Talia Halaas Caitlin Levine Rebecca Massel Sophia Rein Akiva Shlomovich


The Utility Function
David Adler

People have been trading goods for millennia, raising the question of how much a merchant should charge for his products. For example, if a man owns a butcher shop, how should he decide the cost per pound of meat? If he charges too little, he will lose money; if he charges too much, customers won't buy from him anymore. Merchants have therefore always tried to find the "golden zone" of pricing, where they are charging neither too much nor too little. Yet to this day there is no way to determine exactly what this "golden" price is. Economists model the satisfaction behind it with the utility function, and even today there is no set way to compute it. The golden price is directly tied to the happiness or satisfaction consumers get from a product, a highly abstract concept, which makes it extremely hard to calculate.

However, certain rules apply when talking about utility. First, utility grows as the amount of a product consumed increases, until there is too much, at which point utility starts decreasing. Imagine, for example, that someone gives you a pie of pizza. After you eat the first slice, you are somewhat satisfied, and after the second slice most people would be even more satisfied. By the fifth slice, though, most people don't want any more pizza: they'd be too full. At the beginning, more pizza makes you more satisfied, but as you eat more and more, your desire for pizza decreases. Second, when purchasing a product, a consumer wants each dollar spent to give at least as much happiness as the last dollar spent. Take the pizza example above: when you buy a whole pie at the store, the cost per slice is lower than the cost of buying individual slices. This is because as you have more pizza, each additional slice brings you less happiness, so you will not be willing to spend as much money on it as you did on the previous one.

In summary, the utility function is a measure of how much happiness a product will supply to the consumer. That happiness directly correlates with how much money the customer is willing to spend on the product. It plays into our daily lives whenever we buy a greater quantity of something at a lower unit price, as with wholesale purchases, and into sellers' lives when they determine how much to charge for their goods.
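The diminishing-marginal-utility idea above can be sketched in a few lines. The logarithm here is only an illustrative stand-in for "satisfaction" (the article itself notes there is no set formula), and it captures the shrinking value of each extra slice, though unlike a real fifth slice of pizza it never actually turns negative.

```python
import math

def utility(slices):
    # Hypothetical utility curve: log(1 + quantity).
    # An illustrative choice only -- there is no agreed-upon formula.
    return math.log(1 + slices)

prev = 0.0
for s in range(1, 6):
    u = utility(s)
    # The marginal utility (extra happiness from this slice) keeps shrinking.
    print(f"slice {s}: total utility {u:.2f}, marginal {u - prev:.2f}")
    prev = u
```

Each successive slice adds less utility than the one before, which is exactly why a whole pie can be priced below the per-slice rate.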

The Riemann Hypothesis
Elan Agus

The Riemann Hypothesis, conjectured by Bernhard Riemann in 1859, is difficult to understand and much more difficult to even start to think about proving. It states: "All non-trivial zeroes of the zeta function have real part one-half." First of all, we must explain what the zeta function is. It can be written as follows:

ζ(s) = 1/1^s + 1/2^s + 1/3^s + 1/4^s + … = Σ (from n = 1 to ∞) 1/n^s

In words, the zeta function is the sum of the reciprocals of all the natural numbers raised to a given power. There are analytic continuations of this summation which extend the zeta function to complex inputs and to the negative even numbers. This continuation has two types of zeroes: the trivial zeroes and the non-trivial zeroes. The trivial zeroes are the negative even integers; using calculus to analyze the continuation of the zeta function, one can show that its value at each of them is zero. However, these are not the zeroes with which Riemann was concerned. Riemann was dealing with the non-trivial zeroes, which have complex (imaginary) parts. One can ask: how do you raise a number to a complex power? This is answered by Euler's famous formula e^(iθ) = cos θ + i sin θ, where i = √(-1) and e is the transcendental number roughly equal to 2.718.


The zeta function has certain complex zeroes, numbers of the form a + bi; the "s" in the infinite summation would take this value. There are infinitely many of these zeroes, but, according to the Riemann Hypothesis, they all have a very specific form. It can be shown that every non-trivial zero has real part (the "a" in a + bi) between 0 and 1. The Riemann Hypothesis says that all of these zeroes are of the form 0.5 + bi; that is, there exists no non-trivial zero of the zeta function whose real part is not one-half. This conjecture is what mathematicians have been trying to prove for years, and the Clay Mathematics Institute currently offers a one-million-dollar prize to anyone who proves it. What is so significant about this function and its zeroes? The zeta function is tied to the distribution of the prime numbers, and relative to other formulas that estimate that distribution, the zeta function and the Riemann Hypothesis describe it better.

Sources: Maurice, and Williams. "Prime Obsession." AuthorsDen.com, Publish America, 1 Jan. 1970, www.authorsden.com/visit/viewwork.asp?id=70046.
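The zeta sum itself is easy to try numerically for real s > 1. A quick sketch: partial sums for s = 2 should creep toward the famous value π²/6 (this only samples the original series, not the analytic continuation where the interesting zeroes live).

```python
import math

def zeta_partial(s, terms=100_000):
    """Partial sum of the zeta series: 1/1^s + 1/2^s + ... + 1/terms^s."""
    return sum(1 / n**s for n in range(1, terms + 1))

approx = zeta_partial(2)
print(approx)             # approaches pi^2/6 ~ 1.6449
print(math.pi**2 / 6)
```

For s = 1 the sum diverges (the harmonic series), which is one hint that extending ζ beyond the region where the series converges requires the analytic continuation the article mentions.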

The Fourth Dimension
Zachary Buller

The concept of the fourth dimension as a mathematical extension of the third has puzzled mathematicians for hundreds of years, with 19th-century thinkers including Möbius and Schläfli speculating about its meaning and applications. Only with recent discoveries and understandings have mathematicians gained a better sense of what comprises the fourth dimension and how one can even begin to imagine a world where this abstract idea may be a reality.

Given that we live in a three-dimensional world, the easiest way to understand this perplexing concept is to analyze the relationship between the lower dimensions and projections of images onto figures in those dimensions. If a particle in a 2D universe were "looking" at another 2D shape in that universe, the only part of the figure it would "see" would be a straight line, i.e. a one-dimensional slice. The "shadow" cast by two-dimensional figures in a two-dimensional space is linear, or one-dimensional. Similarly, a particle viewing a three-dimensional object in a three-dimensional space would only see it in two dimensions: the shadow cast by a three-dimensional figure is two-dimensional.

Figure 1: One-dimensional shadow on a two-dimensional shape.

Figure 2: Two-dimensional shadow on a three-dimensional shape.

This concept can be applied to the idea of the fourth dimension. If a four-dimensional object were viewed by a particle in a four-dimensional world, the particle would see it as three-dimensional, and the shadow the object casts would be three-dimensional. The best-known four-dimensional object is the tesseract, the four-dimensional analogue of the cube; it is one of the polytopes, shapes that exist fully only in the fourth dimension, and what we can actually draw or model is its three-dimensional shadow. Using a myriad of formulas and theorems, mathematicians have reproduced the tesseract and other four-dimensional polytopes to the best of their ability in a limiting three-dimensional space. For now, mathematicians have a limited understanding of the bewildering four-dimensional world. But with time, we will surely develop new theories and speculations as to the applications of this mystifying phenomenon.

Figure 3: Tesseract

Sources: "Tesseract." Wikipedia, Wikimedia Foundation, 3 Nov. 2018, en.wikipedia.org/wiki/Tesseract. The Science Elf. "A Beginner's Guide to the Fourth Dimension." YouTube, 30 June 2016, www.youtube.com/watch?v=jixGKZlLVc.

Pascal's Triangle
Aliza Freilich

In mathematics, Pascal's Triangle is one of the most interesting number patterns. In short, Pascal's Triangle, named for the French mathematician and philosopher Blaise Pascal, is a triangular arrangement of the binomial coefficients. Though it is named for Pascal, the triangle was used by Indian mathematicians in their work on combinatorics and by the Greeks in their study of figurate numbers. To build the triangle, start with the number "1" at the top, then keep placing numbers below it in a triangular pattern; each number is the sum of the two numbers directly above it.

Figure 4: Pascal's Triangle

When looking at the triangle, notice that the first (left-most) diagonal is made up of just "1"s. The next diagonal contains the counting numbers, and the third diagonal contains the "triangular numbers," the series 1, 3, 6, 10, 15, … obtained by continued summation of the natural numbers 1, 2, 3, 4, 5, and so on. The fourth diagonal has the "tetrahedral numbers," which count the layers of a pyramid with a triangular base: 1, 4, 10, 20, 35, 56, 84, 120, 165, 220, and so on. Pascal's Triangle is also symmetrical: if one were to slice the triangle in half vertically, each side would be a mirror image of the other. As shown in the photo above, the horizontal sums (the sum of each row of the triangle) double each time as you move down the triangle, so they are the powers of 2. Additionally, reading the numbers in a row as digits gives a power of 11 (this works directly for the first five rows; after that you have to carry digits).
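The construction rule and the two patterns just described can be checked in a short sketch:

```python
def pascal(rows):
    """Build Pascal's Triangle: each entry is the sum of the two above it."""
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        triangle.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return triangle

tri = pascal(5)
for row in tri:
    print(row)

# Row sums double each time: 1, 2, 4, 8, 16 -- the powers of 2.
print([sum(row) for row in tri])

# Reading the first five rows as digits gives powers of 11: 1, 11, 121, 1331, 14641.
print([int("".join(str(d) for d in row)) for row in tri])
```

Past row four the "power of 11" pattern still holds, but only after carrying, since entries grow beyond a single digit.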


Figure 5: Powers of 11 in Pascal's Triangle.

Pascal's Triangle can also be used to find the Fibonacci sequence by following a pattern within the triangle, and highlighting the odd numbers in one color and the even numbers in another produces the pattern of the Sierpinski Triangle. Beyond these patterns, Pascal's Triangle can be used for the probability of combinations, such as flipping a coin. It can show you how many ways "heads" and "tails" can combine when flipping a coin. For example, if you toss a coin two times, there is only one combination where you toss "heads" twice in a row (HH), one where you toss "heads" and then "tails" (HT), one where you toss "tails" and then "heads" (TH), and one where you toss "tails" twice in a row (TT). This matches the second row of Pascal's Triangle, "1, 2, 1." Pascal's Triangle can also be used to count the combinations possible for a set of objects. If you have 16 pool balls, for example, in how many different ways can you choose three of them? To solve this, look at the 16th row of Pascal's Triangle (counting the top row as row 0), move along three places, and the value there, 560, is your answer.
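Both the coin-flip row and the pool-ball count can be verified directly. This sketch builds a single row of the triangle using the running binomial-coefficient product, then cross-checks against Python's built-in combination counter:

```python
import math

def pascal_row(n):
    """Row n of Pascal's Triangle, counting the top row as row 0."""
    row = [1]
    for k in range(n):
        # C(n, k+1) = C(n, k) * (n - k) / (k + 1)
        row.append(row[-1] * (n - k) // (k + 1))
    return row

# Two coin flips: the HH / (HT, TH) / TT counts.
print(pascal_row(2))        # [1, 2, 1]

# 16 pool balls, choose 3: row 16, three places along.
print(pascal_row(16)[3])    # 560
print(math.comb(16, 3))     # 560
```

The entry three places along row 16 is exactly the binomial coefficient C(16, 3), which is why the triangle answers "how many ways to choose" questions.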


Sources: "Pascal's Triangle." Math Is Fun, www.mathsisfun.com/pascals-triangle.html.

Reviewed: "The Wrong Way to Teach Math" by Andrew Hacker
Talia Halaas

Andrew Hacker's opinion piece in the New York Times argues that we live in a society that has failed to teach the math skills necessary for everyday life. He begins by noting that, according to a national survey, while most Americans have taken high school mathematics, 82% of adults could not compute the cost of a carpet when told its dimensions and price per square yard. How is it that our math teaching is so poor? The answer, Hacker argues, is not more math but what the Mathematical Association of America calls "quantitative literacy."

It would seem that this kind of proficiency is already covered in statistics courses, so Hacker went to investigate those classes himself. He sat in on several Advanced Placement classes in Michigan and New York, where he expected a focus on what could be called "citizen statistics": topics applicable to the students' personal lives, such as figures cited on income distribution, climate change, or whether cell phones can damage your brain. His expectations were far from the reality. The AP syllabus is what he describes as a "research seminar for dissertation candidates." Many students fall by the wayside, not only because of the difficulty of the classes but also because they can't see how the formulas apply to their own lives.

Hacker takes a different approach with his Numeracy 101 class at Queens College. One exercise focuses on visualizing data. His students looked at how many homes in Connecticut and Arkansas have telephones (land and cell): 98.9% for Connecticut and 94.6% for Arkansas. They were then instructed to pick one of several charts to represent the numbers and defend their choice.
The differences between the charts highlight situations one may encounter in the real world. The first chart gives the impression that the Connecticut number is much bigger than that of Arkansas, but the data is misleading because the bars are scaled to exaggerate the difference. It is easy to see why this lesson matters in daily life.
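The carpet question from the opening survey really is just a few lines of arithmetic. The room size and price here are hypothetical (the article doesn't give the survey's actual numbers); the point is only the unit conversion that trips people up:

```python
# Hypothetical carpet question: a 12 ft x 15 ft room, carpet at $20 per
# square yard. One square yard is 9 square feet -- the step most people miss.
length_ft, width_ft = 12, 15
price_per_sq_yd = 20

area_sq_yd = (length_ft * width_ft) / 9
cost = area_sq_yd * price_per_sq_yd
print(f"{area_sq_yd:.0f} square yards -> ${cost:.0f}")   # 20 square yards -> $400
```

This is exactly the kind of "everyday digits" proficiency Hacker argues schools should teach.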

Figure 6: Visualizing Data

A second activity involves discerning and analyzing trends. Each January the National Center for Health Statistics releases "Births: Final Data," whose rates and ratios range from the ages of parents to methods of delivery. The students were asked to look for patterns, and they found, for example, that women in Nebraska average 2.2 children while in Vermont the figure is 1.6. Hacker also has his students analyze tables that track changes over time.

Finally, Hacker and his students discuss how math can help reorganize the world, and how they live their lives, in ways that make more sense. For example, the class debated how our weeks should be measured: if we had a ten-day week, each day with ten hours, should there be a three-day weekend, or should an "off-day" go in the middle of the week? This is just one of the plethora of topics Hacker brings to his class to debate using math.

Hacker closes his piece with "the law of the excluded middle," a phrase he applies to the problem in our math curriculum. Arithmetic is taught quite well in grade school, but once students move on to geometry, algebra, and eventually calculus, their proficiency in basic mathematics begins to decline. Some students handle this progression well, but, according to Hacker, far too many are left behind struggling. Thus the assumption that all this math will make us more numerically adept is incorrect, as "it often turns out that all those X's and Y's can inhibit becoming deft with everyday digits."

The Evolution of Baseball Statistics
Caitlin Levine

Analyzing baseball statistics has changed dramatically over the last 20-25 years. The re-evaluation of a player's potential, value, and production was brought mainstream by the publication of Moneyball: The Art of Winning an Unfair Game by Michael Lewis. The book chronicles the rise of the Oakland Athletics under then-general manager Billy Beane in 2002-2003, when a small-market team with a low budget was able to outperform large-market teams by analyzing unusual statistics. Referred to as sabermetrics, this approach of evaluating players through complex scrutiny has changed the methods of many talent evaluators and general managers with great success; even large-market teams with high payrolls have adopted elements of it.

Before this period, analyzing pitchers and hitters was straightforward for scouts. If a hitter hit home runs, drove in runs (RBIs), and had a high batting average, he could compete for the Triple Crown, a title that virtually everyone at the time felt was the goal of any hitter. Similarly, pitchers were deemed successful based on wins, ERA, and strikeouts. The value of a player to a team was based on individual statistics and did not account for a player doing the little things that helped the entire team win. The statistics also did not account for luck. For example, a pitcher who played for a team with great hitters and won games despite not pitching particularly well would be rated highly by the old system. But most people would not want to sign such a player, since he may do worse the following year without as much run support. These dilemmas, and many more, led to the renaissance of sabermetrics. Many details have been added to player evaluation, a few of


which will be discussed below.

One example is OPS, or on-base plus slugging. The on-base percentage measures how often a player reaches base per plate appearance; it places a similar value on reaching base by a hit, a walk, or even being hit by a pitch, because the simple act of getting on base is what matters. This statistic is coupled with the slugging percentage, which measures power and consistency by looking at how many bases a player gets per at bat (not per hit). Coupling the two gives a good picture of a player's contribution to a team: can a player get on base, can he get on consistently, and when he does hit, how powerful are his hits?

Similarly, there are many new ways to determine the value of a pitcher to his team. The earned run average (ERA) is the average number of earned runs a pitcher yields per nine innings pitched. It does not account, however, for the stadium a pitcher plays in: a low ERA in a "pitchers' park" is not nearly as impressive as a low ERA in a "hitters' park." Thus the so-called ERA+ is an adjusted ERA that takes the stadium into account. The WHIP (walks and hits per inning pitched) may be one of the most valuable ways to assess a pitcher; it essentially measures how many base runners a pitcher surrenders per inning. Over a season, and certainly over a career, a pitcher who gives up many base runners will likely give up runs; he can escape a jam now and then, but not consistently. Possibly the most complex and most valuable statistic is WAR (wins above replacement), which measures how valuable a player is to a team by estimating how many fewer wins the team would have if the player were traded or injured.

Many of the newer tools for determining player value are far superior to the older methods.
While the older methods focused purely on individual performance, and were colored by luck, the newer tools try to assess a player's long-term potential and overall contribution. In this way, baseball experts can look not only at whether a player is good, but at whether he is good for a team.
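The statistics discussed above can be written out as formulas. This sketch uses the standard definitions of OBP, SLG, and WHIP; the season line plugged in is entirely made up, just to show how the numbers combine into OPS:

```python
def obp(hits, walks, hbp, at_bats, sac_flies):
    # On-base percentage: times on base per plate appearance (simplified form).
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slg(singles, doubles, triples, homers, at_bats):
    # Slugging percentage: total bases per at bat.
    return (singles + 2 * doubles + 3 * triples + 4 * homers) / at_bats

def whip(walks, hits_allowed, innings):
    # Walks and hits per inning pitched: base runners surrendered per inning.
    return (walks + hits_allowed) / innings

# Hypothetical season line (made-up numbers, not any real player's).
on_base = obp(hits=180, walks=70, hbp=5, at_bats=550, sac_flies=5)
slugging = slg(singles=110, doubles=40, triples=5, homers=25, at_bats=550)
print(f"OPS: {on_base + slugging:.3f}")
print(f"WHIP: {whip(walks=45, hits_allowed=180, innings=210):.2f}")
```

Note how OBP treats a walk and a single identically, while SLG weights a home run four times as heavily as a single; OPS simply adds the two views together.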

The Math Behind the Irregularities in Waves
Rebecca Massel

University at Buffalo mathematician Gino Biondini and UB postdoctoral researcher Dionyssios Mantzavinos published a paper on January 27, 2016 in Physical Review Letters describing new mathematical ways to describe a wave. Their paper covers wave forms ranging from light waves in optical fibers to waves in the sea, answering a question that has perplexed scientists for years: what happens when a wave pattern has even the slightest irregularity?

Researchers have known since the 1960s that, in many cases, such minor imperfections grow and eventually completely distort the original wave as it travels over long distances, a phenomenon known as "modulational instability." The UB team added to this story by demonstrating, mathematically, that many different types of disturbances evolve to produce wave forms belonging to a single class, identified by their identical asymptotic state.

Biondini said, "Our research is, in a way, an extension of all the work that's come before... since Isaac Newton used math to describe gravity." According to Biondini, the first mathematical representation of waves came in the eighteenth century, when Jean le Rond d'Alembert used a wave equation to describe the transmission of light, sound, and water waves. Although this was an incredible discovery for its time, it was not foolproof. "The wave equation is a great first approximation, but it breaks down when the waves are very large -- or, in technical parlance, 'nonlinear,'" Biondini explained. The wave equation works for short to moderate distances but not as well for longer ones. For instance, if one sends an electromagnetic wave through an optical fiber across an ocean, the wave equation does not produce the right estimate; likewise, once a wave whitecaps, the equation no longer works correctly.

In the 250 years after d'Alembert, scientists continued to find better ways to understand waves. In the mid-20th century the nonlinear Schrödinger equation was introduced; it is used to characterize wave trains in many different contexts. Yet despite all these discoveries, until the UB team's January 2016 paper there was no full description of what happens when a wave interacts with a small imperfection. "After laying out the foundations in two earlier papers, it took us a year of work to obtain a mathematical description of the solutions," said Biondini. "We then used computers to test whether our math was correct, and the simulation results were pretty good -- it appears that we have captured the essence of the phenomenon." He reported that the next step is for the University at Buffalo team to work with experimental researchers to test whether their theories about small irregularities hold for real, physical waves.

Quine's Paradoxes
Sophia Rein

In 1961, the logician and philosopher Willard Van Orman Quine outlined three types of mind-bending paradoxes. The first type he described is the falsidical paradox, one that is based on a misconception and, after close analysis, proven to be false. Quine explained that falsidical paradoxes "pack a surprise but are seen as a false alarm once we solve the underlying fallacy." Some 2,500 years ago, the Greek philosopher Zeno of Elea created a falsidical paradox claiming that Achilles, the fastest runner in the world, could never pass a tortoise in a race, because the space between them can be divided infinitely. Sounds ridiculous? In real life, of course Achilles could catch up to the tortoise! However, Zeno could not prove it mathematically. In his time, no one understood that an infinite collection of numbers can add up to a finite sum, and it was not until roughly 2,000 years later, when Isaac Newton invented calculus, that anyone could mathematically prove that Achilles overtakes the tortoise. Greek philosophers were baffled: they knew from experience that a fast runner overtakes a tortoise, yet they believed that even as Achilles approached it, there would still be an infinite number of points between them.

Next, Quine addressed veridical paradoxes, a category of paradoxes initially considered false but later proven true. One example is the famous Monty Hall Problem, based on the television show Let's Make a Deal. The problem goes like this: you must choose between three doors. Behind one door is your dream car, but behind the other two are "zonks," prizes you don't want. After you choose a door, Monty opens one of the two doors you did not pick, revealing what is inside; the door Monty opens always has a zonk. At this point, Monty asks if you would like to change the door you chose. It seems like you have a 50/50 chance of winning the car whether or not you switch. However, you should always switch, because switching doubles your odds of winning! How? When the game began, you had a 1/3 chance of picking the correct door and a 2/3 chance of picking a door with a zonk behind it.
Once Monty reveals that one of the doors you did not choose has a zonk behind it, the single door to which you could switch carries that whole 2/3 chance. Switching doesn't guarantee a win, but it does make you twice as likely to succeed. This is a veridical paradox: you are quick to assume a 50% chance of choosing the correct door once Monty opens one, but in actuality you have a 2/3 chance of winning if you switch.

The last type is the antinomy paradox, the most well-known type. These paradoxes can't be true or false; rather, they "create a crisis in thought," according to Quine. The Grandfather Paradox is an example: if you went back in time and killed your grandfather, you would never have been born, so how did you go back in time to kill your grandfather in the first place? This confusing problem with no solution is an antinomy.

Quine passed away in 2000. He was a brilliant mathematician and an influential philosopher who helped us understand paradoxes in a detailed yet simple way, and his work was valued greatly by later mathematicians and philosophers.

The Art Gallery Problem
Josephine Schizer

If you've ever been to an art museum, you've seen the omnipresent guards stationed in each room to protect the art. However, I'm guessing you haven't paid much attention to exactly how many guards are stationed in each room and exactly where they are standing. This is an important decision for a museum: the goal is to minimize the number of guards (to save money) while having enough that every piece of art is adequately protected. This is where math comes in. In 1973, Victor Klee posed the art gallery problem: in a simple polygon (one whose edges don't cross each other) with n sides, representing a room in an art gallery, how many guards do you need so that, between them, the guards can see every part of the room at once?

There are several considerations to take into account. First, where will the guards stand? In most art galleries, the guards stand at the vertices of the polygon representing the

floor, or at least on the edges; you don't usually see guards standing in the very center of the room. Second, the number of guards required will obviously depend on how many sides the polygon has. Third, the basic idea is that a guard can see anywhere in the room not blocked by walls: in geometric terms, if you can draw a line from the guard's position to a spot in the room without the line leaving the polygon, the guard can watch that spot.

Mathematician Steve Fisk proved that an art gallery never needs more than n/3 guards (rounded down), with guards stationed only at the vertices of the polygon and n representing the number of sides. Václav Chvátal had proved this earlier, but Fisk's proof is much simpler. In Fisk's proof, first divide the polygon into triangles by connecting existing vertices. A guard stationed at any vertex of a triangle can see everything within that triangle, because triangles are convex: any line segment drawn from a vertex to a point in the triangle stays inside it. Therefore, if you divide the polygon into triangles and place a guard at one vertex of each triangle, the entire gallery will be guarded. However, this is still not the most efficient arrangement. For the next step, color the vertices of the triangles in three colors so that each triangle has one vertex of each color. Count the vertices of each color and choose the color with the fewest. Since every triangle has one vertex of that color, placing a guard at every vertex of that color (blue, in the picture below) guards the entire gallery, and that color can account for at most one-third of the vertices.

Sources: Freiberger, Marianne. "The Art Gallery Problem." Plus Magazine, 3 Dec. 2015, plus.maths.org/content/art-gallery-problem.


"Art Gallery Problem." Wikipedia, Wikimedia Foundation, 12 Nov. 2018, en.wikipedia.org/wiki/Art_gallery_problem#Fisk's_short_proof.

Figure 7: Fisk's Method
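Fisk's coloring argument can be sketched in code for the easiest special case: a convex polygon, triangulated as a "fan" from vertex 0. This is only a toy version (Fisk's proof handles any simple polygon, where the triangulation and coloring take real work), but it shows the key step of posting guards at the least-used color:

```python
def guard_vertices(n):
    """Fisk-style guard placement for a convex n-gon (simplified sketch).

    Fan-triangulate from vertex 0 into triangles (0, i, i+1), 3-color the
    vertices so every triangle sees all three colors, then post guards at
    the color used fewest times.
    """
    # Vertex 0 gets color 0; the remaining vertices alternate colors 1 and 2,
    # so each fan triangle (0, i, i+1) has one vertex of each color.
    colors = [0] + [1 if i % 2 == 1 else 2 for i in range(1, n)]
    counts = {c: colors.count(c) for c in (0, 1, 2)}
    rarest = min(counts, key=counts.get)
    return [v for v in range(n) if colors[v] == rarest]

for n in (6, 9, 12):
    guards = guard_vertices(n)
    print(n, guards, len(guards), n // 3)   # never more than n // 3 guards
```

In this convex toy case the rarest color is always vertex 0 alone, matching the fact that one guard can watch an entire convex room; the point of the argument is that the same pigeonhole step bounds the count by n/3 for any simple polygon.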

Fisk's method gives a number of guards that always suffices, but not necessarily the minimum: you will never need more than n/3 guards, but fewer may be enough. Furthermore, the applications of this question extend far beyond guards in art galleries. In a technological world, the same reasoning applies to the number of security cameras needed to watch a room. The problem becomes more complex when expanded to three-dimensional space (can the guards see the ceilings?), or when the guards may stand somewhere other than the vertices, or walk around rather than stay in place. Ultimately, the art gallery problem is interesting to consider and has many interesting applications.

The Cosmic Distance Ladder
Akiva Shlomovich

If I told you that, on average, it takes about 8 minutes and 20 seconds for light from the Sun to reach Earth, could you then tell me the distance from the Earth to the Sun? Well, as

some of you know, the equation for distance is (roughly) X = VT, where X is the displacement (distance), V is the velocity of the object, and T is the time it takes the object to traverse that distance. In this case the velocity is that of light, which travels at 299,792,458 m/s in a vacuum. Because the speed of light is given per second, we convert 8 minutes and 20 seconds into 500 seconds. Plugging the numbers in, we get X = 299,792,458 m/s × 500 s ≈ 1.5 × 10^11 meters. This is close, but not 100% accurate, because 8 minutes and 20 seconds was an average. The distance looks a lot nicer written as ~93 million miles, or one astronomical unit (AU).

Light is the best way for us to determine immense distances, and it is used all the time. From Earth, we can (and have) bounced light off other celestial bodies, waited for it to return, and used the measured time to compute the distance from Earth to the object in question. We could measure the distance between any two celestial bodies in meters, miles, Planck lengths, or any other unit of length you can think of, but most are impractical. Because light travels at the fastest velocity in the universe, it gives a more practical unit: the distance light travels in one year is called a light year. There are exceptions, of course. You wouldn't measure the distance from Earth to Mars in light years; at its closest approach, Mars is only about 3 light minutes away (54.6 million km is more practical).

But are there other methods not involving the speed of light? Yes, but it might be useful to start small and then go big. Starting with the Earth: how would you measure its circumference? The Greek mathematician Eratosthenes asked this same question and figured it out with a stick.
It was known that if you placed a stick in the ground in the city of Syene at the right time of year, the Sun would be directly overhead and the stick would cast no shadow. Eratosthenes wondered whether the same thing happened in Alexandria, put a stick in the ground there, and found that there was a shadow. Using basic geometry, Eratosthenes measured an angle of about 7.2 degrees between the stick and the light ray. Because the Sun is so far away, the rays reaching Earth are essentially parallel, which means Syene and Alexandria are about 7.2 degrees apart on the 360-degree circumference of the Earth. Eratosthenes then had someone measure the distance between the two cities, which came out to about 5,000 stadia, or roughly 800 km. We can now set up a proportion using the formula for the length of an arc: arc length = (X/360) × C, where X is the angle that subtends the arc and C is the circle's circumference (2πr). Here the arc is 800 km and the angle is 7.2 degrees, so 800 = (7.2/360) × C. Rearranging gives C = 800 × 360/7.2 = 40,000 km. Earth's circumference is really 40,075 km, but that's remarkably close considering we used geometry and sticks!
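Both calculations above, the light-travel distance to the Sun and Eratosthenes' proportion, fit in a few lines:

```python
# Light-travel time to the Sun: X = VT.
SPEED_OF_LIGHT = 299_792_458          # m/s, in a vacuum
travel_time = 8 * 60 + 20             # 8 minutes 20 seconds, in seconds

distance_m = SPEED_OF_LIGHT * travel_time
print(f"Earth-Sun distance: {distance_m:.3e} m")    # ~1.5e11 m, about 93 million miles

# Eratosthenes: 7.2 degrees of the 360-degree circle spans ~800 km,
# so the circumference C satisfies 7.2/360 = 800/C.
angle_deg = 7.2
arc_km = 800
circumference = arc_km * 360 / angle_deg
print(f"Earth's circumference: {circumference:.0f} km")    # 40000 km
```

Both answers come out slightly off from the modern values (1 AU and 40,075 km) because the inputs, an average light-travel time and an estimated 800 km between the cities, are themselves approximate.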

Figure 8: Stick Method

There are so many ways of measuring distances in the universe, too many to write about, so I'm going to choose a method which I think leads to more questions than answers; that's what makes it amazing. But first, a bit of background. Light is made of photons, which behave as waves and are therefore distinguished by their wavelength, denoted by the Greek letter lambda (λ). I'm sure you've heard of the different wavelengths even if you didn't know it: together they make up the electromagnetic spectrum.

If light travels from one point to another, its wavelength will not change (setting aside any other external influences). But what if the space between the points is expanding, and one point is moving away from the other? Then the wavelength of the photon is going to change. When the shift comes from two objects moving apart as space expands, it is called cosmological redshift. Because of the expansion of space, our galaxy is moving away from (some) other galaxies at varying velocities, so the light from a receding galaxy gets stretched by the expansion of space and its wavelength gets "redder." Edwin Hubble used this to formulate what is now known as Hubble's law, written v = H₀D, where v is how fast the galaxy is moving away, H₀ is the Hubble constant (about 70 km/s/Mpc), and D is the distance between the two points. Mpc stands for megaparsec, one million parsecs; a parsec is another unit of distance equal to 3.26 light years. So if a celestial object is 1 Mpc away from us, it is moving away from us at 70 km/s. To solve for distance, rearrange to get D = v/H₀. If we are told that a galaxy is receding at 6,410 km/s, then D = 6,410/70 ≈ 91.6 Mpc (the km/s cancel out).

Figure 9: Light Wavelengths
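The Hubble's law arithmetic above is a one-line rearrangement either way:

```python
H0 = 70  # Hubble constant, in km/s per megaparsec (the value used in the text)

def recession_velocity(distance_mpc):
    # Hubble's law: v = H0 * D
    return H0 * distance_mpc

def distance_from_velocity(v_km_s):
    # Rearranged: D = v / H0
    return v_km_s / H0

print(recession_velocity(1))                     # 70 km/s at 1 Mpc
print(round(distance_from_velocity(6410), 1))    # 91.6 Mpc
```

Note that H₀ is only known approximately (measurements cluster around 67-74 km/s/Mpc), so distances found this way inherit that uncertainty.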

There is a plethora of other methods for finding the distance between us and another object, including Cepheid variable stars, the color spectrum of a star, and parallax. I highly recommend looking into these methods so that you can begin to understand how vast the universe really is.

