Contents

01 Editorial
02 Paradox of Predictability An insight into a super-computer predicting the future, and a human tackling it at every step
03 The Surprise Test Problem Learn how to foresee a surprise test
04 Fermat’s Enigma Know the proof of “Fermat’s Last Theorem” and the controversies surrounding it
05 Paul Erdos 90% of the world’s mathematicians have an “Erdos Number”. Do you?
06 Game Theory Analyze what gives you the maximum profit in any situation using the “Game Theory”
07 The Joker - Making of a Game A look at how the Joker manages to outwit “Game Theory” and make his own game
08 The Quest for Pi A discussion of one of the most interesting constants in the world of Mathematics
09 God’s Equation e^(iπ) + 1 = 0? An equation that contains the most important numbers in Mathematics
10 Pyramids of Giza The application of Mathematics behind one of the seven wonders of the ancient world
11 Eiffel Tower Making of the complicated, yet stupendous structure of the Eiffel Tower
12 Base Rate Fallacy Mathematically reverse the percentages for anything
13 Birthday Paradox Calculate the chance of your birthday clashing with someone else’s, just to see the chance disappear
14 Four Color Theorem In continuation with the previous issue, the article will take your knowledge about the theorem a step further
15 Conway’s Game of Life Is this one really a game?
16 Crossword Solve it and take away the prize!
Founder’s Day, 2011 Issue
The Infinity
Editorial

Another exciting and rather exhilarating term is coming close to its end. Unlike the others so far, this term ends not only my SC Form in school, but also my tenure as the Editor-in-Chief of this publication. Having been a part of the Infinity for four years now, I have seen it evolve from an eight-page black-and-white issue to a twenty-page colored one. The first colored issue of this publication came under the leadership of Devashish Singhal. He gave the magazine a different outlook and set the benchmark for the years to come. This issue can be termed the fruit of sheer hard work put in by every member of the editorial board: from the juniors who contributed the articles to the seniors on the board who edited them, and themselves wrote a few, all deserve great credit. The standard set by Devashish has been maintained in this issue. We have covered topics of great diversity which will hopefully interest a wider cross-section of readers, and a lot of original ideas have come in this time for the articles, the design and the cover page. The first article, ‘Paradox of Predictability’ by Revant Nayar, was originally a research paper of his; I must commend the way he later edited the research paper into an easily comprehensible article. Moving on, the concept of game theory has been discussed, along with the interesting example of the ‘prisoner’s dilemma’ and a detailed description of how the opening scene of the movie “Batman: The Dark Knight” portrays game theory.

Furthermore, factually fascinating articles in the form of “Fermat’s Enigma” and “God’s Equation” provoke the reader’s curiosity to explore every aspect of the subject. The mathematics involved in two of the world’s most sophisticated structures, the Eiffel Tower and the Pyramids of Giza, has been discussed in detail in this issue. To think that technology advanced enough to raise the Pyramids existed in such ancient times still baffles me. Later in the publication there are articles like "Base Rate Fallacy" and "Birthday Paradox". These articles are amusing, as you can experiment with the information given on yourself and the world around you. For the first time in the Infinity, continuity has been maintained, in the form of Udbhav Agarwal's take on the much-debated “Four Color Theorem”; in the previous issue as well, he wrote an article revolving around the same topic. Towards the end, “Conway's Game of Life” gives you an insight into a game-cum-simulation program that runs on the computer. The end showers you with a mind-boggling fact about the prime numbers and a very perplexing crossword that will get you thinking. This remarkable issue wouldn't have been possible had it not been for my colleague Aviral Gupta. He is the one behind all the hard work that went into designing the Infinity; he sat up late at night for hours making this issue a success. A word of thanks goes to him… I hope you enjoy reading this issue!
Paradox of Predictability - Revant Nayar In this article, I shall expound on a paradox that exposes an inherent logical fallacy in a deterministically complete model of the universe. This may not be a completely new concept, and it does not yet have significant ramifications for theoretical physics. Nevertheless, I hope to illumine certain unexplored facets of the paradox in question, and to draw fresh conclusions about it. I will also remove any doubt regarding its authenticity as a paradox, doubt which may arise from certain errors of intuition.
This paradox, which we shall for all practical purposes refer to as the 'paradox of predictability', rules out the possibility of a deterministically complete model of the universe being created within the confines of the universe. Determinism is rooted in the belief that all events in the universe, without exception, are determined by a series of fixed 'laws'. Hence if one knows the state of a system at time t1, as well as the laws governing its evolution, one can fathom its state at t2 (where t2 > t1). Hence, as Stephen Hawking claims, had there been a supreme mathematician at the beginning of the universe who knew its state then and its laws in their entirety, he could pinpoint the state of the universe at any subsequent point of time. Such a mathematician would be able to predict all events that occur in all parts of the universe at all points of time. The field of science has dedicated itself, as far as possible, to the creation of such a model that is aware of all conditions and laws. Now, for the purpose of discussion, let us assume that this model is in fact created, in the form of a super-computer which knows all laws of the universe and its state at a given point in time. Such a computer can then, in theory, predict all events that occur at any subsequent point in time, including the events that occur on Earth. The computer in question will be able to pinpoint the exact
occurrence of storms, hurricanes, accidents, scientific and academic developments, and all events right up to the thoughts and actions of individual human beings. Please note that I here assume complete and unhindered predictability and the absolute rule of law in the universe, which even rules out any notion of 'free will' that is not causally pre-determined. In such a case, of course, the computer will inform you of your course of action in the future: all the thoughts, actions and events that will henceforth characterize your life. Now we come to the key question, which spawns most of the intuitive problems. What if you voluntarily choose to defy the computer's predictions? For instance, if the computer calculates that you will throw the pencil in your hand at the wall, precisely three seconds from the current instant, it seems perfectly natural that you may withdraw your hand and choose not to. But can you actually defy the predictions of such a computer and render them wrong? Does this not mean that the computer is wrong? As soon as you change your mind about throwing, what if it changes its prediction too? Yet you may then choose to defy its new prediction, which it must take into consideration as well. Clearly, the computer must completely cognize the effect that it will have on your decision. Yet this is of little help if you are intent on defying all predictions made by the computer. If it predicts action A and you choose to conduct the mutually exclusive action B, then it might seem inevitable that the computer must be proven wrong.
An argument that might seem appealing in such a case is that your circumstances would somehow force you to follow the predicted course of action; to choose action A in
spite of all pretensions of free choice. Yet I shall show why this is not so. The difficulty here remains that your choice is influenced by various factors, including the effect of the computer's prediction. For the computer to judge the circumstances affecting your choice, it must fathom all other circumstances in addition to the effect of its own prediction on your choice. Intuitively, the fallacy is clear: the computer is making a prediction based on the very prediction it has not yet made. Now let the computer's choice be represented by 'c' and all the other factors affecting it by 'o'. Then the factors affecting your decision, as computed, will be (c + o). However, 'c' in turn will be determined by (c + o), and so on. Thus the total set of factors affecting your choice would be o + (o + (o + … + c)), expanding without end. This poses two problems. The first, of course, is the presence of an infinitely regressive series, which cannot be calculated by any finite machine. The second problem is that the entity 'c' remains undefined as it stands. Please remember that 'c' cannot be defined in reference to itself; it must be defined with respect to certain different factors, which we do not in this case have. Even if we consider a universe in which only you and the computer exist (i.e., o = 0), we end up with the statement 'c = c'. Although it is a true statement, it tells
us nothing about the value that 'c' will assume, and hence about the subsequent predictions of the computer. Thus we can see that the existence of such a machine is not logically possible. We can next attempt to conceive of this hypothetical computer as a physical object in the actual universe. Such a machine must consist of components storing information that replicates the actual processes in occurrence. Even then, there must be a quantum of information that accounts for the behavior of the machine itself, which is where the problems of infinite regression and the undefined 'c' set in. One can now observe that the infinite regress would mean the breakup of each subsequent 'c' into a 'c' and an 'o', continuing to infinity. This violates the rule of the finite divisibility of matter, which states that neither matter nor information (which is also carried by matter) is infinitely divisible. Hence we can claim that this computer cannot exist, either in theory or in physicality. Logic does, however, permit its existence outside the confines of the universe, where it cannot affect any event inside the universe. Even though this idea does not deal any significant blow to determinism, it does introduce a rather interesting logical paradox.
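The regress of the undefined 'c' can even be made literal in a few lines of code. The sketch below is my own toy illustration, not anything from the article: a prediction defined in terms of itself never bottoms out, and a finite machine gives up.

```python
# A toy rendering of the regress: the computer's choice 'c' is
# determined by (o + c), which contains 'c' itself, and so on.
# Translated directly into a function, the definition never terminates.
import sys

def combine(o, c):
    # How the other factors 'o' and the prediction 'c' jointly determine
    # the choice (any function would do for this sketch).
    return o + c

def prediction(o):
    """c = combine(o, c): a choice defined in terms of itself."""
    return combine(o, prediction(o))   # must already know c to compute c

sys.setrecursionlimit(500)             # fail fast instead of hanging
try:
    prediction(0)                      # even with o = 0, no value for c
except RecursionError:
    print("no finite machine can pin down c")
```

Any concrete `combine` would do; the failure comes from the self-reference, not from the arithmetic.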
The Surprise Test Problem - Revant Nayar This problem is one of those that seem baffling, and indeed appear to be a paradox, at first glance. Is it possible for a teacher to give a 'surprise test', which can be held on any day from Monday to Friday? Ostensibly yes, because the mere concealment of the day on which the test is to be held would render it a 'surprise' examination. By definition, a surprise examination is one that occurs without the prior knowledge of the students. For a start, however, we might observe that Friday cannot be the test date in any case. That is because if Thursday passes without a test, then the students know that the test must occur on Friday. This prior knowledge would mean that the test is no longer a 'surprise' test by definition. Hence we have ruled out Friday as a potential date for the surprise examination. Now, however, Thursday is the last possible date for the
surprise test. Yet you will now observe that the same problem arises, so the test cannot occur on Thursday either: if Wednesday passes without a test, the students will know that the test must occur on Thursday. Hence the test cannot occur on Thursday, making Wednesday the last possible day for the surprise test to be held. By now it is easy to observe that this regression will continue until there is no possible date left on which such an examination can in fact be conducted. Thus, by definition, the surprise test itself is not possible. Is it truly so, then, that such a test cannot occur at all? Not really: if, for instance, the test papers are simply handed out on Wednesday, there is not much the students can do to preempt the occurrence. Mull over the problem; it will give you some food for thought.
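The students' elimination argument is mechanical enough to run as a program. Here is a minimal sketch; the week and the elimination rule encoded are simply the article's setup, nothing more:

```python
# Backward induction from the surprise test problem. A day is ruled out
# if, once every earlier candidate day has passed, the test's date could
# be deduced in advance -- which is always true of the last remaining
# candidate day.

days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

candidates = list(days)
ruled_out = []
while candidates:
    last = candidates.pop()     # the current last possible day...
    ruled_out.append(last)      # ...can be foreseen, so it is eliminated,
                                # making the previous day the new last one

print("eliminated in order:", ruled_out)
# The regression empties the whole week: on this argument, no day
# can host a genuinely surprising test.
```

Running it eliminates Friday first, then Thursday, and so on down to Monday, exactly tracing the article's argument.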
Fermat's Enigma - Ujjwal Dahuja Fermat's conjecture was perhaps the most difficult mathematical problem until a few years back. It is one of those problems which can be understood even by a seventh grader. Even though the problem is very easily comprehensible, the very thought of it being the toughest mathematical problem makes people think twice about it. The conjecture was put forward by Pierre de Fermat and hence bears his name. Fermat claimed to have proved it, though we have no surviving evidence that he actually did. In all probability, he made a subtle mathematical mistake, considering the length of the proof Andrew Wiles later came up with. Fermat's conjecture states
that the equation x^n + y^n = z^n has no solutions in positive integers for any n greater than 2. Fermat was working on Diophantus' Arithmetica, a mathematical classic of very high stature, when he put forward this conjecture. Fortunately for the mathematicians to follow, the margins of 'Arithmetica' were large enough to enable Fermat to make little notes of what he thought. It was a time when mathematicians had a knack of teasing each other with the mathematical problems they had managed to decipher. Fermat too believed in this idea of keeping mathematical knowledge private, an idea that originated in Pythagoras's time. Pythagoras's circle of mathematicians was an intensely private clique, which did not open up its discoveries until many years after Pythagoras's death. Pythagoras was of the opinion that the world of numbers was limited to real numbers; the whole idea of complex and imaginary numbers came into mathematics, in Egypt and India, after his death. Coming back to Fermat's conjecture: Fermat wrote in the margin of 'Arithmetica' that the case n = 3 could not satisfy the condition, after which he wrote the line most annoying to the mathematicians who followed: “I have discovered a truly marvelous demonstration of this proposition which this margin is too narrow to contain.” With these words, Fermat put forward a conjecture which every aspiring mathematician aimed to prove. The reason it is called a conjecture is that there is no evidence that Fermat
himself had a perfectly mathematical solution to this problem. It was Andrew Wiles, a professor at Princeton University, who finally announced a proof in 1993. Young Andrew Wiles first saw the problem at the age of 10 in a public library in his town. He dreamed of solving it, but it was an innocent dream, of the kind every ambitious ten-year-old has. It was in 1975 that Wiles began his career as a graduate student at Cambridge University, where he encountered John Coates, his Ph.D. thesis supervisor. Coates recommended that Andrew base his thesis on elliptic curves. This decision would eventually prove to be a turning point in Wiles' career and give him the techniques he would require for a new approach to tackling Fermat's Last Theorem. Elliptic curves are neither ellipses nor curved, and hence they were sometimes referred to as elliptic equations. Andrew Wiles himself said, “They (elliptic equations) are very far from being completely understood. There are many apparently simple questions I could pose on elliptic equations that are still unresolved. In some way all mathematics that I've done can trace its ancestry to Fermat, if not Fermat's Last Theorem.” Gradually, Wiles moved into a phase of proving the Taniyama-Shimura conjecture, which asserts that elliptic equations and modular forms are effectively one and the same thing. It became known to the mathematical world that if the Taniyama-Shimura conjecture could be proved, Fermat's Last Theorem would follow. So Andrew Wiles took it upon himself to try every method, and to discover newer ones, to reach a proof of the conjecture. Interestingly, Wiles used proof by mathematical induction and proof by contradiction, which we students are familiar with. Wiles also worked with the newly developed Kolyvagin-Flach method in his approach to Fermat's Last Theorem.
The theorem was on the verge of being proved, and booksellers had begun to sense it coming. After an isolation of seven years, Andrew Wiles came up with the most profound and important revelation to the mathematical world. It was in a lecture hall in Cambridge, Andrew's birthplace, that Wiles, over two days of lecturing, announced to the world that he had proved Fermat's Last Theorem. Mathematicians were left astonished and perplexed. Centuries of trial had condensed to a proof at last. It was perhaps the first time that a mathematician had made it
to the front page of newspapers. However, the tale didn't end there. The team of experts analyzing the mathematical validity of the proof found a glitch. Wiles thought that he could fix the error within a few months, but he faltered in his attempts. The error lay in the Kolyvagin-Flach methodology; no matter how hard Wiles tried, he couldn't seem to repair the proof. On top of that, Wiles received an email (it was a hoax) claiming that a solution to the theorem had been found. Devastated and defeated,
Wiles had almost given up on the proof. He had resolved to abandon it if it could not be completed by his wife's birthday. In an almost melodramatic manner, Wiles did manage to correct the proof days before that birthday, by linking the Iwasawa theory with the Kolyvagin-Flach approach. Two hundred pages of pure mathematical genius had proved Fermat's Last Theorem! The world's most difficult mathematical problem had been solved, and it was Andrew Wiles who had managed to prove it.
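Fermat's equation is simple enough that anyone can probe it by computer, although no finite search could ever amount to a proof. The sketch below, with an arbitrary search bound of my own choosing, finds Pythagorean triples for n = 2 and, as Wiles' proof guarantees, nothing at all for n = 3:

```python
# Brute-force search for solutions of x**n + y**n == z**n in positive
# integers. For n = 2 this finds Pythagorean triples; for n > 2,
# Fermat's Last Theorem says there is nothing to find.

def fermat_search(n, limit=60):
    """All (x, y, z) with x**n + y**n == z**n and 1 <= x <= y, z <= limit."""
    nth_powers = {z ** n: z for z in range(1, limit + 1)}   # z**n -> z
    solutions = []
    for x in range(1, limit + 1):
        for y in range(x, limit + 1):
            z = nth_powers.get(x ** n + y ** n)
            if z is not None:
                solutions.append((x, y, z))
    return solutions

print(fermat_search(2)[:3])   # Pythagorean triples, e.g. (3, 4, 5)
print(fermat_search(3))       # prints []: no counterexample up to the bound
```

The dictionary of n-th powers turns the innermost test into a single lookup, so the search stays fast even for larger bounds.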
Have You Got an Erdős Number? - Aditya V. Gupta Paul Erdős was a Hungarian mathematician who published more papers than any other mathematician in the history of mathematics. However, he published these papers in collaboration with a large number of other people. Combinatorics, graph theory, number theory, classical analysis, approximation theory, set theory and probability theory were the main topics Erdős worked on. But what makes him truly famous is his 'eccentric' thinking, which led to the development of the Erdős number. Interestingly, possessions were of little importance to Erdős, who managed to fit a major portion of his belongings into a suitcase, to suit his wandering lifestyle. The most distinguishing aspect of his personality was that he rarely ever stayed in his own house. Erdős spent a major portion of his life traveling, flitting between scientific conferences and the homes of colleagues all over the world. He would show up at a soon-to-be collaborator’s doorstep and announce: 'my brain is open'. He would then stay with the person and generally leave after writing a few papers in collaboration with the host. Sometimes he would even ask the person he was working with whom he should visit next! One of his colleagues, Alfréd Rényi, said, “a mathematician is a machine for turning coffee into theorems”, and Erdős completely fit that definition of a mathematician, because he drank copious quantities of coffee. There was something unusual about the man. He would conjure up his own vocabulary with references to mathematics. He referred to children as 'epsilons' because in mathematics, particularly in calculus, an arbitrarily small positive quantity is commonly denoted by the Greek letter 'ε'. He referred to women as 'bosses' and men as 'slaves'. For him, the people who had stopped doing math had 'died', while the ones who had physically
died had 'left'. Alcoholic drinks were 'poison' while music was 'noise'. People who had married were 'captured' while those who had divorced were 'liberated'. To give a mathematical lecture was 'to preach', while to give an oral exam to a student was 'to torture' him or her! At the very least, he was an innovator and a non-abider of social customs. On a more serious note, Erdős' greatest contribution to mathematics has been the Erdős number. In fact, if I may correct myself, it was his friends who created the Erdős number, as a humorous tribute to Erdős, whose sense of humour and belief in mathematics kept him going. The people with whom Erdős had worked personally were assigned an Erdős number of 'one', while the people who had worked with those with an Erdős number of one were given an Erdős number of 'two', and so on. Erdős himself was the only individual assigned an Erdős number of 'zero'. The Erdős number is still looked upon as an honour, and mathematicians who have an Erdős number are treated with great respect and recognized as distinguished mathematicians. Approximately 200,000 mathematicians have an assigned Erdős number, and it is estimated that 90% of the world’s active mathematicians have an Erdős number smaller than 8. As always, there is an exception to the allocation of Erdős numbers: Baseball Hall of Famer Hank Aaron has an Erdős number of 1 because he autographed the same baseball as Erdős when Emory University awarded them honorary degrees on the same day. As the story goes, the Erdős number was most likely first defined by Casper Goffman, an analyst whose own Erdős number is 1. Goffman published his observations about Erdős' prolific collaboration in a 1969 article titled 'And what is your Erdős number?'
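Formally, an Erdős number is just a shortest-path distance in the collaboration graph, which a breadth-first search computes directly. The tiny graph below is invented purely for illustration; these are not real co-authorship records:

```python
# Erdős numbers as shortest-path distances in a collaboration graph,
# computed by breadth-first search over a made-up toy graph.
from collections import deque

# Hypothetical co-authorship links, NOT real collaboration data.
coauthors = {
    "Erdos":   ["Goffman", "Renyi"],
    "Goffman": ["Erdos", "Alice"],
    "Renyi":   ["Erdos"],
    "Alice":   ["Goffman", "Bob"],
    "Bob":     ["Alice"],
}

def erdos_numbers(graph, root="Erdos"):
    """Erdős number (co-authorship distance from `root`) of each reachable author."""
    number = {root: 0}
    queue = deque([root])
    while queue:
        person = queue.popleft()
        for partner in graph[person]:
            if partner not in number:
                number[partner] = number[person] + 1
                queue.append(partner)
    return number

print(erdos_numbers(coauthors))
# Direct collaborators get number 1, their collaborators 2, and so on.
```

On this toy graph, Goffman and Renyi get Erdős number 1, Alice gets 2, and Bob gets 3; the real computation over 200,000 mathematicians works exactly the same way.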
Game Theory - Ritesh P. Shinde Imagine two criminals held on suspicion of committing a crime, though the police have no proof against them. To solve the mystery, the police use a trick: the two criminals are kept apart and each is individually asked to give evidence against the other. The one who does so will be freed, while the other will be convicted of the crime. If both of them refuse, both will get a small punishment, which is equivalent to winning the game for them. On the other hand, if they betray each other, both will be punished, as the police will then have proof against each of them. Here each prisoner has two options, and he has to make his decision, in order to win the game, without knowing the other prisoner's choice. What should they do? This is the most widely known example of a situation involving game theory. Developed by John von Neumann, a renowned mathematician, game theory is a branch of applied mathematics closely related to economic theory. It studies and explains the behavior of 'economic agents' and the interactions amongst these agents under certain circumstances. These strategic situations are colloquially referred to as 'games' in mathematics and in economics, and the agents involved in the games are referred to as 'players'. The games studied under this theory are situations in which an individual's success depends upon the choices made by him as well as by others. Thus game theory mathematically analyzes the behavior of individuals in order to determine the optimal course of action for a particular player in a competitive situation guided by fixed rules. By 'fixed rules' I mean that some factors need to be assumed beforehand in order to apply game theory. Firstly, one has to assume the basic conditions necessary for its application, i.e.
every player has a number of choices before him, from which he picks the one that maximizes his gain. Secondly, every combination of choices made by the players leads to a well-defined end state of the game. One also has to assume that all the players of the game are 'economically rational': an economically rational player is one who can assess outcomes, calculate the paths to them, and make the choices that lead to his preferred outcome. One more assumption must be taken into account, namely that each player's aim is to maximize his profit. One also has to set aside social concepts such as belief, ethics, trust and honesty, as they play no role in game
theory. The example given at the beginning of this article is called the 'prisoner's dilemma'. In this game, each prisoner aims only to minimize his individual punishment, and is concerned solely with maximizing his own payoff without any worry about the other prisoner's. (Note that this is not quite a 'zero-sum game', a mathematical situation in which one player's gain or loss is exactly balanced by the other's; in the prisoner's dilemma, both players can lose together.) A 'Pareto-suboptimal solution' occurs in this case: a rational outcome in which the players betray each other for their own selfish good, even though the outcome would be more beneficial had they cooperated with each other. Game theory is not a theory of chance but of strategy. Traditional applications of game theory often seek equilibria in these games. The most famous equilibrium concept is the Nash equilibrium, developed by John Nash, in which one assumes that every player knows the equilibrium strategies adopted by all the other players, and no player can benefit by changing only his own strategy. These equilibrium concepts are custom-made, adapted to wherever they are applied. When first developed, game theory was applicable only in certain cases, which included zero-sum games, owing to limitations in the mathematical framework of the theory. The situation at present is completely different, thanks both to advances in mathematics and to the many renowned mathematicians across the globe who have worked on the theory and, in effect, increased its applications. Game theory can be applied to games ranging from friends choosing a place to go for dinner to businesses competing in a market.
As the theory can study how people interact, today it is applied in fields such as the social sciences, biology, engineering, computer science, psychology and, most notably, management and economics. Advanced concepts of game theory can be found at the heart of foreign policies, nuclear weapons strategies, international relations and political situations. I personally find game theory interesting because its principles are basic but its applications far-reaching. Sooner or later, one's life will come to revolve around game theory, and one will knowingly or unknowingly apply it in everyday life.
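The prisoner's dilemma from the start of the article can be written down and solved mechanically. The sentence lengths below are illustrative numbers of my own choosing, not anything specified in the article; the code then checks every pair of choices for a Nash equilibrium:

```python
# The prisoner's dilemma as a payoff table, with a brute-force check
# for Nash equilibria. "S" = stay silent, "B" = betray the partner.
# Entries are (years for prisoner 0, years for prisoner 1);
# each prisoner prefers FEWER years.
YEARS = {
    ("S", "S"): (1, 1),     # both silent: a small punishment each
    ("S", "B"): (10, 0),    # the lone betrayer walks free
    ("B", "S"): (0, 10),
    ("B", "B"): (5, 5),     # mutual betrayal: both convicted
}

def best_response(player, other_choice):
    """The choice minimizing this player's sentence, the other's choice fixed."""
    def sentence(choice):
        profile = (choice, other_choice) if player == 0 else (other_choice, choice)
        return YEARS[profile][player]
    return min("SB", key=sentence)

def nash_equilibria():
    """Profiles where neither prisoner can improve by deviating alone."""
    return [(a, b) for a in "SB" for b in "SB"
            if best_response(0, b) == a and best_response(1, a) == b]

print(nash_equilibria())    # mutual betrayal, the Pareto-suboptimal outcome
```

Betraying is each prisoner's best response whatever the other does, so the only equilibrium is mutual betrayal, even though mutual silence would leave both better off.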
The Joker: Making of a Game - Vinayak Bansal As noted in the previous article, game theory has wide applications, and we can notice strategies based on it in the world around us. The entire film Batman: The Dark Knight is a sequence of games and an illustration of strategic thought; there is a lot of “innovation” going on in the film. At times the movie portrays game theory even when it is completely unclear what the “game” actually is, as in the opening scene showing the bank robbery. So here is a detailed analysis of the movie's opening scene. The movie starts with the planned robbery of a bank by six robbers. In the beginning, the robbers are sitting in a car, discussing how the loot will be divided. It is apparent from their dialogue that they are not satisfied with the plan:

Driver: Three of a kind. Let's do this.
Passenger: That's it? Three guys?
Driver: Two guys on the roof. Every guy gets a share. Five shares are plenty.
Passenger: Six shares. Don't forget the guy who planned the job.

The robbers feel the need for fair division; they don't like it that the sixth person, the Joker, gets to keep an equal share for unequal work. The issue of fair division being addressed here is central to game theory. Historically, too, the problem of fair division was among the very first to be approached using game theory. Fair division is about reason and calculated thought. This topic also raises two other concerns: how can self-interested people be trusted, and how can cooperative outcomes be achieved from diametrically opposed moves? As far as the movie is concerned, the robbers eventually agreed on an equal division for unequal work, banking on everything going according to plan.
Unfortunately, such a decision should not be made without taking into account what game theory says about incentives and the possible land mines surrounding trust. This brings me to one of the key concepts of game theory: ABC's trust in XYZ is not based on XYZ's agreement with ABC's plan. ABC can trust XYZ only when it is in XYZ's self-interest to help ABC. Had the robbers considered these issues, their
outcomes would have been different. A little thinking ahead, and reasoning in reverse, would have revealed to the robbers the traps the Joker had built into his plan for his own benefit and the robbers' loss. A discussion of the flaws in detail follows. The first and foremost flaw in the Joker's plan concerns fair division: the Joker designs the plan so that each robber gets the chance to kill a fellow team member in order to increase his own share. Once a robber has performed his job, he is of no further value to the team. The Joker instructs the teammates to kill those whose jobs are done, arranged in such a manner that no particular robber foresees his own death. This game would have been different had the robbers committed crimes together repeatedly; in that case, an even split could have been sustainable. But the Joker has his own plans in the movie, so such a game is not played. The robbers fail to notice that they could fall victim to the same deceit they play on others. To prove this point, take another instance from the opening scene, involving the alarm guy and the second robber on the rooftop. The second robber kills the alarm guy as soon as the alarm guy completes his job of disarming the alarm, then proceeds to perform his own task, little knowing that the same thing is about to happen to him. Once the second robber disarms the bank vault, he too gets killed:

Third Robber: Where's the alarm guy?
Second Robber: Boss told me when the guy was done, I should take him out. One less share, right?
Third Robber: Funny. He told me something similar.
Second Robber: What? No! No! [Gets shot]

From the Joker's own plan and the game that he designs, it is clear that the Joker wanted everyone dead. Towards the end of the robbery scene, we realize that the Joker has been on the job from the very beginning.
His plan involves two more deaths before he escapes in the bus. Overall, the Joker was able to bribe the weaker robbers one by one into a systematic, sequential killing of all the robbers. The entire game is twisted by the Joker in the end, when he alone walks away with all the money.
The Infinity Founder’s Day, 2011 Issue
The Quest for Pi - Aditya V. Gupta The circle is a very enchanting shape. It has held captive mankind's often-wavering interest for a very long time. People of all kinds, professional mathematicians, amateurs and the simply curious, have felt the desire to "square the circle", i.e., to construct a square with the same area as a given circle. To do this, one must know the ratio of a circle's circumference to its diameter. Since the time of Leonhard Euler this ratio has been denoted by the Greek letter π (pi). In the ancient world, nearly all peoples used the number 3 for the ratio of a circle's circumference to its diameter, in other words for pi, as evidenced by a passage from the Old Testament: 'Also, he made a molten sea of ten cubits from brim to brim, round in compass, and five cubits the height thereof; and a line of thirty cubits did compass it round about.' Where a higher degree of accuracy was needed, a range of values was used. The Babylonians used 3.125, or 3 1/8, as found on one of their clay tablets. The ancient Egyptians made use of a solved problem stating that the area of a circle nine length units in diameter equals the area of a square whose side is eight units of length, which gives 3.160 49…. The Greeks made a very important contribution to our knowledge of pi in the form of the "method of exhaustion", which is attributed to Eudoxus. In the 2nd century B.C., Hipparchus computed an extensive table of chords and proposed the value of pi as 3.141 666…, correct to three decimal places. Archimedes, regarded as the greatest scientist-mathematician of antiquity, applied his own method for the calculation of arc length to determine pi, and even though no single definite answer was given, he arrived at a very close approximation.
He calculated that 3.1408… < π < 3.1428…, stating that "the ratio of the circumference of a circle to its diameter is less than 3 1/7 but greater than 3 10/71." For the next thousand years mathematics progressed very slowly and, instead of being regarded as a mainstream subject, was treated as an offshoot of astronomy. In the southeastern part of the world, however, mathematics progressed at a considerably faster pace. Aryabhatta, in 499 AD, found that π = 3.1416…, while after him Brahmagupta used the square root of 10, which is 3.1622…. Closely following was Bhaskara's proposition of the value of pi as 3.141 56…. China's Liu Hui published that pi is equal to 3.141 59…, while Tsu Chung-Chih said that pi is equal to 3.141 592 9…, which is surprisingly correct to six decimal places. Fibonacci then studied the 96-gon, from which he deduced that pi is equal to 3.141 818…, which was correct to only three decimal places. In Persia, Jamshid Masud al-Kashi in 1424 published his "Treatise on the Circumference" with the result of his calculations for an inscribed 3×2^28-gon. He found pi to be 3.141 592 653 589 793 25…, astonishingly correct to sixteen decimal places, surpassing all previous determinations of pi. With this value al-Kashi calculated the error in the computation of the Earth's circumference as 0.7 mm, smaller than the thickness of a horse's hair, as an old saying goes. In the 1600s, with the discovery of calculus by Newton and Leibniz, a number of substantially new formulas for pi were discovered. One scheme employed the trigonometric identity π/4 = tan⁻¹(1/2) + tan⁻¹(1/3); Shanks used it to compute pi to 707 decimal digits in 1873. Alas, it was later found that this computation had an error after the 527th decimal place! In a roughly similar fashion, pi can be computed from the formula π/6 = sin⁻¹(1/2). Newton himself used this particular formula to compute pi. He published 15 digits, but later sheepishly admitted, "I am ashamed to tell you to how many figures I carried these computations, having no other business at the time." Such formulae gave values of pi close to the true one, but they were not very efficient for computing it.
But these formulae were important because of their theoretical implications, and they have been the basis for notable research questions, such as the Riemann zeta function hypothesis, which is being researched even today. In today's era, pi itself barely poses a challenge for mathematicians: it has been computed beyond its 6.4 billionth decimal place. Even so, for most calculations the value of pi is simply taken as 3.14.
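As a quick check, the arctangent identity quoted above really does give pi. The sketch below sums a truncated Gregory series for arctan as a simplified stand-in for the hand computations of Shanks's era (the function name and the choice of 30 terms are illustrative, not from the article):

```python
import math

def arctan_series(x, terms):
    # Gregory series: arctan x = x - x^3/3 + x^5/5 - x^7/7 + ...
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# The identity from the article: pi/4 = arctan(1/2) + arctan(1/3)
pi_est = 4 * (arctan_series(1 / 2, 30) + arctan_series(1 / 3, 30))
print(pi_est)  # agrees with math.pi to double precision
```

Because both arguments are well below 1, the series converges quickly; 30 terms already exhaust double precision.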
God’s Equation - Mohit Gupta Being one of the most astounding formulas in mathematics, Euler's identity is popularly known as God's equation. Some people go as far as calling it the mathematical equivalent of Da Vinci's Mona Lisa or Michelangelo's David. Named after Leonhard Euler, the formula establishes the deep relationship between trigonometric functions and the complex exponential function. According to the formula, for any real number x:
e^(ix) = cos(x) + i sin(x)
In the above formula, e is the base of the natural logarithm and i is the imaginary unit, called iota; cosine and sine are trigonometric functions (the argument x is to be taken in radians, not degrees). The formula applies even if x is a complex number. In particular, with x = π, or half a turn around the circle:
e^(iπ) = cos π + i sin π
Since cos π = -1 and sin π = 0, it can be deduced that e^(iπ) = -1 + 0i, which brings us to the identity
e^(iπ) + 1 = 0
Courtesy: http://en.wikipedia.org/wiki/euler_identity/
The identity successfully links five fundamental mathematical constants: 1. The number 0 (the additive identity). 2. The number 1 (the multiplicative identity). 3. The number pi (3.14159265…). 4. The number e (the base of the natural logarithm, which occurs widely in mathematics and scientific analysis). 5. The number i (the imaginary unit of the complex numbers). The formula describes two equivalent ways to move in a circle, and one of its major applications is in complex number theory.
The function e^(ix) can be interpreted as tracing out the unit circle in the complex plane as x ranges through the real numbers. Here x is the angle that a line connecting the origin with a point on the circle makes with the positive real axis, measured counter-clockwise in radians. Points in the complex plane are represented by complex numbers written in Cartesian coordinates, and Euler's formula provides a means of conversion between polar coordinates and Cartesian coordinates. The polar form reduces the number of terms from two to one, which simplifies the mathematics of multiplication and powers of complex numbers. Any complex number z = x + iy, and its conjugate z̄, can therefore be written as:
z = x + iy = |z|(cos ø + i sin ø) = re^(iø)
z̄ = x - iy = |z|(cos ø - i sin ø) = re^(-iø)
In the two equations,
x = Re{z}, the real part,
y = Im{z}, the imaginary part,
r = |z| = √(x² + y²), the magnitude of z,
ø = arg z = arctan(y/x) (taken in the correct quadrant), the argument of z, i.e. the angle, counter-clockwise and in radians, between the X-axis and the vector form of z.
Using Euler's formula, the logarithm of a complex number can be defined. For this, we need the definition of the logarithm as the inverse operation of exponentiation:
a = e^(ln a) and e^a · e^b = e^(a+b),
both valid for any complex numbers a and b. Therefore, it can be written that:
z = |z|e^(iø) = e^(ln|z|) e^(iø) = e^(ln|z| + iø)
for any number z ≠ 0. On taking the natural logarithm (with base e) of both sides, it can be seen that:
ln z = ln|z| + iø
This in fact can be used to define the complex logarithm. The logarithm of a complex number is therefore a multi-valued function, as ø is multi-valued. Finally, the other exponential law,
(e^a)^k = e^(ak),
which holds for all integers k, together with Euler's formula, implies several trigonometric identities as well as de Moivre's formula.
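The identity and the polar-form relations above are easy to verify numerically. A small sketch using Python's standard cmath module (an illustration added here, not part of the original article):

```python
import cmath
import math

# e^(i*pi) + 1 should be zero, up to floating-point rounding.
z = cmath.exp(1j * math.pi)
print(abs(z + 1))  # a number on the order of 1e-16

# Polar <-> Cartesian conversion for z = 3 + 4i, as described above:
r, phi = cmath.polar(3 + 4j)   # r = |z| = 5, phi = atan2(4, 3)
print(cmath.rect(r, phi))      # back to (3+4j), up to rounding

# The complex logarithm ln z = ln|z| + i*phi:
print(cmath.log(3 + 4j))       # equals log(5) + i*atan2(4, 3)
```

The tiny residual in the first line is floating-point error, not a flaw in the identity.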
The Pyramids of Giza - Varun Gupta Courtesy: http://www.eworldtraveltourism.com/wp-content/uploads/2011/03/Pyramidsand sphinx.jpg
We call ourselves 'developed'. We call ourselves 'modern' and 'way ahead of our ancestors'. I looked at the beautiful picture of the Pyramid of Giza and was awed by what the ancient Egyptians had created. But when I looked at the complicated mathematics and the kind of accuracy achieved 4,500 years back, when there was no one to award the Egyptians Nobel prizes, my jaw dropped. The mathematical knowledge the Egyptians had in those days is bedazzling. The structure and dimensions of the Pyramid of Giza are based on components of mathematics such as the golden ratio, the ratio pi and Pythagoras' 3-4-5 triangle. In the structure, besides the angles being perfectly 90 degrees and the lengths being exactly equal where needed, the perimeter of the pyramid's square base is almost exactly equal to the circumference of a circle whose radius equals the height of the pyramid. These relations and accurate measurements were not 'just there'; they were conscious choices, conscious because the Egyptians found these dimensions 'aesthetically pleasing'. The Golden Ratio pervades many parts of nature, even the proportions of our arms. Artists like Leonardo Da Vinci have used it in their artwork, but the Egyptians were there first. In the Pyramid of Giza, the slant height, the distance from the apex to the middle of one side, and half the length of the base form a 'phi relationship', or golden ratio, of approximately 1.618 (whose reciprocal is 0.618).
This is not all. The architecture of the king's chamber itself had intricate mathematics incorporated in it. A perfect command of right angles allowed the builders to incorporate Pythagoras' 3-4-5 triangle in the dimensions of the chamber. At the corner on the other side of the chamber is a 2-√5-3 right triangle (note that 2² + (√5)² = 3²). It took mathematicians of what we call the modern world, thousands of years later, to conclude that because the floor is inset from the walls, the walls exhibit two heights: one to the floor surface and one to the true base of the wall. This is what allows the formation of the two triangles at both ends of the chamber. Imagine a world without the geometry box we so casually carry in our school bags, a world without calculators, or the many theorems we can so easily look up at the click of a button. Certain university studies propose that the ancient Egyptians, and some other peoples of the world, had mathematics imbibed in them, i.e. they were born with a 'subconscious sense of mathematics' which allowed them to create such wonders without the sophisticated technology we have today. These studies might well turn out to be true, and the Egyptians might have had a certain sense of mathematics 'imbibed' in them, but does this mean we as human beings are degrading intellectually and in terms of mental capability? Can mathematics be a measure of this degradation?!
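The base-perimeter and circle relation described above can be checked with commonly cited approximate dimensions of the Great Pyramid. These figures are assumptions for illustration; the article itself quotes no measurements:

```python
import math

# Commonly cited approximate dimensions (assumed, not from the article):
base = 230.4     # side of the square base, in metres
height = 146.6   # original height, in metres

perimeter = 4 * base                  # perimeter of the square base
circumference = 2 * math.pi * height  # circle of radius equal to the height
print(perimeter, circumference)       # the two agree to within about 0.1%

# Equivalently, perimeter / (2 * height) approximates pi:
print(perimeter / (2 * height))
```

Whether the builders intended pi or merely a convenient slope is debated; the arithmetic coincidence itself is real.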
Eiffel Tower The 1,063-foot (81-storey) monument designed and built by the French engineering genius Gustave Eiffel in 1889 became a benchmark for designers around the world to live up to, remaining the world's tallest man-made structure for a staggering 41 years. It is still the tallest in Paris and is also the most visited monument in the world. But most importantly, it is an amazing showcase of the mathematics worked out by Gustave. In the last issue of the Infinity (Issue No 9; Spring Term, 2011), I wrote about the mathematics used to perfect the forts built for military purposes over the last few centuries. In that same period, another memorable architectural event took place when Gustave Eiffel was assigned the daunting task of building a grand entrance and memorial for the 1889 World's Fair held in Paris. The task demanded immense attention from the Frenchman, given the threats the building would face once erected, and a number of problems obstructed Gustave in the process. In this article I will tackle the question of the solution Gustave actually devised to overcome the extreme wind conditions of Paris. The force of the oncoming winds was truly the most daunting challenge that lay in Gustave's path. Nevertheless he was determined to build the world's first structure to surpass the 300 metre mark, even though, as Professor Patrick Weidman of the University of Colorado pointed out more than a century later, Gustave could rely only on practical experience and not on equations and mathematical functions. Weidman was among the first to investigate the matter. He came across the problem when he received a book titled Advanced Engineering Mathematics in 2001. Its introduction contained a non-linear integral equation posed by a French Eiffel Tower aficionado, who had challenged engineers and mathematicians across the world for a solution through a website.
After some perseverance, Weidman did find a solution to the equation in terms of mathematical functions, but it was not applicable to the Eiffel Tower because the curvature of the solution did not match that of the tower. Two years later, Weidman was introduced to Professor Iosif Pinelis at Michigan Technological University. Pinelis had volunteered to help Weidman decode the equation, but came to the conclusion that all possible solutions to it would have to be parabola-like, or 'explode to infinity' at the peak. Both professors, and as
Courtesy: http://www.minutetravelguide.com/what-to-visit-in-paris/
- Devesh Sharma
Pinelis pointed out, 'any high school geometry student', knew that the profile of the Eiffel Tower curves inward, not outward. Hence Chouard's equation and theory were completely falsified by Pinelis and Weidman's analysis, supported by proofs from the MTU faculty. Pinelis himself came up with a theory that explained the creation of the blueprints for the tower. He claimed that Gustave had planned to use the tension of the elements of the structure itself to counterbalance the pressure of the incoming winds. Weidman tracked down a copy of a statement from Gustave to the French Society of Civil Engineers, dated March 30th, 1885 and written in French. With the help of a professional translator, Weidman could finally read the thesis on which Gustave had based his blueprints. It turned out that Pinelis had correctly reconstructed what Gustave had assumed over a century ago. Gustave had determined the tangents to the skyline profile from various horizontal sections before the final assembly of the tower, so that the incoming wind forces would be cancelled out. He had concluded that this was an appropriate solution, as it would reduce both the total weight of the monument and the surface area exposed to the wind. Using the information derived from the statement, Weidman and Pinelis derived a non-linear integro-differential equation, along with solutions that yielded the true shape of the Eiffel Tower. The French Academy of Sciences' Elsevier journal, Comptes Rendus Mécanique, published the collaborative work of the two professors under the title Model Equations for the Eiffel Tower Profile: Historical Perspective and New Results.
Base Rate Fallacy - Nipun Batra 'We never look at the entire picture.' And this is exactly what leads to our greatest errors. Surprisingly, such false reasoning has a mathematical counterpart, better known as the 'Base Rate Fallacy'. Despite the fact that I first came across the term while reading a highly complicated piece of scientific philosophy, the Base Rate Fallacy is simply a logical fallacy and, as a concept, extremely easy to understand. The Base Rate Fallacy, also known as base rate neglect or base rate bias, is an error that occurs when the conditional probability of an event is assessed without taking into account its 'base rate' or 'prior probability'. It is best understood through a relevant example. Let us assume a city of 1 million people, in which 100 people are known to be HIV positive. The base rate probability of a random citizen being HIV positive would be 0.0001, and the base rate probability of a random individual being HIV negative would be 0.9999. In order to prevent the disease from spreading any further, the city council starts free tests at a local city hospital. These tests have two failure rates of 1% each: 1. If the blood of a person who has contracted HIV is tested, the test shows positive 99% of the time, but mistakenly shows negative 1% of the time (i.e. it has a false negative rate of 1%). 2. If the blood of an HIV-negative person is tested, the test shows negative 99% of the time, but mistakenly shows positive 1% of the time (i.e. it has a false positive rate of 1%). So the failure rate of the test is always 1%. Suppose somebody's blood tests positive. What is the chance that they have contracted the virus? Someone committing the 'base rate fallacy' would incorrectly claim a 99% chance, because 'the failure rate' of the blood test is always 1%. Although it seems to make sense, this is actually bad reasoning.
The calculation below will show that the chance of such a person being HIV positive is actually near 1%, not near 99%. The fallacious reasoning arises from confusing two different failure rates: the 'number of HIV negatives per 100 positive blood tests' and the 'number of negative blood tests per 100 HIV positives' are unrelated quantities, and there is no reason one should equal the other. In fact, the two do not even have to be roughly equal.
To show that they do not have to be equal, consider a test which, on testing HIV-positive blood, shows a positive 20% of the time and fails to do so 80% of the time, while on HIV-negative blood it works perfectly and always shows a negative. If this second test shows a positive, the chance that it failed by showing positive on HIV-negative blood is 0%; yet when it tests an HIV-positive patient, the chance that it fails to show positive is 80%. So here 'HIV negatives per positive blood test' is 0% but 'negative blood tests per HIV-positive patient' is 80%. Now let us go back to our original test, the one with a 'positive blood tests per HIV-negative patient' rate of 1% and a 'negative blood tests per HIV-positive patient' rate of 1%, and compute the 'HIV negatives per positive blood test' rate. Imagine that the blood of the city's entire population of one million people is tested. About 99 of the 100 HIV positives will give a positive blood test, and so will about 9,999 of the remaining 999,900 HIV-negative people. Therefore about 10,098 people will test positive for HIV, among whom only about 99 have actually contracted the virus. So the probability that a person testing positive actually has HIV is only about 99 in 10,098, which is less than 1%, extremely far below our initial guess of 99%. The base rate fallacy is only fallacious in this example because there are many more HIV negatives than positives. If the city had about as many HIV positives as negatives, and the false-positive and false-negative rates were nearly equal, then the probability of misidentification would be about the same as the false-positive rate of the test. These special conditions do hold sometimes: for example, about half the women undergoing a pregnancy test are actually pregnant, and some pregnancy tests give about the same rates of false positives and false negatives.
In such a case, the probability of a false result per positive test will be nearly equal to the probability of a false positive for a woman who is not pregnant. This is why it is so easy to fall into the fallacy: it gives the correct answer in many common situations. The base rate fallacy is widely applicable in the real world, especially in problems such as detecting criminals in a mostly law-abiding population: the extremely small proportion of targets makes the base-rate fallacy very much applicable.
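The city example above can be reproduced by direct counting. A short sketch, where the counts 99 and 9,999 follow from the article's 1% failure rates:

```python
# Reproducing the article's city of one million, all tested.
population = 1_000_000
infected = 100

true_positives = 99        # 99% of the 100 infected test positive
false_positives = 9_999    # 1% of the 999,900 healthy test positive

total_positive_tests = true_positives + false_positives  # 10,098 positives
p_infected_given_positive = true_positives / total_positive_tests

print(total_positive_tests)       # 10098
print(p_infected_given_positive)  # ~0.0098, i.e. just under 1%
```

This is Bayes' rule done by counting: the prior (100 in a million) drags the naive 99% down to under 1%.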
In certain psychological experiments, students were asked to estimate the GPAs of hypothetical students after being given statistical information about GPA distributions. However, the students were found to ignore this information when they were also given descriptive information about a particular student, even when it had no relation to school performance. Such experiments are used to argue against interviews in college admissions, on the claim that basic statistics are better than interviewers at picking successful students. Similar
arguments are also posed by economists, who argue that brokers are committing the same mistake: because the performance of any individual stock cannot be distinguished from chance movements of the market, professionally picked stock portfolios do no better than ones picked at random. Thus the Base Rate Fallacy shows how a small logical error, a little statistics and simple probability can explain, or solve, such real-life complications.
Birthday Paradox - Shivam Goyal I once asked a group of friends what the most important day of the year was for them. Some said Independence Day; some said Holi; many others named some other random day; but rarely did anyone claim it to be their birthday. Isn't one's birthday the most important day? When two people meet, they inevitably exchange names, but seldom do they compare birthdays. When they do, they are often very surprised if their birthdays match, because of the common belief that it is very unlikely for two people to share a birthday. What they don't know is that the probability of two people sharing a birthday turns out to be 50% in a group of 23 people and 99% in a group of 57. The majority of people who hear this are shocked by the revelation; many even go to the extent of claiming that the statement is invalid and not possible under any set of random circumstances. The above phenomenon is called the birthday paradox or the birthday problem. The reason it is so surprising is that the probability of two random people having the same birthday is only 0.27% (1/365). Even if one person compares his birthday with, say, 20 people, the probability of a match is still less than 5%. But in a group of 23 people the statistics change, and the probability shoots up to 50%. This is primarily because each of the 23 people is now comparing his birthday with each of the other 22. Each single comparison has only a tiny chance of success, but the comparisons pile up: the total number of times two people compare their
birthdays turns out to be a triangular number: 22+21+20+19+……….+2+1 = 253. With 253 comparisons, each failing to match with probability 364/365, the chance of no match at all is (364/365)^253 ≈ 0.4995, so the chance of at least one shared birthday is already about 50%. The probability can be calculated exactly using the following example. Suppose one has a big calendar on a wall showing 365 days. The first person who walks in signs against the date of his birthday. The next person who walks in has only 364 open days available to sign, so the probability of the two birthdays not colliding is 364/365. The third person who walks in has 363 open days, so the probability of his birthday colliding with neither of the others is 363/365. The probability of independent events all occurring is calculated by multiplying the probability of each event occurring on its own: P(1) × P(2) × P(3) × ... × P(n). Assuming there are n people in total, the generalized formula one obtains is: (364/365) × (363/365) × … × ((365-n+1)/365). This is the total probability of there being no clashes. According to the law of probability we study in our high-school textbooks: P(clashes) = 1 - P(no clashes). From the formulae given above, the probability with 23 people, when calculated, comes to 0.507 or 50.7%, which is roughly ½.
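The product formula above is easy to evaluate for any group size. A short sketch (the function name is a choice of this sketch, not from the article):

```python
def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p_no_clash = 1.0
    for k in range(n):
        # k-th person must avoid the k birthdays already taken
        p_no_clash *= (days - k) / days
    return 1 - p_no_clash

print(p_shared_birthday(23))  # about 0.507, the 50% threshold in the text
print(p_shared_birthday(57))  # about 0.99, as claimed
```

Note the function returns exactly 1.0 for n > 365, the pigeonhole case.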
Four Color Theorem Part - II - Udbhav Aggarwal Courtesy: http://people.math.gatech.edu/~thomas/FC/fourcolor.html
Consider the inverse of the four color theorem: in order to prove the theorem wrong, one has to draw a map that does not comply with the conditions stated in it. Such a map would disprove the four color theorem. Now the question arises: what would such a map have to show?
One fine day, Appel asked Haken, "What do we actually have to prove, to prove the Four Color Theorem?" Restating the theorem: "Any map can be colored with at most four colors so that no two adjacent regions receive the same color, provided regions meeting only at a point are not considered adjacent." Now, when we consider the theorem, three aspects of vital importance should be kept in mind while proving it. Firstly, the proof should be universal. Secondly, the proof should be relatively short and therefore easily understood. Thirdly, the proof should cover every case the theorem allows. The problem with Appel and Haken's proof was that it was extensive, so extensive that thousands of calculations were required, which only a computer could carry out. Consequently, when it was presented before the Royal Society of London, it was rejected. Most mathematicians believed that proving something as vast as the sky was impossible and fruitless. When the Royal Society did eventually accept Appel and Haken's proof, these mathematicians, disciples of strict mathematics, wanted to prove the proof wrong. According to them, the use of a computer in something as pure and natural as mathematics was morally incorrect and a disgrace to the subject itself. It was then that the simplest and yet most viable technique for attempting to disprove the four color theorem was established, one that could be understood by everyone:
Simply put, the map should have regions which, when adjacent to each other, require a minimum of five colors so that no two adjacent regions share a color. More precisely, the map should have five, and only five, regions, all of which share boundaries with one another. Such a map could never satisfy the four color theorem: since each region touches every other, five colors (one per region) would be required, not four. Is the four color theorem proved wrong by this construction? No, and anyone and everyone can see why. When we start drawing such a map, the first two regions come easily. The third is also fairly simple. The problem begins with the fourth. As seen in the diagram, the fourth region, in order to touch the first, second and third, has to enclose one of them; in this case it encloses region 2 (drawn from the other side, it would enclose region 1, and so on). Under such a condition, if a fifth region is drawn, it can never touch the enclosed region. Therefore the quest for a map with five regions all touching one another fails, and the same is true of every attempt that has been made to prove the four color theorem wrong. Yet the irony of it all is that one cannot even believe one's own eyes: the four color theorem, made so plausible for everyone by this simple experiment, still remains a question, still unproven by hand! The above illustration portrays a successful four-coloring of a map of the United States of America.
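There is also a standard counting argument, not given in the article, for why five mutually touching regions are impossible: such a map would mean the complete graph K5 (one vertex per region, one edge per shared boundary) can be drawn in the plane without crossings, and Euler's formula implies a simple planar graph on v ≥ 3 vertices has at most 3v - 6 edges. A sketch of the arithmetic:

```python
# K5: five regions, every pair touching.
v = 5
e = v * (v - 1) // 2   # every pair of regions touches: 10 edges
planar_limit = 3 * v - 6  # max edges of a simple planar graph on v vertices

print(e, planar_limit)  # 10 versus 9: K5 has too many edges to be planar
```

Ten edges exceed the planar limit of nine, which is the formal version of the "fourth region must enclose one of the others" observation above.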
Conway’s Game of Life - Harshil Agarwal It all originated when the great mathematician John von Neumann, in the 1940s, set out to design a hypothetical machine that could build copies of itself, one after the other. Von Neumann succeeded: the machine he found consisted of cells on a rectangular grid governed by very complex rules. To him, however, the machine still looked incomplete and problematic, and he left behind a problem that exercised the minds of many a mathematician until 1970. In that year a British Cambridge mathematician, John Horton Conway, successfully cracked the problem. Ever since, the solution has been known as the cellular automaton invented by Conway and named after him. Conway's Game of Life is also known simply as Life, and being more complex and less typical than most computer games, it is also described as an artificial life simulation. Life is not a 'game' in the straightforward sense, where people are required to play it: it involves no players, and there is no victory or loss. Life is a 'zero-player game', meaning that once the 'pieces' are placed in their starting order, the rules of the game determine everything that happens from then on. After the game has been initiated, no further inputs or commands are needed to sustain it. It is an observation-based game, in other words a simulation exercise: one interacts with it by giving the pieces an initial configuration, a fundamental start, and all the rest is observing how the game evolves. The game became very popular after it was mentioned in an article published in Scientific American in 1970. It is played on a two-dimensional orthogonal grid of square cells, each of which is in one of two states, living or dead.
Every cell interacts with its eight neighbors: the cells that are vertically, horizontally or diagonally adjacent. The Game of Life is full of surprises; sometimes it is impossible to look at the starting point and predict what will happen in the future. The only way to find out is to follow the rules of the game, which are as follows:
1) Any live cell with fewer than two live neighbors dies, of loneliness or under-population.
2) Any live cell with two or three live neighbors lives on to the next generation.
3) Any live cell with more than three live neighbors dies, due to overcrowding or over-population.
4) Any dead cell with exactly three live neighbors becomes alive, as if by reproduction.
The initial pattern put up constitutes the 'seed' of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed; births and deaths of cells occur at the same moment, and that moment is called a tick. Each generation is a pure function of the previous generation, and the same rules applied to successive generations ensure the continuity of the game. The rules were chosen after a great deal of deliberation: John Conway tried out many other possibilities before settling on rules that prevented the quick dying out or runaway reproduction of cells. The rules make the game quite unpredictable, and it is only by observing that we see whether a pattern will die out completely, settle into a stable pattern with a steady population, or even grow forever. This is fundamentally why the game is called a cellular automaton: a system in which rules are applied to cells in a regular grid. Conway chose the rules carefully to meet the following criteria:
1) There should be no population boom, meaning no explosive growth of cells. (This is the reason the variant called 3-4 Life was rejected.)
2) Initial patterns should have chaotic and unpredictable outcomes.
3) There should be potential for von Neumann universal constructors.
4) The rules should be very simple and easy to understand, while still adhering to the above criteria.
Keeping these rules in mind, the game can also be played with more complex patterns: there are rules that allow the game to be played on hexagons arranged in a honeycomb pattern, and rules where the cells have more than two states (imagine live cells with different colors).
Life is a very simple example of what is sometimes called 'emergent complexity' or a 'self-organizing system'. As noted above, it is a game of observation: it shows how easy and simple patterns can have very unpredictable and complex outcomes. It can help us understand how the beautiful petals of a rose, or the stripes of a tiger or a zebra, emerge from a single tissue growing inside an organism. It can also help us develop an understanding of the diversity of human, plant and animal life that has evolved on this Earth.
The Never Ending Primes - Shantanu Agarwal Have you ever wondered what the world's largest known prime number may be? During Elizabethan times, the largest known prime number was 524,287. It was unraveled by the checkerboard method: place a grain of rice on the first square of the board, then two on the second, then four on the third, and so on. The total after a certain number of squares is sometimes prime; for example, we obtain 7 after three squares and 31 after five. Till 1867, the Swiss mathematician Leonhard Euler was the record holder, with 2,147,483,647. As many say, records are made only to be broken: a 9.8 million digit prime number was discovered in 2006. Yes, a 9.8 million digit prime number! By the checkerboard method, this number would be obtained on the 32,582,657th square. It has also been calculated that it would take a staggering one and a half months just to read the number with the naked eye! But the biggest shocker is that this was not the accomplishment of a super computer, but the work of an amateur mathematician, Hans Michael Fluenich. In yet another twist of fate, the record was broken yet again at the University of California, Los Angeles, by Edson Smith's computers. The 12,978,189 digit number is presently the undisputed champion, open to any challenge in return for an astounding $200,000. So if you are planning to apply, here is your ticket to fame: 2^n - 1. As simple as the formula might seem, finding a prime larger than Smith's number can be a really daunting task. Nevertheless, best of luck!
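Numbers of the form 2^n - 1 are called Mersenne numbers, and every record prime mentioned in the article is one of them. For prime exponents they can be checked with the Lucas-Lehmer test; a short Python sketch follows (an illustration only — real record hunts use heavily optimized arithmetic, and the helper names here are my own):

```python
def is_prime(n):
    """Trial division; fine for small exponents."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def lucas_lehmer(p):
    """True iff 2**p - 1 is prime, for a prime exponent p."""
    if p == 2:
        return True  # 2**2 - 1 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents of the Mersenne primes with p <= 31. The list ends with 19
# (the Elizabethan record, 2**19 - 1 = 524,287) and 31 (Euler's record,
# 2**31 - 1 = 2,147,483,647).
exponents = [p for p in range(2, 32) if is_prime(p) and lucas_lehmer(p)]
```

The record numbers in the article are vastly larger (p = 32,582,657 and p = 43,112,609), but the test is exactly the same, only run with multi-million-digit arithmetic.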
Crossword

Across
1.] 10^(Googol)
2.] Angle (in radians) between the hour hand and the minute hand of a clock at 12:33 pm (approx.)
3.] Calculate approximately: 4 + 2 + 1 + 0.5 + 0.25 + 0.125 + … + 4/2^∞
4.] Calculate approximately: 1/∞
9.] Solve for 'a': a^3 = 5 * 10 + 15 / 20(25) * 16600 / 3

Down
1.] 10^2 x 10^2 (Product of the numbers of a telephone)
5.] If a^2 + b^2 + c^2 = 10, find: 2(a^2 + b^2 - c^2) + 2(a^2 - b^2 + c^2) + 2(-a^2 + b^2 + c^2)
6.] Objects or quantities that display self-similarity, technically, on all scales. Mirrors
7.] Calculate approximately: 1/0
8.] _________ Equation: -9768x^2(yd)^2 - 54yd^4 + 91x^2
Editorial Board

Editor-in-Chief: Vinayak Bansal
Editor: Revant Nayar
Senior Editors: Mohit Gupta, Nipun Batra, Varun Gupta
Graphics Editor: Aviral Gupta
Associate Editors: Aditya Vikram Gupta, Ujjwal Dahuja, Shivam Goyal
Correspondents: Devesh Sharma, Udbhav Agarwal
Chief-of-Production: Spandan Gopal Agrawal
Faculty Advisors: Anjan Chaudhury, Chandan Singh Ghughtyal
Special Thanks: Purnima Dutta, Arnab Mukherjee
Photo Credits: Ashutosh Goyal