The Scientific Harrovian - Issue 5-1, Jan 2020


SCIENTIFIC HARROVIAN

ISSUE

V-i

2019-20

HARROW INTERNATIONAL SCHOOL HONG KONG

‘SEEING SCIENCE’

BIOLUMINESCENCE: THE MYSTERY BEHIND THE LIGHT

VISIONARIES FOR VISION

THE TALE OF: THE GOLDEN RATIO, FIBONACCI NUMBERS & LUCAS NUMBERS

ROBOTICS & ARTIFICIAL INTELLIGENCE: HOW FAR CAN AND SHOULD AI TAKE US?

1


WELCOME!

Science covers a multitude of disciplines, and here is where anyone and everyone with a passion for writing, for science, and for exploration can convene. Edition V-i of the Scientific Harrovian encompasses topics covering three of the four major branches of Science - Mathematics and Logic (which IS considered a science!), Biological Science, and Physical Science. The universe has so many secrets that lie undiscovered, and through writing, editing, and illustrating, these students have spread knowledge just a bit further, which is the best thing that anyone could ask for in our never-ending search for truth. I am so proud of every single contributor for the time and effort they have put into making this edition a reality, something substantial that will remain with them into the future.

This term's theme is 'Seeing Science', which allowed authors to explore a multitude of interpretations and research a specific focus area, along with other articles concerning applications of Science in our modern world. The theme this year will be 'Science and the Senses', exploring how humans perceive the world and offering a glimpse into the lack of objective truth. I hope you enjoy the journey that this edition takes you on!

Yours sincerely,
Stephenie Chen
Y12, Gellhorn, Editor-In-Chief

MESSAGE FROM MRS CLIFFE

At the very start of the year, a pupil approached me to ask if we could publish the Scientific Harrovian more regularly instead of just once a year: she felt that such a valuable opportunity for pupils to research scientific ideas, develop their writing skills and enthuse other pupils about the world of Science ought to be available more often. After some discussion we came up with an exciting new development for the Scientific Harrovian: approximately each term we will publish a shorter electronic edition, and these will be combined into a bumper printed volume at the end of the Summer Term.

I am therefore delighted to present to you this edition of the Scientific Harrovian, which has been completely coordinated and managed by Stephenie Chen (Year 12, Gellhorn). Last year Stephenie was awarded an Academic Scholarship, with a focus on STEM, and she has certainly demonstrated her passion and leadership in this area. As well as taking a proactive lead with the Scientific Harrovian and Scholars' events, she has been involved in several other ventures including Robotics competitions, THIMUN, the Student Council and an English teaching trip to Shanxi over the summer.

My vision for Scholars is that they will make the most of opportunities to not only improve themselves, but also benefit the community around them. In only the first term, there have been more than 15 pupils involved in writing, editing, designing and illustrating, ranging from Year 6 to Year 13. I am very proud of what all the pupils have achieved, and I look forward to the Scientific Harrovian being an opportunity for pupils to fly the flag for Science and pupil leadership at Harrow Hong Kong well into the future.

Mrs E. Cliffe

Head of Scholars, Assistant House Mistress & Teacher of Chemistry


CONTRIBUTORS

Joy Chen

Year 9, Gellhorn Editor

Michelle Yeung Year 12, Keller Editor

Jett Li

COVER PHOTOGRAPH: Luca Micheli on Unsplash

Ryan Kong

Amber Liu

Year 12, Peel Editor

Audrey Yuen

Year 10, Anderson Editor

Year 12, Gellhorn Editor

Mike Tsoi

Pierce Duffy

Year 13, Peel Editor

Helen Ng

Jarra Sisowath

Year 12, Keller Editor

Year 13, Sun Author

Year 12, Peel Author

Year 9, Gellhorn Author

Year 12, Churchill Author

Josiah Wu

Hoi Kiu Wong

Joaquin Sabherwal

Edward Wei

Callum Begbie

Kayan Tam

Year 6, Shackleton Author

Year 10, Peel Author

Year 6, Darwin Illustrator


Year 12, Wu Author

Year 12, Wu Illustrator


CONTENTS

'SEEING SCIENCE'
Visionaries For Vision ......................................................................................................................... 6
Joaquin Sabherwal
Bioluminescence: The Mystery Behind the Light ............................................................................... 12
Hoi Kiu Wong
Blackbody Radiation and Planck's Constant ....................................................................................... 25
Edward Wei

APPLICATION OF SCIENCE
Robotics and Artificial Intelligence: How Far Can and Should AI Take Us? ..................................... 31
Jett Li
Bitcoin Explained ................................................................................................................................ 41
Josiah Wu
The Tale of: The Golden Ratio, Fibonacci Numbers, and Lucas Numbers ......................................... 45
Helen Ng
The Atom Through Time ..................................................................................................................... 55
Pierce Duffy

ABOUT THE SCIENTIFIC HARROVIAN

The Scientific Harrovian is the Science Department magazine, which allows scientific articles of high standard to be published. In addition, the Scientific Harrovian is a platform for students to showcase their research and writing talents, and for more experienced pupils to guide authors and to develop skills to help them prepare for life in higher education and beyond.

Guidelines for all articles
All articles must be your own work and must not contain academic falsities. The articles must be factually correct to the best of your knowledge. The article must be concise, with good English, and it must be structured appropriately. Any claim that is not your personal findings must be referenced.

Copyright Notice
Copyright © 2019 by The Scientific Harrovian. All rights reserved. No part of this book or any portion thereof may be reproduced or used in any manner whatsoever without the express written permission of the publisher, except for the use of brief quotations in a book review.

Joining the Team
Is there something scientific that excites you and that you'd like to share with others? Will you commit to mentoring budding Science writers? Do you have graphic design skills? Our team may have just the spot for you. Email the current Staff or Student Editor-in-Chief to apply for a position or for further information.


SECTION 1

SEEING SCIENCE

Photo by Anton Darius on Unsplash


Visionaries for Vision Joaquin Sabherwal (Year 6, Shackleton)

INTRODUCTION

We would all like to be young and healthy forever. However, as people inevitably age, their hair will begin to decolour, their joints will become stiffer, and their skin will begin to wrinkle. The wear and tear of our bodies is unavoidable as we get older, but perhaps one of the worst aspects of the ageing process is the possibility of losing one's sight.

HOW DOES THE EYE WORK?

Light reflects off objects and into our eyes. These rays of light enter the eye via the cornea, the pupil and the lens, and then focus on the retina. The retina is a light-sensitive tissue at the back of the eye, which sends signals through the optic nerve to the brain. It is these signals that become the images that we see [1].

Figure 1: Light reflects off objects and into our eyes [1][2]

The macula is the area of the retina that gives us our central vision, and the peripheral retina surrounds the macula and gives us our side vision. On the retina, there are about 130 million tiny cells called photoreceptors [2]. These rod- and cone-shaped cells are input cells; they turn light into electrical currents that travel from the output cells, through the optic nerve and to the brain. When light enters the eye, it strikes these cells, which triggers a chemical reaction that sends signals to the brain [3].

Figure 2: The Retina and Photoreceptors [3]



MACULAR DEGENERATION AND RETINITIS PIGMENTOSA

An overwhelming number of people in the world have a wide range of conditions that make them visually impaired, but the main cause of age-related vision loss is macular degeneration. It is a disease that gradually deteriorates the central vision (the macula), leaving a blur or a black hole at the focal point of your view. Retinitis pigmentosa is another common disease that damages the photoreceptors. This disease is not restricted to the elderly: it is genetically inherited and affects 1 in 4,000 people in the USA, regardless of age.

Both macular degeneration and retinitis pigmentosa attack the photoreceptors, and in the most advanced forms of these diseases, ordinary everyday tasks (like reading, driving, writing, or even just walking around safely) would be almost impossible without assistance. Although the damage these diseases cause to the photoreceptors is serious, the rest of the retina's neurons and the cells that transfer electrical signals usually remain intact, which means there is hope of curing this kind of vision impairment. If scientists can invent something that imitates the function of the photoreceptors, the signals can once again be transmitted to the brain in the way they are supposed to be. Thankfully, as technology advances, that is starting to happen.

THE INNOVATIONS SO FAR

Researchers have been attempting to invent prosthetic devices that can mimic and fulfil the purpose of the rod- and cone-shaped cells so that blind people might get their vision back. By implanting a tiny device that interacts with the tissue that makes up the retina, scientists have found a way to take the information captured by a camera lens embedded in a pair of glasses that the patient wears, and convert it into electrical signals that are transferred to the brain via the optic nerve. However, most prosthetic devices only provide limited vision, such as bright lights and high-contrast edges.

Argus II, a device created by a company called Second Sight, helps patients distinguish patterns of light and identify outlines of basic shapes and movement, therefore helping them to navigate the world more independently. The Argus II is implanted at the back of the eye by an experienced retinal surgeon and is accompanied by external glasses with a built-in camera and a small portable pocket computer, the vision processing unit (VPU). The VPU processes what the camera is seeing into instructions that are sent back to the glasses, and an antenna on the glasses transmits the information wirelessly to the implant in the eye. The implant consists of another antenna (which receives the information) and an array of electrodes. When the information is received, the electrode array fires small pulses of electricity which are sent along the optic nerve to the brain, and the patient learns to interpret the resulting patterns of light. Argus II has already been commercialised and is available for patients [4][5].

Figure 3: Argus II [5]
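The camera-to-electrode chain described above (camera frame, VPU processing, pulses on an electrode grid) can be sketched in code. This is purely a toy illustration: the function names, downsampling scheme and amplitude values are invented, not Second Sight's actual design; only the 6 × 10 grid reflects the Argus II's real 60-electrode array.

```python
# Toy sketch of a retinal-prosthesis pipeline:
# camera frame -> vision processing unit (VPU) -> pulse amplitudes per electrode.
# All names and numbers here are illustrative, not the real Argus II firmware.

def vpu_process(frame, rows=6, cols=10):
    """Downsample a grayscale frame (list of lists, values 0-255)
    to one brightness level per electrode in a rows x cols array."""
    h, w = len(frame), len(frame[0])
    levels = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # average the pixels that fall inside this electrode's patch
            ys = range(r * h // rows, (r + 1) * h // rows)
            xs = range(c * w // cols, (c + 1) * w // cols)
            patch = [frame[y][x] for y in ys for x in xs]
            row.append(sum(patch) // len(patch))
        levels.append(row)
    return levels

def to_pulses(levels, max_amplitude_ua=100):
    """Map brightness levels to electrode pulse amplitudes (microamps)."""
    return [[lv * max_amplitude_ua // 255 for lv in row] for row in levels]

# A bright vertical bar on a dark background: only the electrodes under the
# bar receive strong pulses, which is why patients see high-contrast edges.
frame = [[255 if 40 <= x < 60 else 0 for x in range(100)] for y in range(60)]
pulses = to_pulses(vpu_process(frame))
```

The coarse averaging step is also why such devices convey only spots of light and rough outlines: 60 electrodes cannot carry the detail of millions of photoreceptors.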



CRACKING THE CODE

One of the lead scientists working on restoring sight to the visually impaired is an American neuroscientist called Sheila Nirenberg. A neuroscientist is someone who studies the nervous system, including the brain, the spinal cord and nerve cells throughout the body. Her main focus is to decipher and learn the language that the brain understands. To understand the importance of this language, it is essential to examine how information is actually processed when the brain receives an image.

Figure 4: Healthy Retina [6]

Figure 4, from left to right: when an image enters the eye, it lands on the front-end cells of the retina, the photoreceptors; it is then processed through the retinal circuitry, which extracts information. That information is next converted into code in the form of electrical pulses, which then travel to the brain. Figure 5 demonstrates how the electrical pulses are sent in a specific pattern that tells the brain what the image is.

Figure 5: Healthy retina sending electrical pulse codes [6]

It's a very complicated process: every millisecond, these patterns of pulses are constantly changing, along with the world around you. When the front-end cells of the retina shut down due to a degenerative disease like macular degeneration, the retinal circuitry shuts down next, and although the output cells are left intact, they are no longer transmitting any code. Prosthetic devices, or bionic eyes, such as Argus II, are most certainly innovative; however, they are limited to seeing simple images. The device allows the patient to see spots of light and basic shapes, but it is far from providing patients with normal representations of faces, buildings, landscapes, etc. The issue lies in the way that the stimulators are producing code. Sheila Nirenberg suggests that to make the image clearer, there is a need to drive the stimulators to produce normal retinal output. She says, "Having the code is only half the story. The other part is having some way to communicate that code into the cells so that they can send it to the brain."

8


Nirenberg is currently working on a unique prosthetic system that is made up of an encoder and a transducer. Like the Argus II, Nirenberg's device relies on a camera embedded in a pair of glasses, and the information captured by the camera is sent to the device wirelessly. However, this system does not send electrical pulses directly along the optic nerve. Instead, the encoder converts the information into a code that closely matches the code a healthy retina uses to communicate with the brain, and a transducer drives the output cells in the eye (ganglion cells) to fire electrical signals as the code specifies. Essentially, the image goes through a set of equations which mimics the retinal circuitry, and then it comes out as electrical pulses which travel along the optic nerve.

Figure 6: A damaged retina, with device replacing the photoreceptors [6]

What also makes this different is the pattern of the electrical signals that are fired out. In order to develop this prosthetic system, Nirenberg and her team have had to delve into neuroscience to study what this 'code' (pattern of electrical signals) looks like. Trying to understand this code is like learning a new language. The more they learn about how the brain understands this language, the better they can mimic it and therefore communicate with the brain as a normal retina would. In this way, they are literally "cracking the code" and finding ways to get the firing patterns to match the activity of normal retinal output, and thus are getting closer to giving the patient an accurate representation of the image in front of them [6][7][8].
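One standard way neuroscientists write down a "set of equations which mimics the retinal circuitry" is a linear-nonlinear (LN) model: a weighted sum over the image, a nonlinearity that turns it into a firing rate, then probabilistic spiking. The sketch below is a hypothetical illustration of that general idea, not Nirenberg's actual encoder; every weight and constant in it is invented, and a real encoder would be fitted to recordings from healthy retinas.

```python
import math
import random

# Minimal linear-nonlinear (LN) encoder sketch. Filter weights and rate
# constants are made up for illustration only.

def ln_encoder(image_patch, weights, gain=5.0, dt=0.001, duration=0.2, seed=0):
    """Turn a small image patch into a spike train (list of spike times, s).
    1. Linear step: weighted sum of pixel intensities (a crude receptive field).
    2. Nonlinear step: map that drive to a non-negative firing rate (Hz).
    3. Spiking step: draw Poisson-like spikes at that rate."""
    drive = sum(w * p for w, p in zip(weights, image_patch))
    rate = gain * math.log1p(math.exp(drive))   # softplus keeps the rate positive
    rng = random.Random(seed)
    spikes = []
    t = 0.0
    while t < duration:
        if rng.random() < rate * dt:            # Poisson approximation per time bin
            spikes.append(round(t, 3))
        t += dt
    return spikes

# Centre-surround-style weights over a 3x3 patch, flattened row by row:
weights = [-0.5, -0.5, -0.5, -0.5, 4.0, -0.5, -0.5, -0.5, -0.5]
bright_centre = [0, 0, 0, 0, 1, 0, 0, 0, 0]
spikes = ln_encoder(bright_centre, weights)
```

A bright spot over the cell's centre produces a higher firing rate, and therefore more spikes, than a blank patch: the "language" the text describes is exactly this mapping from images to pulse patterns.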

CONCLUSION

There is hope for visually impaired people in the future! As technology advances, our understanding of how the body works is deepening, and we will soon be able to create bionic versions of our body parts. Sheila Nirenberg suggests that her device could even be enhanced with ultraviolet or infrared sensitivity, potentially giving blind people better sight than fully sighted people. This means that not only would people have their sight restored, they would have even better eyesight, as their bionic eye would have abilities that a natural eye does not. This innovation might even further our understanding of the way the brain relates to our body parts, and the same principles discovered here might be used to help people with other problems. The potential power of being able to communicate with the brain means that the same strategy could be applied to the auditory system and the motor system to help people who have auditory issues or motor disorders. By jumping over damaged circuitry in the same way as they have done with the retinal circuitry, scientists could treat numerous other physical impairments - these visionaries for vision could potentially change the lives of more than just the visually impaired! [6]



BIBLIOGRAPHY

[1] EyeSmart — American Academy of Ophthalmology, "How the Eye Works" https://www.youtube.com/watch?v=8e_8eIzOFug – 18/03/2018
[2] EyeSmart — American Academy of Ophthalmology, "How the Eye Works and the Retina" https://www.youtube.com/watch?v=Sqr6LKIR2b8 – 30/11/2010
[3] Science Art, "How Retina Works Animation-Physiology of the Eye Videos" https://www.youtube.com/watch?v=GkJrQmVRkYM – 24/01/2019
[4] Wei-Haas, M., (2017, Oct 19) "Could This Bionic Vision System Help Restore Sight?" https://www.smithsonianmag.com/innovation/could-bionic-vision-system-help-restore-sight-180965305/
[5] SecondSight, "Discover Argus II" https://www.secondsight.com/discover-argus/ – 2019
[6] Nirenberg, S., (2013, Jun 26) "TED-Ed - A prosthetic eye to treat blindness - Sheila Nirenberg" https://www.youtube.com/watch?v=RR08NcoBlms
[7] Nirenberg, S., (2018, Nov 30) "Cracking The Code To Treat Blindness | Mach | NBC News" https://www.youtube.com/watch?v=76cWyxzX7ds
[8] Nirenberg, S., & Pandarinath, C., (2012, May 7) "Retinal prosthetic strategy with the capacity to restore normal vision" https://physiology.med.cornell.edu/faculty/nirenberg/lab/papers/PNAS-2012Nirenberg-1207035109.pdf

Azvolinsky, A., (2018, Apr 10) "Vision Restored: The Latest Technologies to Improve Sight" https://www.the-scientist.com/news-opinion/vision-restored-the-latest-technologies-to-improve-sight-30104
Barker, P., (2018, Oct 1) "5 inventions bringing sight to the visually impaired" https://www.redbull.com/int-en/inventions-to-help-visually-impaired-people
Boseley, S., (2018, Mar 19) "Doctors hope for blindness cure after restoring patients' sight" https://www.theguardian.com/society/2018/mar/19/doctors-hope-for-blindness-cure-after-restoring-patientssight
Gallagher, J., (2012, May 14) "Light-powered bionic eye invented to help restore sight" https://www.bbc.com/news/health-18061174
Jeffries, A., (2016, Apr 5) "The Technology That Could Make Blind People See Again" https://youtu.be/SJUWPD62MTI
Loeffler, J., (2018, Dec 28) "5 Medical Innovations That May Help Cure Blindness" https://interestingengineering.com/5-medical-innovations-that-may-help-cure-blindness
McDougall, B., (2018, Jun 8) "Australian world-first bionic eye invention ready for sight" https://www.kidsnews.com.au/technology/australian-worldfirst-bionic-eye-invention-ready-for-sight/news-story/93146b354bdb2b25331664e7300f53c2



'BIOLUMINESCENCE' Kayan Tam (Year 12, Wu)



Bioluminescence: The Mystery Behind the Light Hoi Kiu Wong (Year 12, Wu)

INTRODUCTION

Have you ever taken a stroll along the beach and had something intriguing catch your eye? A microscopic creature, glowing in the pitch-black sea? Unfortunately, I have only seen this 'scene' in documentaries; but that was already enough to grab my attention. As I have researched bioluminescence, I have discovered how essential it is in the lives of millions of different species - from insects like fireflies to jellyfish, the list seems endless. But the question remains: what is the science behind it?

THE UNIQUENESS OF BIOLUMINESCENCE

Bioluminescence is defined as 'the emission of light from living organisms (such as fireflies, dinoflagellates, and bacteria) as the result of internal, typically oxidative chemical reactions' [1]. The light produced as a result of these chemical reactions is what sets bioluminescence apart from other natural optical phenomena, such as fluorescence and phosphorescence. Fluorescent molecules, unlike bioluminescent molecules, do not produce their own light; they absorb photons, which excite electrons to a higher energy state. As these electrons relax to their ground state, they re-emit their energy at a longer wavelength. This excitation and relaxation happens very quickly, so fluorescent light is seen only while the specimen is being illuminated [2] (Figure 1). Similarly, phosphorescence also re-emits light; however, it does so over a longer time scale, rather than immediately, and continues after the excitation has ended [3]. It is important to understand the distinction between fluorescence and bioluminescence, because the two are often confused: in some organisms, bioluminescent energy is used to cause fluorescence [4].

Figure 1. The difference between bioluminescence and fluorescence. In bioluminescence, light is a by-product of an oxidation reaction (Source: Thermofisher Scientific [11])
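The energy bookkeeping behind that wavelength shift can be checked with Planck's relation E = hc/λ: a longer emission wavelength always means a lower-energy photon, which is why fluorescence re-emits "downhill". The wavelengths below (488 nm excitation, 509 nm emission, typical of green fluorescent protein) are just one illustrative pair.

```python
# Fluorescence re-emits at a longer wavelength because some energy is lost
# before emission. Photon energy E = h*c / wavelength, so a longer wavelength
# means a lower-energy photon.

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in electronvolts."""
    joules = H * C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19   # convert joules to eV

absorbed = photon_energy_ev(488)   # blue excitation light
emitted = photon_energy_ev(509)    # green emitted light

assert emitted < absorbed          # the re-emitted photon carries less energy
```

The small gap between the two energies is dissipated inside the molecule before emission, which is exactly why a fluorescent specimen glows only while it is being illuminated.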


DISTRIBUTION OF BIOLUMINESCENCE

An apparent peculiarity of bioluminescence is that there is no obvious rule or reason in the distribution of luminous species among microbes, protists, plants and animals. Harvey (1940, 1952) expressed it in this way: "It is as if the various groups had been written on a blackboard and a handful of damp sand cast over the names. Where each grain of sand strikes, a luminous species appears. It is an extraordinary fact that one species in a genus may be luminous and another closely allied species contains no trace of luminosity."

Putting this into context, the phylum Cnidaria, which includes jellyfish, and the phylum Ctenophora (commonly known as 'comb jellies') have received the most sand: many members of the former phylum and nearly all of the latter are luminous, whereas certain phyla contain no luminous organisms at all. There are also cases in closely related genera of the same family where one genus is luminous while the others are not.

Another peculiarity of bioluminescence is that far more bioluminescent organisms are marine creatures than terrestrial or freshwater inhabitants. There are very few non-marine organisms that are bioluminescent, hence they can easily be listed here: fireflies and beetles, earthworms, the millipede Luminodesmus, the limpet Latia, the snail Quantula, the glow worms Arachnocampa and Orfelia, and luminous mushrooms [12].

CHEMISTRY OF BIOLUMINESCENCE

Figure 2: Luminous Fish of the Deep Sea. Drawing by Holder, Charles Frederick (1892), Along the Florida Reef (Source: Wikimedia Commons)

Bioluminescence has captured the interest of mankind ever since ancient times - descriptions of light emitted from fireflies can be found in folklore, songs and numerous publications of literature (Figure 2); there is therefore no doubt that studies of bioluminescence had already begun by the early 17th century. However, its chemical study only originates from the early 20th century, when human research and technology began to become progressively more advanced [5].

LUCIFERIN-LUCIFERASE REACTION

Bioluminescence can be produced by the oxidation of the molecule luciferin (Figure 3), which is the 'oxidizable substrate' [8], and the rate of the reaction between luciferin and oxygen can be controlled by a catalysing enzyme, either a luciferase or a photoprotein [2] such as aequorin [7]. The luciferin-luciferase reaction was first demonstrated by Dubois in 1885, when he made two aqueous extracts from the luminous West Indies beetle Pyrophorus. One of the extracts was prepared by crushing the light organs of the


beetle in cold water, resulting in a luminous suspension. The luminescence gradually weakened and finally disappeared. The other extract was prepared in the same way but with hot water before being cooled, and the use of hot water immediately put out the light. However, the two extracts produced light when mixed together. Dubois then repeated the experiment with the extracts of the clam Pholas dactylus and received similar results. Therefore, he concluded that the cold water extract contained a specific, heat labile(1) enzyme which is necessary for the light-emitting reaction, and introduced the term ‘luciferase’ for this enzyme. He also concluded that the hot water extract contained a specific, relatively heat stable substance called ‘luciférine’ (now spelled luciferin). Hence, the luciferin-luciferase reaction is an enzyme-substrate reaction that emits light [12].

Figure 3: D-luciferin - the luciferin found in fireflies (Source: Wikipedia)

Another person who made a significant contribution to the study of bioluminescence was E. Newton Harvey (1887-1959). In 1917, Harvey conducted experiments on bioluminescence but found that the light observed was weak and 'short-lasting'. Following up in 1947, McElroy found that the light-emitting reaction requires ATP (adenosine triphosphate) as a cofactor(2), a discovery made in the firefly system [12]. Adding ATP to mixtures of luciferin and luciferase resulted in a bright, long-lasting light. In fact, this was not a simple experiment at the time, as ATP was not commercially available; the discovery was thus a huge breakthrough for the chemical study of bioluminescence [5]. In 1949, McElroy and Strehler further found that luminescence reactions require another cofactor, the ion Mg2+ (or Ca2+), in addition to luciferin, luciferase and ATP [5], which in turn causes a conformational change in the photoprotein, thus giving the organism a way to precisely control light emission [2].

There is a large chemical variety of luciferins, as they have arisen in many evolutionary lineages [2]; these bioluminescence reactions therefore all vary, except that they all require oxygen at some point. Harvey stated in his 1952 book, "It is probable that the luciferin or luciferase from a species in one group may be quite different chemically from that in another", after he discovered that the luciferin of the clam Pholas differs from that of the ostracod Cypridina (Harvey, 1920) [12]. However, this belief did not last long, as it was discovered that a chemically identical luciferin can appear in unrelated organisms. For instance, around 1960, a luciferin identical to that of Cypridina was discovered in the luminous fishes Parapriacanthus and Apogon.
Moreover, in the 1970s, it was discovered that coelenterazine, a luciferin, is the light emitter in many other groups of organisms: protozoans, jellyfish, crustaceans, molluscs, arrow worms and vertebrates. The likely explanation for this is that the luciferin is acquired exogenously(3) through the diet, and it is relatively easy to obtain as it is present in both luminous and non-luminous marine animals. However, the complete biosynthesis pathway is still not completely understood for any marine luciferin, so their ultimate origins remain unknown [2]. Furthermore, delving deeper into its chemical structure, some investigators found that highly purified extracts of luciferin contain a -COCH2OH side chain, which is oxidatively degraded to -COOH in the luminescent region. Hence, Cypridina luciferin is readily oxidized by many oxidizing agents, but produces light only when oxidized in the presence of luciferase [8]. It is still not known how many types of luciferin there are, but those that are better studied are D-luciferin (found in fireflies), coelenterazine (the most widely used luciferin in the sea) and Vargulin/Cypridina luciferin [2] (Figure 4).
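Since Dubois's experiments show the luciferin-luciferase reaction to be an enzyme-substrate reaction, a first approximation of how light output depends on luciferin concentration is ordinary Michaelis-Menten enzyme kinetics. The sketch below uses arbitrary Vmax and Km values purely for illustration; it is not a fitted model of any real luciferase.

```python
# Light intensity tracks the enzyme-catalysed oxidation rate, so a simple
# model is Michaelis-Menten kinetics: v = Vmax * [S] / (Km + [S]).
# Vmax and Km below are arbitrary illustration values, not measured constants.

def reaction_rate(substrate_conc, vmax=100.0, km=2.0):
    """Reaction rate (arbitrary light units) at a given luciferin concentration."""
    return vmax * substrate_conc / (km + substrate_conc)

# The rate saturates: adding ever more luciferin helps less and less once
# the limited amount of luciferase becomes the bottleneck.
for s in (0.5, 2.0, 8.0, 32.0):
    print(s, round(reaction_rate(s), 1))
```

This saturation is consistent with Dubois's observation that the cold-water extract glowed until its luciferin was used up: the enzyme survives, but with no substrate left the rate, and hence the light, falls to zero.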


Figure 4: The different types of luciferins used by marine organisms. Shown are the molecular structures of the specific luciferins, their mode of operation, and the taxonomic groups known to use them. In the last column, taxa containing unique, characterized luciferins are listed above the dashed line, whereas those that are unknown or poorly understood are below the dashed line [2]

PHOTOPROTEINS

It was widely believed that bioluminescence was only derived from luciferin-luciferase reactions until the discovery of the photoprotein aequorin in the jellyfish Aequorea in 1962. Aequorin emits light in aqueous solutions on the simple addition of Ca2+, regardless of the presence or absence of oxygen (Figure 5). The luminescence is emitted by an intramolecular reaction of the protein, the total light emission being proportional to the amount of the protein used. The definitions of luciferin and luciferase did not match the properties shown by aequorin, making it an exception at the time. However, in 1966, another bioluminescent protein was discovered in the parchment tube worm Chaetopterus, this one emitting light when a peroxide and Fe2+ are added; and as with aequorin, the total light emission was proportional to the amount of the protein used. Hence, a new term, 'photoprotein', was introduced for these unique bioluminescent proteins (Shimomura and Johnson, 1966).


Figure 5: Aequorin gives off light in the presence of Ca2+. Coelenterazine and apoaequorin are the components of aequorin (Source: Royal Society of Chemistry)

The aequorin photoprotein is an enzyme-substrate complex that is more stable than its dissociated components of enzyme and substrate. Due to this greater stability, the photoprotein complex occurs as the primary luminescent component in the light organs of luminous organisms, rather than its dissociated components. In the light organs of Aequorea, the aequorin complex is highly stable when Ca2+ is absent, but its less stable separate components, apoaequorin (enzyme) and coelenterazine (substrate), are hardly detectable in the jellyfish [12].

BIOLUMINESCENT ORGANISMS

Bioluminescence spans a range of ecosystems; the most comprehensive list of bioluminescent genera, assembled by Herring (1987) and Harvey, reports that of the seventeen phyla in the animal kingdom, at least eleven contain luminous forms [8]. Marine animals may emit this light either from their own luminescent organs or from bacteria on their bodies. There was once much speculation that all of this light is produced by bacteria; however, modern scientists and researchers have shown that the bacteria emit light only after they have grown exponentially on dead fish and other organisms. Bioluminescence in the sea is therefore usually due to large numbers of jellyfish, snails, fish, dinoflagellates and many other species [8].

Figure 6: Bioluminescence of the dinoflagellate N. scintillans in the yacht port of Zeebrugge, Belgium. By © Hans Hillewaert, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=10711494

Alongside fireflies, dinoflagellates are among the most commonly encountered bioluminescent organisms. They are microscopic in size (ranging from about 30 μm to 1 mm) and are often found in coastal regions [18], where large numbers of them create the specks of light seen in the water. When dinoflagellates grow and accumulate rapidly in high abundance, they discolour the water, creating the phenomenon known as a red tide. At night, 'bioluminescent bays' are the result of bioluminescent dinoflagellates creating a beautiful 'sparkle' along the beach, and they have become famous tourist destinations in Puerto Rico and Jamaica [2].


FUNCTIONS OF BIOLUMINESCENCE

During the day, sunlight filters down into the ocean water, increasing the visibility. This means that many marine creatures would be much more vulnerable to predators, as there are no hiding places at all in the shallower water, hence many swim deeper into the ocean, where less visible light reaches them. This results in massive animal migration patterns in the planet's oceans - animals migrate vertically upwards during the night, when it is dark, so they can look for food, and migrate vertically downwards in daylight in order to hide. As a consequence of this migration, most of these creatures spend a lot of time in dim light or in total darkness. Bioluminescence therefore helps them survive in different ways [10].

To attract or locate prey

Bioluminescence can be used by marine organisms to attract prey. For example, some fish have red-emitting light organs located under their eyes, and the unusual long-wavelength sensitivity of their eyes suggests that this red luminescence may be used to illuminate prey that are blind to red light, helping the fish hunt [2]. The anglerfish is another example of a marine organism using bioluminescence to attract prey - many species have a small glowing bulb known as the esca (the "lure") dangling in front of their mouth, which contains tiny luminescent bacteria called Photobacterium, enabling them to attract prey and attack it. This creates a 'win-win situation' for both the anglerfish and the bacteria: the anglerfish is able to lure in prey, and in exchange the bacteria gain protection and nutrients from the fish as their host [15].

To act as a defense against predators

This is one of the most common functions of bioluminescence in the sea.
Many marine creatures such as crustaceans, squid, jellyfish and fish release their light-emitting chemicals into the water, producing clouds or particles of light that distract or blind the predator (known as a 'smokescreen'). Some even squirt their predators with luminescent slime, making the attackers easy targets for secondary predators. Another defensive use of bioluminescence is to lure in secondary predators: the first attacker then tries to escape from them, giving the prey an opportunity to escape as well. Some organisms also use their bioluminescence as a warning to predators, signalling the unpalatability(4) of the prey [9].

Camouflage
Counterillumination (Figure 6) is a process that some marine organisms, such as the hatchetfish and the ponyfish, use to camouflage themselves. The silhouette of the opaque animal is replaced with bioluminescence of a similar colour to the background light filtering down from the sky, so the animal blends in. This is most common amongst the fishes, crustaceans and squid that inhabit the twilight zone of the ocean, where many predators have upward-looking eyes adapted to locating the silhouettes of prey [10].

Figure 6: Counterillumination (Source: Smithsonian Institution)

Keeping the school together
Luminescent shrimp, squid and fish form schools, and many show vertical migration between day and night. The luminous flashes from these organisms help keep the school together, as the light they emit can be detected over large distances; Dennell (1955) believed that the light of bathypelagic decapod crustaceans could be seen at distances of up to 100 m. Moreover, it is likely that luminescence is subject to diurnal rhythmicity and that the members of a school may mutually stimulate


each other, i.e. a luminescent euphausiid or myctophid flashes, and other individuals of the species flash in turn as they are stimulated by the luminescence. Hence, it is widely believed that the light helps regulate the degree of rising and sinking that the school needs to execute. Kampa & Boden (1957) found that flashing became most frequent during twilight migration, and that the mean intensity, 1 × 10⁻⁴ μW/cm², equalled that of the light level with which the animals' migration was associated [9].

Courtship
Fireflies are well known for their bioluminescence during the warm summer months, as they use their light to attract members of the opposite sex [13]. This is done through species-specific spatial or temporal patterns of light emission [9]. The same can be seen in marine life: the male Caribbean ostracod, a tiny crustacean, uses bioluminescent signals on its upper lip to attract females, while syllid fireworms, normally inhabitants of the seafloor, move to the open water under a full moon, where the females of some species, like Odontosyllis enopla, use bioluminescence to attract males while swimming in circles [17]. Some other functions of bioluminescence are shown in Figure 7.

Figure 7: Functions of bioluminescence, used for defence (blue), offence (magenta), and intraspecific communication (grey). Some animals use their luminescence in 2, 3 or even 4 different roles [2]


Luminous bacteria in the sea
Luminous bacteria form specific symbioses with some marine fish and squid, creating a 'win-win situation' for both the bacteria and the marine organism: the bacteria provide the host with light that can be used to attract prey and to find a mate, while the host provides the bacteria with an ideal growth environment. For free-living bacteria, where the adaptive value is less evident, the most generally accepted hypothesis is that luminous bacteria growing on faecal pellets serve as an attractant, causing the pellets to be consumed and in turn introducing the bacteria into an animal's nutritious stomach and intestine [10].

APPLICATIONS OF BIOLUMINESCENCE ANALYSIS
Bioluminescence does not only help marine animals and fireflies; it is also used as an analytical tool in various fields of science and technology. For example, firefly bioluminescence is used as a method of measuring ATP (vital for living cells) [12]. A known amount of luciferin and luciferase is added to a blood or tissue sample, and the cofactor concentration is determined from the intensity of the light emitted [13]. Ca²⁺-sensitive photoproteins, e.g. aequorin from a jellyfish, can likewise be used to monitor the intracellular Ca²⁺ that regulates many important biological processes, and certain analogues of Cypridina luciferin are utilised as probes for measuring the superoxide anion, an important but rare substance in biological systems. Furthermore, the green fluorescent protein (GFP), which was discovered alongside aequorin, is a highly useful marker protein in biomedical research [12]. In methods similar to the ATP measurement, scientists have also used bioluminescent reactions to quantify other specific molecules that are not themselves involved in a bioluminescence reaction. They do this by attaching luciferase to antibodies; the antibody–luciferase complex is added to a sample, where it binds to the molecule to be quantified. After washing to remove unbound antibodies, the molecule of interest can be quantified indirectly by adding luciferin and measuring the light emitted. Methods like these, which quantify certain compounds in biological samples, are called assays [13].
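The ATP assay described above can be sketched numerically: over the assay's working range, the light emitted by the luciferin–luciferase reaction is roughly proportional to the ATP concentration, so a standard curve built from known samples lets an unknown be read off. The code below is a toy illustration in Python; all the numbers are invented, and a real assay would calibrate against actual standards.

```python
# Toy luminescence assay: fit a standard curve from known ATP
# concentrations, then estimate an unknown sample from its light output.
# All values are made up for illustration.

known_atp = [0.0, 1.0, 2.0, 4.0, 8.0]         # ATP standards, micromolar
luminescence = [0.1, 10.2, 19.8, 40.5, 79.9]  # measured light, arbitrary units

# Least-squares fit of a simple linear model: light = slope * atp + intercept
n = len(known_atp)
mean_x = sum(known_atp) / n
mean_y = sum(luminescence) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(known_atp, luminescence)) \
        / sum((x - mean_x) ** 2 for x in known_atp)
intercept = mean_y - slope * mean_x

def estimate_atp(light):
    """Invert the standard curve to estimate ATP from a light reading."""
    return (light - intercept) / slope

print(f"slope ~ {slope:.2f} light units per uM ATP")
print(f"sample reading 30.0 ~ {estimate_atp(30.0):.2f} uM ATP")
```

The same invert-a-calibration-curve idea underlies the antibody–luciferase assays described above, just with the molecule of interest in place of ATP.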

Figure 8: Principle of a simple luciferase reporter assay. Luciferases are good reporter enzymes in the field of bioresearch, widely used to study biological functions such as gene expression, post-translational modifications, and protein–protein interactions in cell-based assays (Figure 8) [16]. This means that luciferases can be used to study how individual genes are activated for protein expression or repressed to stop producing protein. The gene promoter is the genomic DNA sequence immediately upstream of the transcription start site [14]; a specific gene promoter can be attached to the DNA that codes for firefly luciferase and introduced into an organism. The activity of the promoter can then be studied by measuring the bioluminescence produced in the luciferase reaction. Thus, the luciferase gene can be used to 'report' the activity of a promoter for another gene [13]. This is also known as quantitative visualisation of gene expression.


BIOLUMINESCENT IMAGING
Using bioluminescent reporters is one of the most sensitive methods of visualising molecules, with a cost-effective and simple procedure. Bioluminescence imaging using luciferase reporters needs no exogenous light illumination, and the luminescence reaction is quantitative. For example, in vitro bioluminescence imaging may use a reporter plasmid vector that includes a promoter sequence and an organelle-targeting luciferase gene sequence. The plasmid is transfected into the target cells, and the promoter region then regulates the expression of the luciferase gene. Firefly luciferin added to the medium is oxidised by the expressed firefly luciferase to produce luminescence. The light signal can therefore show the location or mobility of organelles in living cells, and is measured by special equipment using a CCD photon imaging system (Figure 9) [16].

Figure 9: In vitro bioluminescence imaging of organelles in living cells (Source: [16]). In vivo bioluminescence imaging is most commonly used for cell tracking: luciferase-expressing cancer cells, immune cells, stem cells, or other cell types can be imaged in small animals. For example, a reporter plasmid vector consisting of a promoter sequence, a luciferase gene sequence and an antibiotic resistance sequence is transfected into the target cancer cells. The promoter region regulates the expression of the luciferase gene in these cells, and the stable luciferase-expressing cells are then transplanted into a mouse. After allowing time for the cancer cells to grow, luciferin is injected into the body. Most commonly in these imaging experiments, firefly luciferin enters the cells through the blood, where it is oxidised by the expressed firefly luciferase to produce light. The light signals show the location and size of the cancer cells in the body, providing information about the number and spatial distribution of the cells in the animal [16].

Figure 10: In vivo bioluminescence imaging using luminescent living cells [16]



OTHER APPLICATIONS
Bioluminescence can also be used to test water purity: genetically modified microorganisms are placed into the water, and their degree of luminescence is used to identify certain toxins in the solvent. Studies have found this particularly effective in detecting the presence of arsenic (a common water contaminant) and oil hydrocarbons [19]. Bioluminescence also has everyday applications; for instance, Portuguese fishermen have made use of the luminescent secretion of Malacocephalus to illuminate their bait, and their success suggests that artificial luminous lures could work in the sea [9].

IN THE FUTURE
Bioluminescent technology is still developing, and scientists are searching for innovative ways of using bioluminescence reactions. One idea for the future is that, instead of using electricity, we could use bioluminescence to power our light sources: bioluminescent algae stored in a long glass tube of salt water (a small ecosystem) could light their surroundings. Many researchers are also developing methods to create bioluminescent trees to line streets, which would eliminate the need for more expensive electric street lamps. Although the biggest challenge at present is increasing bioluminescent brightness to provide enough light for drivers, I believe that these 'glowing' trees planted at the roadside will become part of the norm within a few decades [19].

CONCLUSION
To conclude, I must admit that bioluminescence is a very complicated phenomenon, requiring studies spanning morphology, cell biology, physiology, spectroscopy, organic chemistry, biochemistry and genetics. It is understandable to feel confused when reading about it; many parts of the phenomenon are still unknown to mankind, and further research into the topic is required. Although hard to understand completely, bioluminescence is undoubtedly one of the most beautiful mysteries in nature. Not only has it helped numerous organisms with their survival, it is already helping us advance science and technology, and it even has the potential to save lives. I therefore believe that bioluminescence can assist mankind in innovation and help shape our future significantly.

GLOSSARY

(1) Heat labile: affected by heat; a heat-labile enzyme is denatured by heat
(2) Cofactor: mostly metal ions or coenzymes; inorganic and organic chemicals that assist enzymes during the catalysis of reactions
(3) Exogenously: originating from outside an organism
(4) Unpalatability: distastefulness


BIBLIOGRAPHY [1] “Bioluminescence.” Merriam-Webster, www.merriam-webster.com/dictionary/bioluminescence

[2]“Bioluminescence in the Sea.” Annual Reviews, www.annualreviews.org/doi/full/10.1146/annurevmarine-120308-081028. (website unavailable) Haddock, Steven H.D., et al. “Bioluminescence in the Sea” . pdfs.semanticscholar.org/ ae51/4348866380fa87daf2fdfa72b81c673fd391.pdf [3] “Fluorescence, Phosphorescence, Photoluminescence Differences.” Edinburgh Instruments, www.edinst.com/ blog/photoluminescence-differences/ [4]Monterey Bay Aquarium Research Institute (MBARI). “The Allure of Fluorescence in the Ocean.” YouTube, YouTube, 23 Aug. 2019, www.youtube.com/watch?v=whbeFXFZqiU&feature=youtu.be [5] “Bioluminescence: Chemical Principles And Methods.” Google Books, Google, books.google.com.hk/ books?hl=en&lr=&id=yMLICgAAQBAJ&oi=fnd&pg=PR5&dq=bioluminescence&ots=ITJrdb8S_X&sig=XdKtdlPhIm6YO3lLAGOEoK4wng&redir_esc=y#v=onepage&q&f=false [6] “The Bioluminescence Web Page.” The Bioluminescence Web Page, biolum.eemb.ucsb.edu/ [7] Bhagat, Abhishek. “Harnessing Bioluminescence.” LinkedIn SlideShare, 2 Nov. 2016, www.slideshare.net/ AbhishekBhagat17/harnessing-bioluminescence [8] Harleen Workman McAda. “Bioluminescence.” The American Biology Teacher, vol. 28, no. 7, 1966, pp. 530–532. JSTOR, www.jstor.org/stable/4441402 [9] Nicol, J. A. “Bioluminescence.” Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, vol. 265, no. 1322, 1962, pp. 355–359. JSTOR, www.jstor.org/stable/2414178 [10] Widder, E. A. “Bioluminescence in the Ocean: Origins of Biological, Chemical, and Ecological Diversity.” Science, vol. 328, no. 5979, 2010, pp. 704–708. JSTOR, www.jstor.org/stable/40655873 [11] “Luciferase Reporters.” Thermo Fisher Scientific - US, www.thermofisher.com/hk/en/home/life-science/ protein-biology/protein-biology-learning-center/protein-biology-resource-library/pierce-protein-methods/ luciferase-reporters.html [12] Shimomura, Osamu. Bioluminescence: Chemical Principles and Methods. World Scientific Publishing Co. Pte. Ltd., 2019 [13] MacKenzie, Steven. 
“Bioluminescence.” The Gale Encyclopedia of Science, edited by K. Lee Lerner and Brenda Wilmoth Lerner, 5th ed., Gale, 2014. Gale In Context: Science, https://link.gale.com/apps/doc/ CV2644030284/SCIC?u=hkharrow&sid=SCIC&xid=92707d85. Accessed 17 Oct. 2019 [14] “Gene Promoter.” Gene Promoter - an Overview | ScienceDirect Topics, www.sciencedirect.com/topics/ medicine-and-dentistry/gene-promoter [15] Ward, L.K. “Meet the Tiny Bacteria That Give Anglerfishes Their Spooky Glow.” Meet the Tiny Bacteria That Give Anglerfishes Their Spooky Glow, 8 May 2018, ocean.si.edu/ocean-life/fish/meet-tiny-bacteria-giveanglerfishes-their-spooky-glow [16] Yoshihiro Ohmiya. Applications of Bioluminescence, photobiology.info/Ohmiya.html [17] Ocean Portal Team. “Bioluminescence.” Smithsonian Ocean, 18 Dec. 2018, ocean.si.edu/ocean-life/fish/ bioluminescence [18] “Latz Laboratory.” Scripps Oceanography, scripps.ucsd.edu/labs/mlatz/bioluminescence/dinoflagellatesand-red-tides/dinoflagellate-bioluminescence/ [19] Konica Minolta. “Living Light: Is There a Future For Bioluminescence Technology?” Konica Minolta Sensing Americas, sensing.konicaminolta.us/blog/living-light-is-there-a-future-for-bioluminescence-technology/ 22


Blackbody Radiation and Planck’s Constant Edward Wei (Year 10, Peel)

By the late 19th century, the general consensus was that little more remained to be discovered in physics. Physicists could calculate the motion of material objects using Newton's laws of classical mechanics, and describe the properties of radiant energy using Maxwell's mathematical relationships. The universe appeared to be a simple and orderly place, containing matter (consisting of particles that had mass and whose location and motion could be accurately described) and light (viewed as having no mass and whose exact position in space could not be fixed). Matter and energy were considered distinct and unrelated phenomena. However, several contradictions continued to puzzle classical physicists; the one we will be looking at is known as the "ultraviolet catastrophe".

Figure 1: A diagram of an electromagnetic wave

Source: https://www.clinuvel.com/photomedicine/physics-optics-skin/electromagnetic-spectrum/understanding-the-electromagnetic-spectrum

Light, or electromagnetic radiation, is a form of energy which travels as a wave, as shown in Figure 1 (a simplified depiction, but enough for our purposes). The crest is the top of the wave; the trough is the bottom. The wavelength is the distance between the crests of two successive waves (measured in metres), and the amplitude is half the distance between the crest and the trough (also measured in metres). The period is the time taken to complete one cycle, measured in seconds; the frequency is the number of cycles completed per second, measured in hertz (Hz), and is the reciprocal of the period. The frequency and wavelength of a wave are inversely proportional to each other.
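Since all electromagnetic waves travel at the speed of light c, the inverse relationship follows from c = fλ. A quick sketch in Python (the wavelengths chosen are simply typical values for red and violet light):

```python
# Relationship between wavelength and frequency: c = f * lambda
C = 2.998e8  # speed of light in m/s

def frequency(wavelength_m):
    """Frequency in hertz for a wave of the given wavelength in metres."""
    return C / wavelength_m

# Red visible light (~700 nm) versus violet (~400 nm):
f_red = frequency(700e-9)
f_violet = frequency(400e-9)
print(f"red:    {f_red:.3e} Hz")
print(f"violet: {f_violet:.3e} Hz")
# The shorter wavelength gives the higher frequency: the inverse relation.
```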

Figure 2: Diagram of the electromagnetic spectrum. Source: https://www.miniphysics.com/electromagnetic-spectrum_25.html

The electromagnetic spectrum divides light into several bands as its frequency/wavelength varies. Higher frequencies (shorter wavelengths) correspond to higher energy, and vice versa. The relevant part here is the visible section, the only part of the entire spectrum that we can see: our eyes are not sensitive enough to detect infrared, while the lenses of our eyes block out ultraviolet (which is harmful to cells). We are able to see objects because they either emit light themselves, or they absorb light and reflect it in all directions, and some of the reflected light happens to enter our eyes.


The contradiction that continued to puzzle classical physicists came in the form of 'black bodies'. Black bodies are idealised physical objects that do not simply reflect the light they absorb: they absorb all frequencies of electromagnetic radiation that fall upon them, and they can also emit radiation of any wavelength, though usually not in the visible range, so we do not see it. The wavelength emitted at the highest intensity depends on temperature (Figure 3). The radiation emitted is the kind released by any object above absolute zero (−273 °C). We do not usually see this radiated energy: at ambient temperature, the wavelength of the emitted radiation falls beyond the visible spectrum, at infrared wavelengths. This is why we can feel hot things even when we do not come into direct contact with them or see any glow. Take a real-life example: molten iron glows red because, while most of the energy radiated from it is within the infrared, a portion has a high enough frequency to be visible red light. As objects get hotter, they emit radiation with shorter and shorter wavelengths, moving into the visible spectrum or beyond. Classical physicists used the Rayleigh–Jeans law, I = 2f²k_BT/c², to approximate the electromagnetic radiation emitted by a black body as a function of frequency at a given temperature, where f is the frequency, k_B is Boltzmann's constant, T is the temperature (in kelvin) and c is the speed of light (in metres per second).

Figure 3: A graph comparing the radiation intensity predicted by the Rayleigh–Jeans law with the actual data. (Source: chem.libretexts.org)

The equation predicts that as the wavelength decreases, radiation intensity should increase without limit at any temperature, as shown by the dotted line. It does not explain the sharp decrease in the intensity actually emitted at shorter wavelengths; in fact, it does such a poor job of describing what really happens that the discrepancy was dubbed the "ultraviolet catastrophe". The contradiction was resolved when the German physicist Max Planck proposed that the energy of electromagnetic waves was quantized. We say something is quantized when its possible values are limited to certain discrete magnitudes, which are multiples of a constant value (quanta). Although quantization may be an unfamiliar concept, it is all around us. For example, US money is an integral multiple of pennies. Musical instruments such as the piano can only produce certain musical notes, like C or F sharp. Even electric charge is quantized: ions can have a charge of +1 or −2, but not +1.46.
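The 'catastrophe' can be made concrete numerically. The sketch below (my own illustration in Python, not taken from the article's sources) compares the Rayleigh–Jeans prediction, 2f²k_BT/c², with the radiation law that Planck's quantum hypothesis leads to, 2hf³/(c²(e^(hf/k_BT) − 1)), quoted here from standard references. At T = 5000 K the two agree at low frequencies, but the classical formula grows without bound toward the ultraviolet.

```python
import math

H = 6.626e-34   # Planck's constant, J s
KB = 1.381e-23  # Boltzmann's constant, J/K
C = 2.998e8     # speed of light, m/s

def rayleigh_jeans(f, t):
    """Classical spectral radiance: grows as f**2 without limit."""
    return 2 * f**2 * KB * t / C**2

def planck(f, t):
    """Planck's radiation law: peaks and then falls off at high frequency."""
    return (2 * H * f**3 / C**2) / math.expm1(H * f / (KB * t))

T = 5000  # kelvin, roughly the surface temperature of the Sun
for f in (1e12, 1e14, 3e15):  # infrared -> near-visible -> ultraviolet
    print(f"{f:.0e} Hz  RJ={rayleigh_jeans(f, T):.3e}  Planck={planck(f, T):.3e}")
```

At 10¹² Hz the two formulas agree to within about half a percent; at 3 × 10¹⁵ Hz the classical value is many orders of magnitude too large, which is exactly the divergence shown by the dotted line in Figure 3.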



This was the equation Planck derived:

Eₙ = nhv

E = the energy in joules, representing the total amount of energy radiated from a black body (although E now generally represents the amount of energy in a light beam)
n = the number of photons in the light beam, a very large integer
h = 'Planck's constant', 6.626 × 10⁻³⁴ joule-seconds
v = the frequency of the light wave in hertz

Planck explained that at low temperatures, radiation with relatively long wavelengths (low frequencies) is emitted. As temperature increases, the emission of radiation with shorter wavelengths (higher frequencies) becomes more probable, which is why hotter black bodies are able to emit electromagnetic radiation at higher maximum intensities. However, at any given temperature, objects preferentially emit many low-energy photons rather than a single high-energy photon of equivalent total energy. The 'discovery' of the quantized nature of electromagnetic radiation was an incredible breakthrough. Physicists had been so sure that they had discovered everything that one respected figure reportedly said, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement." (attributed to Lord Kelvin)¹ Planck's discovery led to a new branch of physics, quantum physics, opening a vast world whose surface we have barely scratched. This discovery fundamentally changed the way physicists perceived the world. In the past, physicists viewed mass as a 'thing' with a definite location and velocity, like a car travelling west at 10 m/s. However, we now know that the concepts of location, velocity and even existence itself blur at the atomic and subatomic level. For reasons we do not fully understand, electrons exist everywhere at once, with a higher probability of being found in some regions than in others. This fundamental difference in how mass behaves at the macro and micro scales still befuddles physicists today.

AFTER NOTE
Max Planck did not actually know why the energy of electromagnetic waves was quantized. He derived his equation from the graph of real data, under the assumption that the energy carried by electromagnetic radiation was quantized. Similarly, he could tell that n was a very large integer, but was unable to identify its exact value. The explanation came later, when Albert Einstein showed that light has both wave-like and particle-like properties: the energy that can be carried by a single photon (light particle) is fixed. The equation above actually calculates the energy within a light beam, which is why n is a large integer; there are many photons in a beam. The equation for the energy of a single photon is Ephoton = hv. The radiation comes in discrete packets because you cannot have half a photon of energy. Now you may be asking: why does Planck's equation imply that energy is quantized? Sure, h is a constant, but isn't v (frequency) an infinitely variable quantity? If so, how can the energy of a photon have only certain values? Yes, the spectrum of 'allowed' photon energies is continuous, i.e. a photon can have any energy, but for a given frequency, energy exchange can only take place in jumps of hv. For example, if object A's initial energy is EInitial and we give it a photon of energy hv, object A then has EInitial + hv worth of energy; it is impossible for object A to have a final energy between EInitial and EInitial + hv.

¹ This is ironic, as Lord Kelvin himself contributed to the expansion of physics as a science!
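As a concrete illustration of Ephoton = hv (a worked example of my own, using standard constants, not a calculation from the article): even a weak laser pointer emits an astronomical number of photons per second, which is why light looks continuous to us despite arriving in discrete quanta.

```python
H = 6.626e-34  # Planck's constant, joule-seconds
C = 2.998e8    # speed of light, m/s

wavelength = 650e-9        # a typical red laser pointer, 650 nm
freq = C / wavelength      # v = c / lambda
e_photon = H * freq        # energy of a single photon, E = hv

power = 1e-3                           # a 1 mW beam
photons_per_second = power / e_photon  # n photons deliver n*hv joules per second

print(f"E_photon = {e_photon:.3e} J")
print(f"photons per second = {photons_per_second:.3e}")
```

A single red photon carries only about 3 × 10⁻¹⁹ J, so the beam must deliver on the order of 10¹⁵ photons every second; the jumps of hv are far too small for us to notice.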




BIBLIOGRAPHY
Khan Academy
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/06._Electronic_Structure_of_Atoms/6.2%3A_Quantized_Energy_and_Photons
https://www.youtube.com/watch?v=GgD3Um_f0DQ
https://www.physicsforums.com/threads/how-can-energy-be-quantized-with-e-hv.338298/
https://www.youtube.com/watch?reload=9&v=pmM28gQZTXc
https://en.wikipedia.org/wiki/Rayleigh–Jeans_law



SECTION 2

APPLICATION OF SCIENCE

Photo by Alec Favale on Unsplash



'ROBOTICS & AI'

Callum Begbie (Year 6, Darwin)


Robotics and Artificial Intelligence: How Far Can and Should AI Take Us? Jett Li (Year 12, Peel)

1 INTRODUCTION
In recent times, arguably the most exciting (as well as controversial) field of study is that of Artificial Intelligence (AI) and Robotics. As our world becomes increasingly dependent on machinery and automation to solve many of the issues we face, researchers all across the world have begun to search for more ways to integrate AI into our lives. The results of this research and experimentation are apparent: breakthroughs are being made everywhere, from medicine to economics. We have all seen and heard of the potential of machines in the medical field, how they are unmatched in efficiency and effectiveness and could possibly replace surgeons in the near future, as reported by Katrina Tse in the Scientific Harrovian (Issue IV, 2019). Similarly, many have seen the capabilities of machine learning in finance, especially the ability of AI to assemble stock portfolios and predict their growth to a reasonable degree of accuracy. The results we have seen have been staggering, and the only way is up. Even now, scientists are finding more ways to use AI in our daily lives, which raises the questions: How did we get to this point? Where do we go from here? Where do we draw the line? This article will explore these questions in depth, covering the history of AI, the breakthroughs being made right now, and the ethical and moral issues that accompany the development of AI.

2 HISTORY AND DEVELOPMENT OF ARTIFICIAL INTELLIGENCE 2.1 BEGINNINGS OF RESEARCH The development of Artificial Intelligence began in the early 20th century. Spurred on by fictional works that depicted fantastical artificial beings and their uses, including The Wizard of Oz (1900), Metropolis (Figure 1), and even Frankenstein (1818), researchers began to explore different ways to create a sentient non-human being. The field truly began to take off in the 1950s, when an entire generation of scientists, philosophers and thinkers was exposed to widespread media coverage (both fictitious and real) of the possibilities of Artificial Intelligence.

Figure 1: A scene from the 1927 German expressionist sci-fi film, Metropolis (Source: lexpress.fr)


At this point in time, Alan Turing, the 'father of AI and theoretical computer science', theorised that machines could use reasoning and logic to solve problems much as humans do. He stated in his 1950 paper 'Computing Machinery and Intelligence' that if this were accomplished, it would be possible to build 'intelligent' machines and also to test their level of intelligence relative to each other and to humans. However, extensive research based on this idea was not immediately pursued, as computers were extremely expensive at the time and unobtainable for all but the most prestigious research facilities and universities [1]. Five years later, a proof of concept (evidence that this theory was feasible) was created by the researchers Allen Newell, Cliff Shaw and Herbert Simon (Figure 2), through what is thought to be the first Artificial Intelligence programme in the world: the Logic Theorist. This programme was created to mimic the problem-solving capabilities of a human being and was first presented to the world at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). Despite not making major scientific waves immediately, event attendees unanimously agreed that AI was a viable field of study and that the creation of a truly sentient artificial intelligence would be possible in the future. On top of this, the Logic Theorist would later be used in several other fields, most notably helping to prove 38 of the 52 theorems in the renowned Principia Mathematica (Whitehead and Russell), producing new and more elegant solutions for some of them and showing the viability and adaptability of AI. The Logic Theorist would then form the basis for the next two decades of AI research across the globe.

Figure 2: Herbert Simon (L) and Allen Newell (R), two of the developers of the Logic Theorist. Source: https://www.computerhistory.org
By 1974, breakthroughs in computer science had allowed researchers to create more intricate AI programmes, such as Newell and Simon's General Problem Solver (a follow-up to the Logic Theorist) and Joseph Weizenbaum's ELIZA, one of the first computer programmes capable of interpreting human language. Afterwards, government-funded projects such as the 1982 Fifth Generation Computer Project pushed the development of AI forward, placing AI computing in the limelight and inspiring generations of programmers to develop the technology further. In 1997, Deep Blue became the first AI programme to defeat a reigning world chess champion, a benchmark for Artificial Intelligence development: machine had finally triumphed over human in logical thinking and decision-making, something unthinkable just a few decades prior.

2.2 TYPES OF ARTIFICIAL INTELLIGENCE
Artificial Intelligence can be split into three subsets: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). ANI deals with specific functions and is limited in scope; many programmes we use today fall within this category, as they are created to work in specific environments for specific purposes. AGI deals with several fields at once, and includes programmes that problem-solve and perform levels of abstract thinking. Lastly, ASI transcends human intelligence and performs at a higher level across all fields; this is ultimately what researchers are working towards [2]. For most of AI's development, researchers have been creating differing forms of ANI, slowly closing in on the goal of an AGI programme on par with normal human intelligence.
This is a goal that researchers believe they are close to reaching; some have estimated that a fully functional AI with this level of intelligence could be completed as early as 2020.



AI can also be classified as 'strong' or 'weak'. Weak artificial intelligence programmes respond to inputs by identifying key terms and matching a command to a task they have already been programmed to do. For example, asking a smart home programme to turn on the lights works because the programme recognises the key terms 'lights' and 'on', allowing it to associate what is being said with the command to turn on the lights. This also means that weak AI programmes do not truly understand what they are being told, and require a baseline programme to associate commands with in order to function. Strong AI programmes, on the other hand, use what is known as clustering and association to process the data they are fed. Unlike weak AI, strong AI does not need pre-programmed responses to fit the inputs it receives; instead it behaves more like a human brain, capable of creating a suitable response by itself.

2.3 THE SCENE TODAY
Today, the most impactful subfield of AI research is 'machine learning', which is slowly being integrated into every aspect of our lives. Corporations like Amazon and Google, for example, use machine learning and data-processing AI programmes to sort through massive data sets and match users with the best advertisements and recommendations in order to keep us engaged. Others are branching out into different fields of study with the aim of improving our quality of life. Very soon we could see AI in the medical field, performing surgeries and prescribing medicine in place of human doctors and pharmacists. AI may also be able to educate children, or act as an alternative to psychiatrists or psychologists in the near future, given its cognitive abilities and adaptability to situations.
While this may seem similar to what human psychologists already offer, the fundamental difference lies in the ability of AI programmes to run through data sets far too large for humans to manage, allowing them to make better, more informed choices when dealing with individual cases of psychological issues [3]. Besides improving the quality of human life, AI is also on track to improve several facets of the entertainment industry. AI programmes are becoming ever more important in the personalisation of a user’s experience on an entertainment platform, and are key components of the targeted marketing and advertising campaigns run by Entertainment and Media (E&M) companies. Many corporations are even looking into the creation and implementation of machine learning programmes that can help develop and create advertisements and trailers for upcoming projects [4]. Other branches of practical AI research that may become usable in the near future include space exploration, automated audio post-production and self-driving vehicles, as described by Ayuka Kitaura in the Scientific Harrovian (Issue IV, 2019).
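Returning briefly to the smart-home example from section 2.2: the keyword matching that characterises ‘weak’ AI can be sketched in a few lines of Python. The commands and phrasing below are hypothetical, and real assistants use far more elaborate language pipelines; the sketch only shows why such a programme works for known phrasings yet fails on an unfamiliar one.

```python
# Toy illustration of 'weak AI' command matching: the programme only
# recognises commands it has been explicitly given, and understands nothing.
COMMANDS = {
    frozenset({"lights", "on"}): "Turning the lights on",
    frozenset({"lights", "off"}): "Turning the lights off",
}

def respond(utterance: str) -> str:
    words = set(utterance.lower().split())
    for keywords, action in COMMANDS.items():
        if keywords <= words:  # are all of this command's key terms present?
            return action
    return "Sorry, I don't understand"  # no baseline command to fall back on

print(respond("Please turn the lights on"))  # key terms match a known command
print(respond("Make the room brighter"))     # same intent, but no key terms
```

The same request phrased without the programmed key terms ("make the room brighter") fails, which is exactly the limitation the article describes.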

3 BREAKTHROUGHS AND NEW USES FOR ARTIFICIAL INTELLIGENCE

3.1 TREATING ASD WITH SOCIAL ROBOTS

Since the turn of the century, the frequency of children being diagnosed with ASD (Autism Spectrum Disorder) has shot up, particularly in countries such as the United States. Today, around 1 in 59 children in the United States have been identified as having ASD, according to the Centers for Disease Control and Prevention (CDC), compared to 1 in 150 just 20 years ago. With this explosive growth in individual cases of ASD, the demand for treatment or forms of therapy has increased substantially. If the ASD ‘epidemic’ were similar to any other form of pathogenic outbreak, dealing with the influx of cases would be costly and difficult at first, but still ultimately manageable. However,


unlike treating diseases or physical ailments, treating a developmental disorder such as ASD is far more complex, due to the nuances of each and every case. Treating the symptoms of this disorder - which include a lack of body language, difficulty understanding social cues, failure to give or recognise eye contact, and a lack of variance in tone of voice - is something that needs to be targeted and tailor-made for each person in need of treatment. For years, treatment has been delivered by therapists and doctors: human beings who could slowly adapt to their patients and improve their social skills one at a time. People were long under the impression that treating developmental or psychological disorders was a uniquely ‘human’ job, since only human beings were thought capable of adapting their methods of treatment to suit the individual patient, and only humans could teach each other to act and socialise in a proper way. However, soon this may not be the case. One of the biggest issues with the status quo is that treatment is largely inaccessible due to high demand but low supply (the limited number of psychiatrists and children’s doctors). As such, researchers like Brian Scassellati, a robotics expert and cognitive scientist at Yale University, are looking for ways around these shortcomings of ASD treatment. Scassellati has developed a number of social robots (Figure 3) that interact with children with ASD, to see whether an AI programme could be more effective than a human at teaching children to pick up social cues. The results were striking: after just one session with the robots, many of the children he had been working with were exhibiting forms of normal social behaviour - talking, laughing, and making eye contact - during the Q&A session that followed the AI therapy.
These milestones in the treatment of ASD, which would usually take some human therapists weeks or even months to accomplish, took the social robots just 30 minutes [5].

Figure 3: One of Brian Scassellati’s social robots. (Source: news.yale.edu) The reason behind the effectiveness of these robots is the subject of much debate. Scassellati, like many of his colleagues, believed that the human-like qualities of the robot helped make children more responsive to its actions. However, after further testing, it appeared that there was no real difference between using a humanoid robot and a non-humanoid one, as the children responded in almost identical ways. On the other hand, there was a major difference in the quality of treatment between using robots and using a tablet or screen as a replacement: children responded far worse to a simple screen projection. This suggested to Scassellati and his team that what made the robots so effective was not how humanoid they were, but their ability to respond to the child’s actions while lacking the inherently ‘human’ qualities, such as non-verbal communication, that can be so overwhelming to these children in the first place [6].


Since then, Scassellati has conducted many more in-depth studies into how these newly programmed social robots could help the children and families they were loaned out to. During a one-month-long study, he provided 12 families with a tablet computer filled with social games and a new robot called ‘Jibo’, which would provide constant and immediate social feedback to the children during and after their games. Over the course of the study, all 12 subjects were shown to have made significant progress in their social development: many of the children were starting to make prolonged eye contact when talking, initiating conversations, and responding better to communication. However, this form of robot/AI-based therapy is still in its nascent stages, as there is no definitive proof that therapy sessions provide any permanent change in the children’s social skills. Scassellati later stated that “We wouldn’t expect to see any permanent change after just 30 days”; however, he also believes that this field of study is “very promising”. Many experts in the field feel the same way, and believe that since human-led therapy sessions are both scarce and expensive, robot-based therapy could be used to enhance the effectiveness of the therapy sessions. In Scassellati’s own words: “Most families can’t afford to have a therapist with them every day, but we can imagine having a robot that could be with the family every day, all the time; on demand, whenever they need it, whenever they want it” [7].

3.2 ALGORITHMS TO AID FUSION TECHNOLOGY RESEARCH

One of the most pressing issues of the modern world is energy consumption. As nations develop, the human race as a whole has become more and more dependent on the resources used to power our homes and industries. For the past two centuries or so, this has meant burning large amounts of fossil fuels such as oil, natural gas, and coal.
However, by the mid-to-late 1900s, alternative forms of energy production began to surface, including solar, wind, hydroelectric, and nuclear power. Currently, 14% of Earth’s electricity is generated by nuclear power plants, although this figure varies heavily from country to country: LEDCs (Less Economically Developed Countries) may draw less than 1% of their energy from nuclear power, while an MEDC (More Economically Developed Country) such as France may use it for up to 75% of its electricity. The reasons why richer countries tend to gravitate towards nuclear power are obvious: nuclear power plants are far more efficient than conventional, fossil fuel-based power plants, and nuclear power is considerably more sustainable and generally less damaging to the environment. In this day and age, efficient and clean energy is the way forward for many of the world’s leading nations as they look for ways to maintain or even increase energy production without contributing further to the growing issue of global warming. However, the nuclear power stations used today are still ‘fission’ power plants, meaning they operate by splitting unstable atomic nuclei - typically Uranium-235 - into smaller ‘daughter’ nuclei, releasing energy as they do so. As nuclear fission is based on splitting large atomic nuclei into smaller parts, the byproducts of nuclear reactors are mainly radioactive and can be deadly to living organisms exposed to them for too long. On top of this, the nuclear waste produced in such reactors has an incredibly long half-life, meaning disposal is both difficult and expensive. This, coupled with the violently reactive nature of the substances involved, is what prevents many nations from fully embracing nuclear fission as a major form of energy provision.
As a result, scientists have been attempting to develop an industrially viable form of nuclear fusion technology, which generates energy by fusing small, harmless nuclei together in the same way that stars in the universe do, providing a clean and just as efficient alternative to replace current fission technology. This would solve the biggest issue people currently have with nuclear energy: that it is dangerous and potentially disastrous, with historical evidence such as the Chernobyl Disaster and Fukushima Meltdown.



However, there have been several setbacks in the development of fusion technology. Research into fusion reactors has been going on since as early as the 1940s, but technical difficulties, such as the sudden loss of confinement of plasma particles and energy during the reaction, have held researchers back for decades. ITER, the leading authority on fusion technology and a collaborative project between China, the EU, India, Japan, South Korea, Russia and the USA, estimates that the first commercially viable fusion reactor will be available sometime after 2054. However, with the help of AI and deep learning programmes, this may be accomplished much sooner than expected. At the US Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) and Princeton University, scientists are beginning to use AI programmes to forecast, and ultimately prevent, the sudden disruptions that can halt fusion reactions and damage the tokamaks - the doughnut-shaped devices that confine the plasma (Figure 4) [8] [9].

Figure 4: Depiction of fusion research on a doughnut-shaped tokamak enhanced by artificial intelligence [8] The deep learning code, the Fusion Recurrent Neural Network (FRNN), is built upon the massive databases provided by decades of research from the DIII-D National Fusion Facility and the Joint European Torus, two of the largest fusion facilities in the world. From this data, FRNN is able to train itself to identify and predict disruptions on tokamaks outside of its original dataset using ‘neural networks’, which apply mathematical algorithms to weigh the input data and decide whether a disruption will occur, and how severe it will be, adjusting themselves after every session to correct for mistakes. A disruption - a rapid loss of stored thermal and magnetic energy caused by growing instability in the tokamak plasma - can melt the first wall of the tokamak and cause leaks in the water cooling circuits. FRNN is approaching the 95% accuracy threshold within the 30-millisecond time frame required by researchers in the field, which means it could soon become a practical and critical part of fusion research. This new way of understanding and predicting disruptions, one of the key issues that have plagued fusion development for decades, may also evolve into an understanding of how to control them [10]. Julien Kates-Harbeck, a Harvard physics graduate student and a collaborator on the FRNN project, stated that although FRNN is currently able to predict and help mitigate the devastating impacts of disruptions, the end goal is to “use future deep learning models to gently steer the plasma away from regions of instability with the goal of avoiding most disruptions in the first place.” However, moving from a predictive programme to one that can effectively control plasma would require much more work, and comprehensive code grounded in the first principles of physics.
While this seems like yet another setback in a century-old quest for the so-called ‘holy grail’ of energy, Bill Tang, a principal research physicist in PPPL, states that controlling plasma is just “knowing which knobs to turn on a tokamak to change conditions to prevent disruptions” and that it is “in our sights and it’s where we [PPPL team] are heading” [11].
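The core idea behind such a network - weighting input signals and adjusting those weights after every mistake - can be illustrated with a toy perceptron trained on made-up ‘instability readings’. This is a sketch of the learning principle only, not of FRNN itself, which is a large recurrent network trained on real tokamak data; all numbers below are synthetic.

```python
# Toy perceptron: learn to flag a 'disruption' (label 1) from two
# synthetic instability readings, adjusting its weights after each mistake.
def train(samples, epochs=50, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # nonzero only when the guess is wrong
            w[0] += lr * err * x1       # nudge each weight towards the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Synthetic training data: label 1 = disruption, 0 = stable plasma.
data = [((0.9, 0.8), 1), ((0.7, 0.9), 1), ((0.2, 0.1), 0), ((0.3, 0.4), 0)]
w, b = train(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print(predict(0.95, 0.9))  # high readings -> flags a disruption (1)
print(predict(0.1, 0.2))   # low readings -> stable (0)
```

Real disruption predictors replace these two hand-made inputs with long time series of plasma measurements and the single neuron with deep recurrent layers, but the ‘weigh, compare, correct’ loop is the same in spirit.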



3.3 APPLICATIONS OF AI IN THE MEDICAL FIELD

The main issue with the way surgeries are conducted in the modern world is how easily they can be impacted by human error. Operations are complex and convoluted procedures that require the operators to have a specific skill set for each and every patient they come across. Many surgical procedures involve making extremely precise cuts and incisions, leaving little to no room for error, so the possibility of human mistakes affecting the success of a procedure is high. That possibility is further exacerbated by the high-stress environments surgeons and doctors work in, as well as the long shifts they typically take. As a result, surgeries are less efficient and more time-consuming than they otherwise could be. This is where AI-assisted surgeries come in. Even though these programmes are nowhere near sophisticated enough to complete operations on their own, they are more than capable of helping reduce the variations in a surgeon’s technique that could affect the outcome of the procedure and the patient’s recovery. Doctor John Birkmeyer, chief clinical officer at Sound Physicians, stated that “a surgeon’s skill, particularly with new or difficult procedures, varies widely, with huge implications for patient outcomes and cost. AI can both reduce that variation, and help all surgeons improve - even the best ones.” Programmes can help surgeons determine what is happening during complex surgical procedures by providing real-time data about the movements and cuts the surgeons make. This allows surgeons to work with a greater degree of precision and accuracy, as they are able to adjust and adapt to the information they are given on the fly. According to one study, AI-assisted surgeries also result in fewer complications than human surgeons working by themselves [12].

Figure 3: An artistic depiction of the role of AI in surgery. (Source: Robotics Business Review) Even though this kind of technology is still in its early stages, it’s already seeing great success around the world as an aid to conventional forms of surgery and, when paired with robotic systems, can perform tasks such as suturing small blood vessels (as thin as 0.03mm-0.08mm). The possibilities of AI-assisted surgeries are seemingly endless. However, AI can also benefit the medical field in ways beyond just this. Another key development in the medical AI field is the creation of virtual nursing assistants. A significant expense of any hospital or healthcare facility is the money spent on nurses and caretakers of patients. From interacting with patients to directing patients to the most effective care setting, these workers deal with many aspects of patient treatment by giving physical support through treatments, as well as emotional support through basic interactions and care-taking duties. It has been estimated that nurses account for close to $20 billion USD annually in the healthcare industry, a figure that could be drastically reduced after the introduction of AI-based, virtual nursing assistants.



Virtual nursing assistants would resolve many of the issues that the current healthcare industry has to contend with. Similar to how mental health robots and surgical-assisting AI help existing doctors and psychiatrists deal with their workload more efficiently, virtual nursing assistants (which operate 24/7) can monitor patients, interact with them, and help dispel worries patients may have about their own well-being. On top of their ability to interact directly with patients, these programmes are also able to help with administrative tasks, such as scheduling meetings between patients and doctors, and even prescribing medication for patients leaving the hospital. All things considered, the potential of these virtual nursing assistants is immense: they could save the industry billions of dollars in the long run, as less money would be needed to pay for nursing and care-taking, jobs which used to be distinctively ‘human’.

4 SHORTCOMINGS OF AI AND ROBOTICS

4.1 LACK OF EMOTIONAL INTELLIGENCE

One major area of concern in the development and use of AI is its perceived lack of humanity and human-like qualities. Many experts in fields that deal with interpersonal relationships or social interaction (including psychologists, therapists and doctors) believe that AI and social robots will never be capable of fully replacing humans in these fields. The gap between a programme designed to mimic humanity and true social interaction seems insurmountable, which leads many to believe that AI programmes, no matter how comprehensive or well-made, will only ever play a supporting role to humans [13]. However, this shortcoming of artificial intelligence may soon be dealt with. While it is true that current programmes cannot identify human emotions or social cues as effectively as humans can, breakthroughs in the research of ‘emotional AI’ - programmes that can detect and recognise emotions (a process called facial coding in market research) and then react accordingly - may one day help artificial intelligence step into these previously ‘untouchable’ fields. Experts believe that emotional AI could be used as a medical tool to help diagnose mental ailments such as depression and dementia, or be used in medical facilities as ‘nurse bots’ to monitor and support patients [14].

4.2 SENTIENT PROGRAMMES

Figure 5: An example of emotion-detection software [13]

Perhaps the biggest worry people have about the development of Artificial Intelligence is the possible rise of sentient AI programmes. While our current level of technology is still several decades away from creating programmes powerful enough to pose the problems the general public envisions, the potential issues of sentient AI are still food for thought. For a programme to be classified as ‘sentient’, it must fulfil two requirements: first, the programme must be self-aware, and second, it must qualify as ‘conscious’. As of right now, humanity has already created several programmes capable of rudimentary self-awareness, many of which can analyse and adapt to their surroundings without prompts from code. However, although we can create robots that mimic a human’s consciousness and understanding of themselves and the outside world, we are still unable to make programmes that develop such an understanding by themselves, rather than by drawing on data from the internet [15].


As humanity has only just approached the AGI phase, it is unlikely that we will experience anything like what we see in films such as The Terminator or Ghost in the Shell (Figure 6). A survey conducted in 2016 revealed that the majority of experts in the field of AI research and computer science feel it will take at least 50 more years for technology to develop to the point where AI can replace skilled human jobs. Right now, most programmes operate by memorising data, identifying patterns within it, and slowly learning to match new pieces of input data to the patterns they have identified - very different from how human minds operate (based on prediction rather than memorisation). Facebook’s head of AI development, Yann LeCun, has stated that in terms of unfocused intelligence, AI programmes are currently behind rats. As a result, it seems more reasonable to see the rise of sentient AI as a potential long-term issue once humanity reaches the ASI research stage; until then, no one is in danger of being overtaken or undermined by these programmes [16].

Figure 6: An example of a currently unachievable robot from The Terminator franchise (Source: Film Stories)

5 CONCLUSION

To conclude, the potential of Artificial Intelligence programmes is limitless. As technology and our understanding of computer science develop, humanity will inevitably become increasingly dependent on the abilities of AI programmes. In the short term, we will see developments and improvements in industries such as healthcare and energy, which will help propel the human race forward. Currently, fears of AI becoming too influential or powerful, or even developing ‘sentience’, are unfounded, as humanity is still decades away from being capable of developing Artificial Superintelligence (ASI) programmes that are truly sentient or self-aware. In the foreseeable future, the development and growth of AI-based fields of research should be monitored, but not restricted. The only real harm AI can do to humanity in the next 20-30 years or so is lowering the demand for certain jobs, which will become increasingly automated; the upsides are explosive economic, educational and healthcare developments. AI programmes can be a powerful and useful tool during this period; however, upon reaching the Artificial General Intelligence (AGI) phase, humanity will need to become more wary of these programmes and begin to place restrictions and limitations on their usage. Until that point, we should enjoy the benefits brought by these programmes to the fullest.



BIBLIOGRAPHY

[1] “The History of Artificial Intelligence” - Harvard
http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
[2] “The Evolution of Artificial Intelligence” - UBS
https://www.ubs.com/microsites/artificial-intelligence/en/new-dawn.html
[3] “AI in Movies, Entertainment, Visual Media” - Emerj
https://emerj.com/ai-sector-overviews/ai-in-movies-entertainment-visual-media/
[4] “The State of AI in 2019” - The Verge
https://www.theverge.com/2019/1/28/18197520/ai-artificial-intelligence-machine-learning-computational-science
[5] “How 30 days with an in-home robot could help children with ASD” - ScienceMag
https://www.sciencemag.org/news/2018/08/how-30-days-home-robot-could-help-children-autism
[6] “Artificial Intelligence and Autism” - Towards Data Science
https://towardsdatascience.com/artificial-intelligence-and-autism-743e67ce0ee4
[7] “Improving social skills in children with ASD using a long-term, in-home social robot” - ScienceMag
https://robotics.sciencemag.org/content/3/21/eaat7544
[8] “Artificial Intelligence accelerates development of limitless fusion energy” - SciTechDaily
https://scitechdaily.com/artificial-intelligence-accelerates-development-of-limitless-fusion-energy/
[9] “A Nuclear-Powered World” - NPR
https://www.npr.org/2011/05/16/136288669/a-nuclear-powered-world
[10] “Fusion energy pushed back beyond 2050” - BBC
https://www.bbc.com/news/science-environment-40558758
[11] “Plasma disruptions: a task force to face the challenge” - ITER
https://www.iter.org/newsline/-/3183
[12] “How AI-assisted surgery is improving surgical outcomes” - Robotics Business Review
https://www.roboticsbusinessreview.com/health-medical/ai-assisted-surgery-improves-patient-outcomes/
[13] “Emotion AI overview” - Affectiva
[14] “13 Surprising Uses for Emotion AI technology” - Gartner
https://www.gartner.com/smarterwithgartner/13-surprising-uses-for-emotion-ai-technology/
[15] “Researchers are already building the foundation for sentient AI” - VentureBeat
https://venturebeat.com/2018/03/03/researchers-are-already-building-the-foundation-for-sentient-ai/
[16] “How far are we from truly human-like AI?” - Forbes
https://www.forbes.com/sites/forbestechcouncil/2018/08/28/how-far-are-we-from-truly-human-like-ai/#3b494a2031ac



Bitcoin Explained Josiah Wu (Year 12, Churchill)

Seriously, what is this? In 2017, there was a huge frenzy surrounding Bitcoin. Not only did it attract attention from economists and investors, it also became a hot topic amongst the general public. In fact, throughout 2017, questions like “How to buy Bitcoin?” and “How to mine Bitcoins?” were in the Top Ten list of the most trending ‘How to’ queries on Google [1] and it even had its own subreddit [2]. But what is Bitcoin? To start, we need to understand that Bitcoin is, in fact, a type of cryptocurrency. Cryptocurrency is a digital currency; however, unlike the conventional currencies we know (such as US dollars or UK pounds), cryptocurrency is not controlled by any central authority. If I wanted to make a transaction with Bitcoin, the transaction would not be verified by any bank or government, but it would instead rely on a network of computers that govern transactions.

Figure 1: Depiction of a Blockchain (Source: matejmo, Getty Images)

Cryptocurrencies rely on a system called blockchain: an accessible public ledger that records all the valid transactions made between users, designed so that it is nearly impossible for anyone to tamper with it. A blockchain consists of two parts: the ‘blocks’ and the ‘chain’. Each block contains a long list of valid transactions, and is labelled with a unique identifier called a ‘hash’. The ‘chain’ connects one block to another to form the blockchain. Nobody owns or controls this public ledger. Instead, volunteers (known as miners) update it by creating a new block and connecting it to the latest existing block, helping the system circulate and function. The first miner to update the ledger is rewarded with 12.5 Bitcoins (~US$100,000 after conversion, as of Feb 2019 [3]). However, updating the public ledger is a laborious job. When a deal is struck and confirmed, the transaction is announced publicly to the Bitcoin network. Miners must first gather and verify one megabyte’s worth of those transactions into a block, and then solve an extremely difficult cryptographic ‘puzzle’ before they can upload their block onto the blockchain. This whole process is known as ‘mining’.

CRYPTOGRAPHIC HASH FUNCTION To understand this ‘puzzle’, we first need to grasp the idea of a cryptographic hash function. Such a function converts a string of plain text into a hash: a string of binary digits (also known as ‘bits’). The one Bitcoin uses is SHA-256, a common cryptographic hash function used for internet security. One characteristic of a cryptographic hash function is that every input has a unique output, and a minor change to the input changes the output drastically. For example, if we convert the word “Bitcoin” using SHA-256, it outputs* a string of 256 bits which starts with these 16 bits: 1011010000000101... However, changing the input by replacing the upper-case “B” with a lower-case “b”, “bitcoin” yields an output* which, as you can see, already differs significantly within the first 16 bits: 0110101110001000...
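This avalanche effect is easy to verify for yourself with Python’s standard hashlib module, which includes SHA-256:

```python
import hashlib

def sha256_bits(text: str) -> str:
    """Return the SHA-256 digest of `text` as a 256-character bit string."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return "".join(f"{byte:08b}" for byte in digest)

a = sha256_bits("Bitcoin")
b = sha256_bits("bitcoin")
print(a[:16])  # first 16 bits of SHA-256("Bitcoin")
print(b[:16])  # changing a single letter scrambles the entire output
```

The two 256-bit outputs bear no visible relation to one another, even though the inputs differ by a single character.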


SHA-256 is also computationally infeasible to run in the reverse direction. That is, given an output, it would be immensely difficult to determine a matching input, even with the world’s most powerful computers.

PROOF OF WORK Cryptographic hash functions are used to generate the hash of each block. Each hash depends on several characteristics of its respective block, including its transaction history, its timestamp, and a matching input (which will be discussed later). Besides this, the hash of the previous block is also involved in generating the hash, hence every block is linked to the one before it. These linkages form the ‘chain’ (Figure 2).

Figure 2: An example of a blockchain This is where the ‘puzzle’ mentioned earlier comes into play. The problem is to find a particular input that induces a hash with a specific number of zeros at its beginning (Figure 3). This is known as the Proof of Work (PoW).

Figure 3: An example of a correct and incorrect input generating the correct hash and incorrect hash, respectively The required number for Bitcoin is 30 leading zeros. What is the probability of finding such a hash in a single try? We can find out with the calculation below:

½ × ½ × ½ × ... × ½ (30 times) = (½)³⁰, or about 1 in 1.07 billion. For comparison, the odds of winning the jackpot in Mark Six (a legal lottery in Hong Kong) are about 1 in 140 million. This demonstrates that the chance of guessing a correct input is remarkably slim. Since no way to reverse-engineer a cryptographic hash function has yet been found, miners are left with no option but trial and error - randomly guessing inputs until a correct one is found. This system enables fair competition between the miners and incentivises voluntary updating of the ledger.
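The miners’ trial-and-error search can be reproduced in miniature. The sketch below uses hypothetical block data and a difficulty of only 16 leading zero bits (so it finishes in a moment rather than years); the principle, incrementing a nonce until the SHA-256 hash qualifies, is the same:

```python
import hashlib

def leading_zero_bits(data: bytes) -> int:
    """Count the leading zero bits of SHA-256(data)."""
    bits = "".join(f"{b:08b}" for b in hashlib.sha256(data).digest())
    return len(bits) - len(bits.lstrip("0"))

def mine(block_data: str, difficulty: int = 16) -> int:
    """Trial and error: try nonces until the hash has enough leading zeros."""
    nonce = 0
    while leading_zero_bits(f"{block_data}{nonce}".encode()) < difficulty:
        nonce += 1
    return nonce

nonce = mine("Alice pays Bob 45")
print(nonce)  # a nonce that serves as proof of work at this toy difficulty
```

Each extra zero bit of difficulty doubles the expected number of guesses, which is why 30 leading zeros takes on the order of a billion attempts while 16 takes only tens of thousands.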



WHY IS PROOF OF WORK NECESSARY? Proof of Work is necessary because it prevents anybody from altering the contents of the blockchain without getting caught. Imagine David, who is very greedy, wants to edit the transaction history so that Cameron pays him $450 instead of $45 (Figure 4).

Figure 4: Original transaction history that David wants to edit David hacks into the Bitcoin system to edit the transaction history. However, he soon realises that he has run into trouble: as the transaction history is changed, the hash is affected, therefore making the it invalid. This, in turn, affects the hash of the neighbouring blocks, as the previous hash is needed to generate a new hash (Figure 5).

Figure 5: David's attempt to edit the history In an attempt to cover this up, David then tries to recalculate all the matching inputs so that all the following hashes become valid again, as if nothing had gone wrong. However, he has to redo the Proof of Work of every subsequent block before anyone else manages to add a new one, so that nobody can tell the difference (Figure 6).

Figure 6: Blockchain PoW that David needs to redo As doing the Proof of Work of each block is extremely time consuming, there is a very small possibility of David redoing and finishing the PoW in time. In the end, David is caught red-handed.



From this scenario, we can therefore conclude that the Proof of Work system is vital, as it minimises the likelihood of successful unauthorised changes within the blockchain.
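David’s predicament can be demonstrated with a miniature blockchain: altering one transaction invalidates that block’s hash, and every later block’s ‘previous hash’ link breaks with it. The names and amounts below are illustrative, and the sketch omits Proof of Work entirely to keep the chaining logic visible:

```python
import hashlib

def block_hash(prev_hash: str, transactions: str) -> str:
    """Hash a block's contents together with the previous block's hash."""
    return hashlib.sha256((prev_hash + transactions).encode()).hexdigest()

# Build a three-block chain; each block stores the previous block's hash.
txs = ["Cameron pays David $45", "Eve pays Frank $10", "Grace pays Henry $7"]
chain = []
prev = "0" * 64  # conventional all-zero 'genesis' hash
for t in txs:
    h = block_hash(prev, t)
    chain.append({"prev": prev, "txs": t, "hash": h})
    prev = h

def valid(chain) -> bool:
    """Recompute every hash and check each block links to the one before."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["txs"]):
            return False
        prev = block["hash"]
    return True

print(valid(chain))  # True: the untouched chain checks out
chain[0]["txs"] = "Cameron pays David $450"  # David edits history...
print(valid(chain))  # False: block 0's stored hash no longer matches
```

To hide his edit, David would have to recompute the hash of block 0 and of every block after it; with Proof of Work attached to each recomputation, that race is effectively unwinnable.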

ENSURING VALID TRANSACTIONS It was briefly mentioned that transactions are broadcast to the Bitcoin network. But how can we ensure that no fraudulent transactions occur within the blockchain? Digital signatures are implemented for this purpose; similar to hand-written signatures, digital signatures act as verification from the deal initiator to show that they have approved the transaction. To prevent forgery of signatures, the scheme of private and public keys is borrowed from cryptography. The private key ensures that every signature is authentic and unpredictable; as the name suggests, it is accessible only to the deal initiator. A function is used to produce a signature: Sign(Transaction Information, Private Key) → Signature The deal initiator has to use that signature to make the transaction valid. Once the transaction is confirmed, it is sent into the Bitcoin network. The miners then have to check whether the transaction is authorised, a process that involves another function: Verify(Transaction Information, Signature, Public Key) → True/False An output of ‘True’ indicates that the signature is correct and therefore authorised. The miner can then move on and include this transaction in their block.
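The Sign/Verify pair can be illustrated with textbook RSA on deliberately tiny numbers. Note that Bitcoin itself actually uses a different signature scheme (ECDSA), and real keys are hundreds of bits long; this sketch only shows the asymmetry between the private signing key and the public verification key:

```python
import hashlib

# Toy RSA keypair (insecure, for illustration only): n = 61 * 53 = 3233,
# public exponent e = 17, private exponent d = 2753 (e*d ≡ 1 mod φ(n)).
n, e, d = 3233, 17, 2753

def sign(transaction: str, private_key: int) -> int:
    """Sign(Transaction Information, Private Key) -> Signature."""
    digest = int.from_bytes(hashlib.sha256(transaction.encode()).digest(), "big") % n
    return pow(digest, private_key, n)  # only the private-key holder can do this

def verify(transaction: str, signature: int, public_key: int) -> bool:
    """Verify(Transaction Information, Signature, Public Key) -> True/False."""
    digest = int.from_bytes(hashlib.sha256(transaction.encode()).digest(), "big") % n
    return pow(signature, public_key, n) == digest  # anyone can check this

sig = sign("Cameron pays David $45", d)
print(verify("Cameron pays David $45", sig, e))          # True: authorised
print(verify("Cameron pays David $45", (sig + 1) % n, e))  # False: forged signature
```

Because only the deal initiator knows the private exponent, nobody else can produce a signature that passes verification, yet everybody can run the check using the public key.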

CONCLUSION

Despite the recent frenzy about Bitcoin, it is uncertain whether it will last into the foreseeable future. However, the system of blockchain will be adopted for many other purposes, such as data sharing and cybersecurity. It is therefore my belief that blockchain will play a vital role in running our future society.

*SHA-256 output is more commonly written as hexadecimal (base 16); see https://emn178.github.io/online-tools/sha256.html. I therefore wrote a programme to convert hexadecimal to binary:
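The original listing is not reproduced here, but the conversion itself takes only a couple of lines of Python (a sketch of the idea, not necessarily the author's exact programme):

```python
import hashlib

def hex_to_binary(hex_digest: str) -> str:
    """Convert a hexadecimal digest to a binary string (4 bits per hex digit)."""
    return bin(int(hex_digest, 16))[2:].zfill(len(hex_digest) * 4)

digest = hashlib.sha256(b"hello").hexdigest()  # 64 hex characters
bits = hex_to_binary(digest)                   # 256 binary digits
print(len(bits))                  # 256
print(hex_to_binary("ff00"))      # 1111111100000000
```

The `zfill` call matters: without it, leading zero bits of the digest would be silently dropped.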

BIBLIOGRAPHY
[1] https://www.telegraph.co.uk/technology/2017/12/13/bitcoin-mania-googles-top-searches-2017-dominated-digital-currency/
[2] https://www.reddit.com/r/Bitcoin/
[3] https://www.investopedia.com/terms/b/block-reward.asp
https://www.investopedia.com/terms/b/bitcoin-mining.asp
https://www.youtube.com/watch?v=kZXXDp0_R-w
https://www.youtube.com/watch?v=bBC-nXj3Ng4&t=1121s
https://bitcoin.org/bitcoin.pdf


The Tale of:

The Golden Ratio, Fibonacci Numbers and Lucas Numbers

Helen Ng (Year 9, Gellhorn)

1 INTRODUCTION

Do you ever just sit in a maths classroom and wonder who invented all these equations for us to learn off by heart? There must have been moments where you wondered who in their right mind would derive all these crazy equations just to entertain themselves - that must be insane. Unfortunately (or fortunately), there is no single man (or woman) to blame - nature has built itself with the utmost consideration, and mathematicians have been lucky enough to catch a glimpse of its beautiful patterns, begotten not created.

2 GOLDEN RATIO

Unfortunately, this ratio is not actually made of gold, so it does not make us rich - I sure wish it did though! However, it brings us a lot of valuable knowledge, and that is priceless.

2.1 HISTORY OF THE GOLDEN RATIO

It is unknown when the golden ratio was discovered, but it features throughout ancient Egyptian and Greek architecture [1]. Phidias (500-432 BC) is the first person known to have applied phi, in the sculptures of the Parthenon. It is also suspected that the Egyptians used phi, Φ (as well as pi, π), in the building of the Great Pyramids (Figure 1). Euclid (365-300 BC), in his book "Elements", referred to the golden ratio as "dividing a line in the extreme and mean ratio", and the number is also linked to the construction of a pentagram (that doesn't mean we summon demons with phi, folks).

Figure 1: The Great Pyramids of Giza (Source: Richard Nowitz/Getty Images)

2.2 APPLICATIONS OF THE GOLDEN RATIO

Nature never fails to provide mesmerising sights no matter where you are: patterns on a leaf, the eye of a storm, the arrangement of the centre of a sunflower... [2]. These aesthetics have inspired artists through the centuries to produce artwork that never ceases to amaze. The secret behind all of this? The golden ratio. In 1509, Luca Pacioli wrote a book that refers to the golden ratio as the "divine proportion." Leonardo da Vinci illustrated it, and the ratio was later called the sectio aurea, which means the "golden section". The golden ratio is also found in Baroque music, composed during the period spanning approximately 1600-1750 by composers such as J.S. Bach. Bach is known for his neat and extremely symmetrical pieces, and in his two-part inventions for keyboard the structure is especially prominent: take the number of measures in a piece and multiply it by 1/Φ (0.618) - the product is the position of the measure where the piece reaches its climax.



2.3 DERIVING THE GOLDEN RATIO MATHEMATICALLY

In a golden rectangle (Figure 2), the sides are (a + b) and a, where a is the side length of the biggest possible square within the rectangle. The rectangle of dimensions (a + b) by a (blue + pink area) is similar to the rectangle of dimensions a by b (pink area), so:

(a + b)/a = a/b

Figure 2: A golden rectangle [3]

This expression means any two numbers a and b are in the golden ratio when they satisfy this equation for a > b. Now, if we label the ratio a/b with the Greek letter Φ "phi" (not to be confused with the Vietnamese dish pho!!) and rewrite the equation, we get:

Φ = 1 + 1/Φ

This is the property of the golden ratio (1.618…): the number is equal to its reciprocal plus 1. Now, back to the rectangles. By rearranging the formula above with simple algebra (multiplying both sides by Φ), we get:

Φ² = Φ + 1

We know for a fact that b ≠ 0 because it is a geometric length, therefore we can infer that

Φ² - Φ - 1 = 0

For now, replace Φ with x in the equation to make it easier to work with while solving.

x² - x - 1 = 0

By using the quadratic formula, the real roots (another word for "solutions") of the equation are:

x = (1 ± √5)/2

that is, x = 1.618… or x = -0.618…
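As a quick numerical check (a throwaway sketch, not part of the original derivation), we can confirm that the roots behave as claimed:

```python
import math

# Roots of x^2 - x - 1 = 0 from the quadratic formula: (1 +/- sqrt(5)) / 2
phi = (1 + math.sqrt(5)) / 2    # 1.6180339887..., the golden ratio
beta = (1 - math.sqrt(5)) / 2   # -0.6180339887...

print(abs(phi**2 - phi - 1) < 1e-12)   # True: phi satisfies the quadratic
print(abs(phi - (1 + 1/phi)) < 1e-12)  # True: phi is its reciprocal plus 1
print(abs(phi + beta - 1) < 1e-12)     # True: the two roots sum to 1
```

The tolerance comparisons are there because floating-point arithmetic leaves a residue of around 10⁻¹⁶; exact equality tests on floats would be fragile.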



3 FIBONACCI NUMBERS

(Random fact: Leonardo Fibonacci helped a lot in spreading the Hindu-Arabic numeral system [base 10] in Europe!)

Following on from x² - x - 1 = 0, let the real solutions x = 1.618… and x = -0.618… be denoted α and β respectively, where α > β. Let's focus on α for now (as it is the golden ratio); since it is a root of the equation, we know that:

α² - α - 1 = 0, so α² = α + 1

We start to wonder what happens if we keep raising α to higher powers.

3.1 INVESTIGATING POWERS OF ROOT α

α raised to the power of 3:

α³ = α² · α = (α + 1)α = α² + α = 2α + 1

α raised to the power of 4:

α⁴ = α³ · α = (2α + 1)α = 2α² + α = 2(α + 1) + α = 3α + 2
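The two reductions above follow a single mechanical rule: multiplying Aα + B by α gives Aα² + Bα = A(α + 1) + Bα = (A + B)α + A. A small Python sketch (the helper name is my own, not from the article) applies this rule repeatedly:

```python
def alpha_power(n):
    """Return (A, B) such that alpha**n = A*alpha + B, using alpha^2 = alpha + 1."""
    A, B = 0, 1              # alpha^0 = 0*alpha + 1
    for _ in range(n):
        A, B = A + B, A      # multiply by alpha: (A+B)*alpha + A
    return A, B

print(alpha_power(3))  # (2, 1): alpha^3 = 2*alpha + 1, as derived above
print(alpha_power(4))  # (3, 2): alpha^4 = 3*alpha + 2
```

(Careful: running it for higher n will spoil Brain Teaser 1 below!)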

Brain Teaser 1: Try spotting the pattern! Write each power as αⁿ = Aα + B and fill in the table below. Hint: A and B are all integers (answers in section 7.1).

n      A      B
0      0      1
1      1      0
2      1      1
3      2      1
4      3      2
5      ___    ___
6      ___    ___
7      ___    ___
8      ___    ___
9      ___    ___
...    ...    ...
n      ___    ___

Extension: find the general term!



3.2 DEFINITION OF FIBONACCI NUMBERS

After (hopefully not) peeking at the answers, you will find that both sequences A and B follow a pattern: a new term is generated by adding the two terms that come before it. This "pattern" is exactly the property of the Fibonacci sequence. Furthermore, the sequence starts with two numbers, 0 and 1. Putting this into more mathematical terms, the sequence can be expressed as a RECURRENCE RELATION (or RECURSION)(1) (see section 7.2):

Fn+1 = Fn + Fn-1 (for n ≥ 1)

with INITIAL CONDITION F0 = 0, F1 = 1.

3.3 BINET'S FORMULA: EXPLICIT GENERAL TERM OF THE FIBONACCI SEQUENCE

Linking the definition of Fibonacci numbers back to the sequence derived from powers of root α, we find that the general term of the sequence αⁿ is no more than:

αⁿ = Fnα + Fn-1

Remember our other friend, root β? The same equation can be applied to get:

βⁿ = Fnβ + Fn-1

Now that we have two equations, eliminate Fn-1 by subtracting one from the other, and we get:

αⁿ - βⁿ = Fn(α - β), i.e. Fn = (αⁿ - βⁿ)/(α - β)

Substitute the numerical values of α and β (so that α - β = √5) to reach the final equation:

Fn = (1/√5) [((1 + √5)/2)ⁿ - ((1 - √5)/2)ⁿ]

*It is strongly recommended that you try out the steps yourself - who wouldn't love themselves some algebra!* Jacques Philippe Marie Binet was a French mathematician who derived this closed-form formula for the general term of the Fibonacci sequence [4]. Interestingly, although the formula involves a lot of surds, the final answer will always be a whole number.
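Binet's formula is easy to check against the recurrence numerically (a sketch; `round` absorbs the tiny floating-point error left when the surds cancel):

```python
import math

def fib_binet(n):
    """Binet's closed form: the surds cancel, leaving a whole number."""
    sqrt5 = math.sqrt(5)
    alpha = (1 + sqrt5) / 2
    beta = (1 - sqrt5) / 2
    return round((alpha**n - beta**n) / sqrt5)

def fib_recurrence(n):
    """F(n+1) = F(n) + F(n-1) with F0 = 0, F1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib_binet(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(all(fib_binet(n) == fib_recurrence(n) for n in range(50)))  # True
```

Because |β| < 1, the βⁿ term shrinks rapidly, which is why Fn is simply αⁿ/√5 rounded to the nearest integer.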



4 LUCAS NUMBERS

Consider the roots of the quadratic at the very start of Section 3:

x² - x - 1 = 0, x = α or x = β

According to the formulae for the sum and product of roots of a quadratic equation ax² + bx + c = 0, the sum and product of the roots are -b/a and c/a respectively, so we know that:

α + β = 1, αβ = -1

Now, as young, diligent and proactive mathematicians, of course we wonder what will happen when we, you know, mess with these equations. We already have α + β = 1; now, by raising α and β to higher powers separately and adding, we get equations such as:

α² + β² = (α + β)² - 2αβ = 1 + 2 = 3

(By the way, solving these equations requires knowledge of the basic 'algebraic identities', which tell us how mathematical expressions relate to each other, e.g. a + 0 = a, or (a + b)² = a² + 2ab + b². Some really useful stuff; they will definitely help you in the long run with maths!)

Brain Teaser 2: Investigating Lucas numbers (answers in section 7.1)

n      αⁿ + βⁿ
0      2
1      1
2      3
3      4
4      ___
5      ___
6      ___
7      ___
8      ___
9      ___
...    ...
n      ___



Congratulations if you’ve made it this far without being tempted to peek at the answers! We can see that these numbers follow the same property as Fibonacci numbers: each term is the sum of the two previous terms. Using the recurrence relation from earlier on, we can express this as:

Ln+1 = Ln + Ln-1 (for n ≥ 1)

with INITIAL CONDITION L0 = 2, L1 = 1.

This sequence is called the LUCAS NUMBERS, named after the French mathematician Édouard Lucas, who was famous for studying the Fibonacci sequence. I personally enjoy thinking of Lucas numbers as the dark twin of the Fibonacci numbers - Lucas numbers are just as valuable, but don't get much attention. Is there an expression that denotes Lucas numbers in terms of α and β? From the table, we can infer that:

Ln = αⁿ + βⁿ

which agrees with α + β = 1 giving L1 = 1.

5 THEIR RELATIONSHIPS

5.1 FIBONACCI NUMBERS AND THE GOLDEN RATIO

The recurrence relation of the Fibonacci sequence is Fn+1 = Fn + Fn-1. When we divide both sides by Fn, we get:

Fn+1/Fn = 1 + Fn-1/Fn

We assume that the ratio of consecutive Fibonacci numbers approaches a fixed numerical value as n approaches infinity(3). Now, define this limiting ratio as a new variable k and rewrite the equation as:

k = 1 + 1/k

When a number is equal to its reciprocal plus 1, it is in the golden ratio. Therefore, this final expression tells us that as the Fibonacci sequence goes on towards its nth term, the ratio between two consecutive numbers approaches the golden ratio Φ. Note that this statement does not just apply to the Fibonacci sequence: any sequence which generates terms from the sum of the previous two terms also fulfils it. Therefore, consecutive Lucas numbers are in the golden ratio too! [5]
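This convergence is easy to see numerically; the sketch below iterates the "add the previous two" rule from different seed pairs (the seed values here are chosen arbitrarily for illustration):

```python
PHI = (1 + 5 ** 0.5) / 2   # 1.6180339887...

def ratio_after(a, b, steps=40):
    """Iterate t(n+1) = t(n) + t(n-1) from seeds a, b; return the final ratio."""
    for _ in range(steps):
        a, b = b, a + b
    return b / a

print(abs(ratio_after(0, 1) - PHI) < 1e-12)  # True: Fibonacci seeds
print(abs(ratio_after(2, 1) - PHI) < 1e-12)  # True: Lucas seeds
print(abs(ratio_after(4, 9) - PHI) < 1e-12)  # True: arbitrary seeds
```

Whatever the (positive) seeds, forty steps are more than enough: the error shrinks geometrically, by a factor of about |β/α| ≈ 0.38 per step.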



Another way that Fibonacci numbers are connected to the golden ratio can be neatly represented by Figure 3, which shows a 'golden spiral' encased in a golden rectangle [6]:

Figure 3: A golden spiral encased in a golden rectangle [6]

Surely we have all seen this spiral at some point in our lives in Mona Lisa conspiracy videos, but what exactly does it mean? In this golden rectangle (yes, remember our friend from the start?), there are squares with side lengths equal to Fibonacci numbers, and quarter arcs are drawn within them. No matter how many more squares we add to the rectangle, the base and height of the rectangle will always be two consecutive Fibonacci numbers, therefore we can deduce that:

Area of golden rectangle = FnFn+1

Thinking from another perspective, the area of the rectangle is no more than the areas of all the little squares, with side lengths of Fibonacci numbers, added up:

Areas of little squares = F1² + F2² + … + Fn²

Equating the two,

F1² + F2² + … + Fn² = FnFn+1

This diagram of a rectangle shows that the sum of the squared Fibonacci numbers up to the nth term is just the nth term multiplied by the term that comes after it.

5.2 FIBONACCI AND LUCAS NUMBERS

As mentioned before, the Lucas numbers are a lot like the Fibonacci numbers - so are they, in any way, connected? The answer is yes, and in many ways. Below are two of the ways they are connected (because, as a Y9 student, I am still incapable of comprehending lots of great works by great mathematicians. However, if I have sparked your passion to become a young, diligent and proactive mathematician, feel free to look up more ways in which they are connected!!)



1. Consider the general terms of the Fibonacci and Lucas sequences:

Fn = (αⁿ - βⁿ)/(α - β), Ln = αⁿ + βⁿ

Multiplying them together and expanding the numerator (the difference of two squares), we get:

FnLn = (αⁿ - βⁿ)(αⁿ + βⁿ)/(α - β) = (α²ⁿ - β²ⁿ)/(α - β) = F2n

When Fibonacci and Lucas numbers of the same position are multiplied, the product is the Fibonacci number at twice the position, e.g. F4L4 = 3 · 7 = 21 = F8.

2. Observe the table below:

n      0    1    2    3    4    5    6    7    8    9
Fn     0    1    1    2    3    5    8    13   21   34
Ln     2    1    3    4    7    11   18   29   47   76

(Note that the indices run from n = 0, matching the initial conditions F0 = 0 and L0 = 2.)

A pattern appears throughout the sequences: each Lucas number is the sum of the two Fibonacci numbers on either side of the Fibonacci number "in the middle". This can be expressed as:

Ln = Fn-1 + Fn+1
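Both relationships in this section, together with the sum-of-squares identity from Section 5.1, can be verified with a short script (a numerical check over the first few terms, not a proof):

```python
def fib(n):
    a, b = 0, 1          # F0 = 0, F1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    a, b = 2, 1          # L0 = 2, L1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 20):
    assert fib(n) * lucas(n) == fib(2 * n)         # F(n) * L(n) = F(2n)
    assert lucas(n) == fib(n - 1) + fib(n + 1)     # L(n) = F(n-1) + F(n+1)
    assert sum(fib(k) ** 2 for k in range(1, n + 1)) == fib(n) * fib(n + 1)
print("all identities hold")
```

Each identity holding for twenty consecutive values of n is not a proof, of course, but it is a reassuring sanity check on the algebra.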

6 CONCLUSION

That wraps it up for the golden ratio, Fibonacci and Lucas numbers! As unconvincing as this sounds, coming from a big nerd who has taken the time to write such a long article about a sequence of numbers adding on top of each other, maths really makes us think about how we take day-to-day objects in nature for granted. Who knew the arrangement of flower petals (Figure 4) could lead to such elaborate calculations? I hope this article has inspired you to take more time to slow down and observe the world around us - sometimes just an extra glimpse can lead to magnificent discoveries.

Figure 4: The petals of a rose and the golden spiral (Source: https://www.juevesfilosofico.com/the-golden-ratio-2)



7 SOLUTIONS + NOTATIONS APPEARING IN THIS ARTICLE

7.1 SOLUTIONS

Brain Teaser 1 (αⁿ = Aα + B):

n      A      B
0      0      1
1      1      0
2      1      1
3      2      1
4      3      2
5      5      3
6      8      5
7      13     8
8      21     13
9      34     21
...    ...    ...
n      Fn     Fn-1 *

Brain Teaser 2:

n      αⁿ + βⁿ
0      2
1      1
2      3
3      4
4      7
5      11
6      18
7      29
8      47
9      76
...    ...
n      Ln

*There is not really an explicit general term for column B; however, it is notable that this column has exactly the same terms as column A, except shifted down by one row (row n = 0 being the exception, because anything to the power of 0 is 1), so the definition Fn-1 works for now.

7.2 NOTATIONS THAT APPEAR IN THIS ARTICLE

(1) A recurrence relation means that "one or more initial terms are given; each further term of the sequence or array is defined as a function of the preceding terms" [7]. In even simpler words, a term in a sequence is obtained by operating on the term(s) before it. One pertinent example is the factorial, because factorials are defined by n! = n(n - 1)! (for n ≥ 1), with the initial condition 0! = 1. If you are interested, feel free to read through a very helpful article here [8].
(2) In order to solve for two unknowns, we usually need a minimum of two equations so we can eliminate at least one variable. Therefore, when the variable Fn-1 appears in both equations, we eliminate it by the method of solving simultaneous equations. There are a few methods of solving simultaneous equations, but the most common are elimination and substitution.


(3) When a variable approaches a number (usually very big or very small), the notation x → y is used; it means that the variable x is approaching the number y. For example, when we see n → ∞, we know that n is getting bigger and bigger, approaching infinity. A function written afterwards indicates what exactly happens as n approaches infinity: the equation in Section 5.1 is saying that as n approaches infinity, the ratio of consecutive terms becomes equal to 1 + its own reciprocal, which means that it is in the golden ratio.

(4) Here, we introduce a notation called "summation", written with a capital sigma Σ, which means adding up the terms of a sequence up to a chosen point. The number below the Σ (e.g. k = 1) indicates the "starting point", while the number or expression above it (e.g. n) indicates the "stopping point". Putting it all together, a Σ with k = 1 below and n above means adding everything from the 1st term to the nth term of the sequence.

BIBLIOGRAPHY
[1] Meisner, Gary. "History of the Golden Ratio." The Golden Ratio: Phi, 1.618, 26 Aug. 2017, https://www.goldennumber.net/golden-ratio-history/.
[2] Hegde, Pratik. "Golden Ratio: What It Is And Why Should You Use It In Design." Medium, Prototypr, 4 Jan. 2018, https://blog.prototypr.io/golden-ratio-what-it-is-and-why-should-you-use-it-in-design-7c3f43bcf98.
[3] "Relationship of Sides in a Golden Rectangle." Wikipedia, 29 June 2011, https://en.wikipedia.org/wiki/Golden_ratio#/media/File:SimilarGoldenRectangles.svg.
[4] "Jacques Philippe Marie Binet." Wikipedia, Wikimedia Foundation, 22 Sept. 2019, https://en.wikipedia.org/wiki/Jacques_Philippe_Marie_Binet.
[5] Chasnov, Jeffrey R. Hong Kong, https://www.math.ust.hk/~machas/fibonacci.pdf.
[6] "Fibonacci Spiral." Wikimedia, 3 May 2017, https://en.wikipedia.org/wiki/Golden_spiral#/media/File:FibonacciSpiral.svg.
[7] "Recurrence Relation." Wikipedia, Wikimedia Foundation, 26 Oct. 2019, https://en.wikipedia.org/wiki/Recurrence_relation#Definition.
[8] "Recurrence Relations." math.ust.hk, https://www.math.ust.hk/~mabfchen/Math2343/Recurrence.pdf.



The Atom Through Time

Pierce Duffy (Year 13, Sun)

Democritus of Abdera was a Greek philosopher known as the "laughing philosopher" due to his constant cheerfulness and his belief that the main goal of life should be happiness. He lived in the 5th and 4th centuries BC and, without conducting any experiments, Democritus and his teacher Leucippus were able to theorise an 'atomic theory' with nothing but deduction and observational skills. He proposed that everything in the world was made up of what he called 'atomos', a Greek word meaning uncuttable or indivisible; the word 'atom' that is ubiquitous today stems from Democritus. In his theory, atoms were the smallest things in the universe and had existed forever. They are not alive, cannot be destroyed, are constantly moving, are infinite in number, and can bind to other atoms. Substances differ according to the shape, size and structure of their atoms. Although some of what Democritus theorised was incorrect, a surprisingly large amount was actually correct. What is more remarkable is the fact that he was theorising all this in the 4th century BC, a time when the first crossbow was being invented, the first aqueducts were being built, and the leading scientific belief was that everything was made up of four elements: Earth, Fire, Water and Air!

For almost 2,200 years this idea of atoms lay cold and untouched. That was until John Dalton, an English Quaker born in 1766, took an interest in atomic theory. Fascinated by science from a young age, he was just 26 years old when he became a teacher of mathematics and natural philosophy at Manchester's New College, where he carried out research on his own atomic theory. His earlier work at the college on gases led him naturally to work more formally on the idea of atoms. In 1808 Dalton published his book, A New System of Chemical Philosophy. In it he borrowed Democritus' term 'atom', and introduced his idea that all atoms of an element are identical, but atoms of different elements have different atomic weights.
This simple idea is the foundation of all modern Chemistry, and it is why Dalton is often called the 'father of atomic theory'. In his book he shared his belief that atoms are indestructible and the smallest of particles. His work alone sparked a wave of interest in atoms, and marked the birth of modern Chemistry.

J.J. Thomson was next to change the atomic scene forever, disproving both Dalton's and Democritus' belief that the atom was the smallest particle. In 1894 Thomson began his experiments with cathode ray tubes filled with gases at low pressures, with an anode and a cathode at either end. From the experiments he was able to determine the charge-to-mass ratio of the particles in the rays. He found that this ratio corresponded to particles roughly 1,800 times lighter than a hydrogen atom, and that it did not change even when he used different gases. This effectively disproved the idea that the hydrogen atom was the smallest particle. It also led him to believe there was a particle even smaller than the atom carrying a negative charge, and so the 'corpuscle' was born, or what we today call the electron. However, this went against everything that was known about the atom at the time: how could there be anything smaller than the atom? Thomson proposed that these negatively charged electrons were distributed in a uniform sea of positive charge, in what would later be known as the 'plum pudding model' (Figure 1).

Figure 1: A depiction of J.J. Thomson's 'plum pudding' model (Source: Wikimedia Commons)


Although this model was disproven a few years later, it was a massively important step in atomic theory, as it questioned the atom's position as the smallest particle in the universe. Thomson had, however, encountered difficulties in proving his 'plum pudding' model, in particular in demonstrating that atoms have a uniform positive charge. The Geiger-Marsden experiment, also known as the 'gold foil experiment' and carried out under Ernest Rutherford's direction, measured the scattering pattern of alpha particles (positively charged particles) fired at a thin sheet of gold. The two scientists expected all the alpha particles to pass straight through, as according to the 'plum pudding' model the atom's positive charge is spread out uniformly, and so would nowhere be concentrated enough to repel an alpha particle. The majority of the alpha particles did pass through as expected, but a very small fraction (around 1 in 20,000) were scattered at angles greater than 90º (Figure 2).

Figure 2: The Geiger-Marsden experiment (Source: Pearson)

If the atom's charge were uniformly distributed throughout the atom, it would not be sufficient to cause the kind of repulsion that would scatter the alpha particles to such a degree. In his 1911 paper, Rutherford presented his own model of the atom, referencing the unexpected results of the gold foil experiment. Rutherford described a very small (less than 1/3000th of the atomic diameter), highly charged centre, which would later be known as the nucleus. It took one more scientist, James Chadwick, to prove that the nucleus of an atom consisted of two different particles: protons with a positive charge, and neutrons with no overall charge. At this point in time, scientists knew that the atom was made up of smaller subatomic particles - electrons, protons and neutrons - but what was still unknown was the exact arrangement of these inside the atom. The Rutherford model was a good start, but it was refined by Niels Bohr using subsequent experiments a few years later. He built upon the idea of a nucleus at the centre of the atom, and theorised that electrons orbit around this centre in a solar-system-like model. To this day, Bohr's model of the atom is taught in schools up until Year 11, and it is what most people think of when someone says the word 'atom'. Although Bohr's model fixed problems with the Rutherford model, it had its own shortcomings, such as the assumption that electrons have a definite radius and orbit. In an attempt to keep the model alive, adjustments were made to it, which proved to be even more problematic. In the end, it was superseded by quantum theory. The model accepted today as the most accurate is the electron cloud model. Heisenberg's 1927 uncertainty principle and Erwin Schrödinger's 1926 work on the wave function and


electrons revealed that in the world of quantum mechanics, it is impossible to determine the exact position and momentum of an electron at any given time. With this information, the electron cloud model shows that electrons do not orbit the nucleus, but rather they move about in a region around the nucleus. You cannot predict the exact position of these electrons, only a cloud-like region where electrons are most likely to be found, hence the term ‘electron cloud’. Regions with the highest probabilities of an electron being present are known as electron orbitals, with each orbital having a different shape, amount of energy and number of electrons (Figure 3). This theory came about in the 1920s and has stood the test of time for over 90 years. Some might say that this must be it: the most accurate, most correct model of the atom.

Figure 3: A comparison between Bohr's model of the atom and the electron cloud model proposed after discoveries in quantum mechanics (Source: TracingCurves)

However, I would remind them that for over 2,000 years most people believed that the world was made up of the four elements and did not accept Democritus' atomic theory. Perhaps in the future, the electron cloud model that we currently 'believe' in will be rendered inaccurate... only time will tell.

BIBLIOGRAPHY
Williams, Matt. "What Is The Plum Pudding Atomic Model?" Universe Today, 6 Dec. 2016, www.universetoday.com/38326/plum-pudding-model/.
Ingraham, Paul. "John Dalton." Biography.com, A&E Networks Television, 15 May 2019, www.biography.com/scientist/john-dalton.
Williams, Matt. "What Is Bohr's Atomic Model?" Universe Today, 6 Dec. 2016, www.universetoday.com/46886/bohrs-atomic-model/.
Doud, Theresa, director. The 2,400 Year Search for the Atom. YouTube, 2014, www.youtube.com/watch?v=xazQRcSCRaY.
"Discovery of the Electron." PBS, n.d. Web. 8 Jan. 2014.
Hawking, Stephen W. A Brief History of Time. New York: Bantam, 1988. Print.



Discovery consists of seeing what everybody has seen and thinking what no one has thought. - Albert Szent-Györgyi


