The Scientific Harrovian - Issue 5, June 2020


SCIENTIFIC HARROVIAN

ISSUE V

2019-20

HARROW INTERNATIONAL SCHOOL HONG KONG

'SCIENCE AND THE SENSES'

BIOLUMINESCENCE: THE MYSTERY BEHIND THE LIGHT

THE CHICKEN MCNUGGET THEOREM

SYNESTHESIA

ROBOTICS & ARTIFICIAL INTELLIGENCE: HOW FAR CAN AND SHOULD AI TAKE US?

THE TALE OF: THE GOLDEN RATIO, FIBONACCI NUMBERS & LUCAS NUMBERS


MESSAGE FROM MRS CLIFFE

What a fabulous edition of the Scientific Harrovian! My congratulations go to all the writers, editors and illustrators, and of course to Stephenie Chen who has helmed this project with confidence, clarity and calm. In the midst of School closures and a pandemic, it has been inspiring to see pupils who are committed to making the best use of their time finding out more about the world around them and crafting their findings into an article which is accessible to their peers.

The printed edition combines articles that have been written throughout the year, by individuals ranging from Year 6 to Year 13, and showcases the depth and breadth of our pupils' curiosity and enthusiasm. The expansion of the theme from 'Seeing Science' to 'Science and the Senses' has resulted in pupils looking at the crossover of senses in Synesthesia and the idea of our sixth sense 'Proprioception', and as there has been less opportunity for pupils to carry out their own experiments, it has sparked more articles on scientific theories. I hope that next year our pupils will be able to get back into the lab to partake in the adventure of 'sciencing', and report on their own findings.

As I move on to new ventures at the end of this academic year, I can rest assured that the Scientific Harrovian will continue to flourish, alongside the increasing number of Harrow Hong Kong's student-led publications. It has been a privilege to be a part of a project that empowers pupils and inspires its readers to see (or sense) the world in new and different ways. My hope is that effective communication of Science will remind people of the awe and wonder for both creation and human ingenuity that can be so easily lost, and this will make us more mindful of the impact of our actions and choices, as we strive towards Leadership for a Better World.

Mrs E. Cliffe

Editor, Head of Scholars, Assistant House Mistress & Teacher of Chemistry

ABOUT THE SCIENTIFIC HARROVIAN

The Scientific Harrovian is the Science Department magazine, which allows scientific articles of a high standard to be published. In addition, the Scientific Harrovian is a platform for students to showcase their research and writing talents, and for more experienced pupils to guide authors and to develop skills that help prepare them for life in higher education and beyond.

Guidelines for all articles
All articles must be your own work and must not contain academic falsities. The articles must be factually correct to the best of your knowledge. The article must be concise, with good English, and it must be structured appropriately. Any claim that is not your personal finding must be referenced.

Joining the Team
Is there something scientific that excites you and that you’d like to share with others? Will you commit to mentoring budding Science writers? Do you have graphic design skills? Our team may have just the spot for you. Email the current Staff or Student Editor-in-Chief to apply for a position or for further information.


WELCOME!

It’s slightly mind-blowing to realise that we’re all living our own versions of reality. Everything we ever experience is through our senses and interpretations, and thus we don’t know, and will never be able to know, true reality. The theme of Issue V, ‘Science and the Senses’, aims to explore the different ways in which we are able to perceive the world around us, and to appreciate our bodies for being able to do so. Realising that we are so limited in our abilities makes it even more paramount to appreciate the beauty of the world; this is even more important now that we realise how fragile humanity is, with a single, microscopic virus resulting in the fall of entire economies.

We have included a new (!) section titled ‘Concepts in Science’, since we realised that students were gravitating towards wanting to explain these mystifying concepts underlying Science and Mathematics. These articles are accessible and genuinely show each student's love for the topic they are writing about.

Finally, this year’s edition is dedicated to the woman who made it all possible: Mrs Cliffe, who is moving on next year to Crossroads. None of this would have happened without her unwavering support, guidance, and reassurance when nothing was going as expected. So thank you, to the woman who dedicated her time to continuing the Scientific Harrovian after Mrs Smith left, and whose passion for sharing science resonates through this entire edition (and the Science department too - be sure to check out Mrs Cliffe’s space dress!). You and your relentless optimism will be missed.

I hope you take the time to appreciate all the hard work that the contributors have put into this. Happy exploration (of the Scientific world)!

Yours sincerely,
Stephenie Chen
Y12, Gellhorn, Editor-In-Chief

Copyright Notice
Copyright © 2020 by The Scientific Harrovian. All rights reserved. No part of this book or any portion thereof may be reproduced or used in any manner whatsoever without the express written permission of the publisher, except for the use of brief quotations in a book review.


CONTRIBUTORS

Callum Begbie

Haley Chan

Jasmine Chan Year 12, Wu Author & Editor

Year 9, Gellhorn Editor & Illustrator

Iris Cheung

Pierce Duffy Year 13, Sun Author

Diya Handa

Year 12, Anderson Author

Bernice Ho

Valerie Ho

Jasmine Hui

Annie Kim

Ryan Kong

Nicole Lau

Selyn Lim

Year 6, Darwin Illustrator

Year 12, Gellhorn Author

Year 8, Parks Editor

Year 8, Nightingale Editor

Year 12, Wu Editor

Year 12, Wu Illustrator

Year 11, Wu Editor

Amber Liu

Year 11, Wu Illustrator

Year 12, Keller Editor


Joy Chen

Year 6, Fry Author

Year 12, Peel Editor

Helen Ng

Year 9, Gellhorn Author


Reika Oh

Joaquin Sabherwal

Jarra Sisowath

Kayan Tam Year 12, Wu Illustrator

Year 10, Keller Editor & Illustrator

Year 13, Peel Author & Editor

Edward Wei

Vincent Wei Year 13, Peel Editor

Hoi Kiu Wong

Year 12, Wu Author & Illustrator

Jasmine Wong

Josiah Wu

Michelle Yeung

Samantha Yeung

Year 10, Gellhorn Illustrator

Year 12, Gellhorn Editor

Year 10, Peel Author

Year 12, Churchill Author

Callum Sharma

Year 6, Shackleton Author

Year 11, Churchill Author & Editor

Emily Tse

Year 12, Keller Editor

Year 8, Fry Editor


Dylan Sharma

Year 10, Churchill Author

Mike Tsoi

Year 10, Keller Author

Audrey Yuen

Year 10, Anderson Editor


CONTENTS

'SCIENCE AND THE SENSES'

Pigments .............................................................................................................................................. 10
Jasmine Chan
The Mysterious Sixth Sense of Human Beings: Proprioception ......................................................... 22
Bernice Ho
Synesthesia .......................................................................................................................................... 27
Jasmine Wong
The Nocebo Effect ............................................................................................................................... 32
Edward Wei
Visionaries For Vision ......................................................................................................................... 37
Joaquin Sabherwal
Bioluminescence: The Mystery Behind the Light ............................................................................... 42
Hoi Kiu Wong

CONCEPTS IN SCIENCE

Resonance ............................................................................................................................................ 55
Mike Tsoi
The Chicken McNugget Theorem ....................................................................................................... 64
Josiah Wu
Blackbody Radiation and Planck's Constant ....................................................................................... 71
Edward Wei
The Tale of: The Golden Ratio, Fibonacci Numbers, and Lucas Numbers ......................................... 75
Helen Ng
The Atom Through Time ..................................................................................................................... 86
Pierce Duffy

APPLICATION OF SCIENCE

Role of Natural Gas in Energy Transition ........................................................................................... 90
Diya Handa
Effects of Climate Change on Plant Growth ....................................................................................... 97
Dylan Sharma
Cut and Paste Genes ...........................................................................................................................100
Callum Sharma
Could Stem Cells Be The Next Breakthrough in Medicine? .............................................................104
Iris Cheung
Robotics and Artificial Intelligence: How Far Can and Should AI Take Us? ....................................110
Jett Li
Bitcoin Explained ...............................................................................................................................121
Josiah Wu


SECTION 1

SCIENCE AND THE SENSES


Science and the Senses: Pigments
Jasmine Chan (Year 12, Wu)

Pigments are everywhere. They can be found in the fabric of the clothes you are currently wearing, the hair on top of your head and even the food that you consume. As a person who loves colour, the realm of pigments fascinates me. In this article, I am going to introduce you to the basics of pigments, how they are classified and their biological applications.

1 INTRODUCTION TO PIGMENTS

1.1 What are pigments?
Pigments are chemical compounds that are coloured, black or white [1]. In nature they usually have specific functions, but we make extensive use of pigments in many industries to add colour to objects. Pigments are usually insoluble solids, but a liquid form is desirable for changing the surface colour of an object easily. Because of their insolubility a true solution cannot be formed, so a mixture of solid particles suspended in a liquid is used instead. When incorporated in this way, the pigment particles remain chemically and physically unchanged [2, 3]. However, in some cases pigments are used in their solid form, such as eyeshadows in the cosmetics industry.

1.2 Organic and Inorganic Pigments
Organic pigments are made from natural sources, such as plants. 'Organic' means that one or more carbon atoms are present in the pigment molecules, whereas inorganic pigments do not contain any carbon atoms. Organic pigments also usually contain small amounts of sulphur and nitrogen atoms [4]. There are three types of organic pigments: 'carbon' pigments (e.g. lamp black, made from the products of incomplete combustion of oils), 'lake' pigments (made by combining anionic dyes with metallic salts) and 'non-ionic' pigments (e.g. azo pigments) [5]. The largest group of organic pigments is the azo pigments, which contain one or more azo groups (-N=N-) and form red, orange and yellow pigments [6]. Inorganic pigments are usually made from the oxidation of chemicals that are not sourced from plants. One example is 'titanium white'; such white pigments are known as white extenders, which not only provide opacity and lighten the colour of other substances but also improve their properties, such as durability and strength [3, 4].

When comparing organic and inorganic pigments, inorganic pigments have a much larger average particle size, which makes them more opaque. As organic pigments have a larger surface area to volume ratio, they provide higher colour strength (a better ability to give other materials a colour of higher intensity) [7, 8]. However, inorganic pigments are longer-lasting than organic pigments. In addition, organic pigments are much safer to use, as inorganic pigments may cause serious side effects on the human body [3, 4].

Figure 1: List of organic and inorganic pigments [7]

1.3 Synthetic Pigments
The vast majority of pigments, both organic and inorganic, are synthetic. This means they are manufactured or processed from raw materials. Pigments are either produced by processing chemicals such as acids and petroleum compounds under intense heat and pressure, or are manufactured from other minerals to mimic the natural form of pigments [9]. Naturally derived organic pigments (e.g. from plants) tend to have earthy, nature-like colours; processing, however, can create unnatural colours that cannot be seen in nature. One example of a natural and synthetic pigment pair is ultramarine blue. Natural ultramarine blue is derived from a gemstone known as lapis lazuli (Fig. 2), whereas synthetic ultramarine blue is made by combining alumina, silica, sulphur and soda at high temperatures. Since these ingredients are the main constituents of lapis lazuli, the synthetically produced pigment is almost chemically identical to the natural one. Even so, because lapis lazuli has a crystalline structure, the natural pigment creates more depth of colour than the synthetic one [10].

Figure 2: A picture of lapis lazuli and ultramarine blue pigment (Source: eBay)

1.4 What gives a pigment its specific colour?
Pigments are chemical substances that reflect only a small range of wavelengths of the visible light spectrum. This gives each pigment its specific colour, as the wavelengths of light reflected determine what colour humans perceive (Fig. 3) [46]. Different pigments contain different substances that give them their colour [3]. Black pigments, being organic, contain carbon; as carbon is black, it gives these pigments their black shade. A variety of substances can give white pigments their shade, such as titanium dioxide (TiO2), calcium carbonate (CaCO3), calcium sulphate (CaSO4) and diatomaceous earth (SiO2). Iron oxides such as ochres, siennas and umbers create brown pigments with hints of yellow and orange. Different compounds of chromium are used to create chrome yellows (lead chromate, PbCrO4) [11], oranges (a mixture of lead chromate and lead molybdate, PbMoO4) [12] and greens (chromium sesquioxide, Cr2O3) [13], whereas different compounds of cadmium are used to create brilliant reds and oranges (mixtures of cadmium sulphide, CdS, and cadmium sulphoselenide) [14, 15] and yellows (a mixture of cadmium sulphide and zinc sulphide, ZnS) [16]. Among the blues, Prussian blue (iron(II,III) hexacyanoferrate(II,III), Fe7(CN)18) [17] is made using compounds of iron, while ultramarine blue is a sulphur-containing sodium aluminosilicate (Na2O·Al2O3·SiO2·S) [18, 19].

Figure 3: A Visual Representation of Wavelengths of Light and Its Corresponding Colour (Source: VectorStock)
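The relationship in Fig. 3 between reflected wavelength and perceived colour can be sketched as a rough lookup. This is an illustrative sketch only: the band boundaries below are approximate conventions that vary between sources, and the function name is my own.

```python
# Approximate visible-spectrum bands in nanometres.
# Boundaries are rough conventions and vary between sources.
BANDS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def perceived_colour(reflected_nm: float) -> str:
    """Name the colour a pigment appears to have when it chiefly
    reflects light of the given wavelength."""
    for low, high, name in BANDS:
        if low <= reflected_nm < high:
            return name
    return "outside the visible spectrum"

print(perceived_colour(530))  # a green-reflecting pigment, like chlorophyll
print(perceived_colour(650))  # a red-reflecting pigment, like cadmium red
```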


2 NAMING PIGMENTS
Pigments have a specialised naming system called the Colour Index. This system is regulated and standardised by the Society of Dyers and Colourists and the American Association of Textile Chemists and Colorists [20]. It is very important for people working with colour, especially for artists who want to know how the colour in the paint tube will appear and how it will mix with other colours [21]. The Colour Index system is necessary because paint companies tend to give the same pigment different names. For example, phthalocyanine blue can be called phthalo blue, Winsor blue or bocour blue, even though these are all the same pigment [22]. The Colour Index uses a dual classification system: the Colour Index Generic Name (CIGN), which is more common and easier to remember, and the Colour Index Constitution Number (CICN), which links back to the chemical structure of the pigment [23].

2.1 Colour Index Generic Name (CIGN)
Before this system was created, pigments were named after their technical chemical names. One example is quinophthalone yellow, which under the CIGN is now known as PY138 [24]. The letters at the beginning of a CIGN describe the general colour of the pigment (Fig. 4): P represents the word 'pigment' and the remaining letters represent the colour. The number in the pigment code is the individual pigment identifier, assigned for commercial use. If a pigment is no longer manufactured, its number is withdrawn. Currently, PY40 refers to a yellow pigment called aureolin, as it is the 40th entry in the list of yellow pigments. Because of the deletion of particular pigments over the years, the numbering may not be consecutive [22].

Figure 4: Pigment codes and the corresponding colour [22]
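The two-part structure of a CIGN (colour letters plus an identifier number, optionally followed by a colon number for chemical variants) makes it straightforward to take apart programmatically. The sketch below is illustrative only; the colour table covers common codes of the kind shown in Fig. 4, and the function name is my own, not part of the Colour Index.

```python
import re

# Common CIGN colour letters (a subset; the full Colour Index defines more)
COLOUR_CODES = {
    "R": "red", "O": "orange", "Y": "yellow", "G": "green",
    "B": "blue", "V": "violet", "Br": "brown", "Bk": "black", "W": "white",
}

def parse_cign(code: str) -> dict:
    """Split a Colour Index Generic Name like 'PY138' into its parts."""
    match = re.fullmatch(r"P(Br|Bk|[ROYGBVW])(\d+)(?::(\d+))?", code)
    if not match:
        raise ValueError(f"Not a recognised CIGN: {code}")
    colour, number, variant = match.groups()
    return {
        "colour": COLOUR_CODES[colour],
        "number": int(number),                         # individual pigment identifier
        "variant": int(variant) if variant else None,  # colon number, if any
    }

print(parse_cign("PY138"))   # quinophthalone yellow
print(parse_cign("PR48:1"))  # a colon-number variant of PR48
```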

2.2 Colour Index Constitution Number (CICN)
The Colour Index Constitution Number originally used five-digit numbers, assigned in multiples of five to leave space for future pigments that are chemically similar. However, since 1997 more pigments have been added to the CICN than anticipated, so to alleviate congestion new pigments are given six-digit numbers [25]. The numbers in the CICN correspond to a main chemical class (Fig. 5). Colon numbers are used to subdivide both CIGNs and CICNs, giving further identification of a pigment's chemical properties and structure [23]. For example, PR48 (C.I. 15865) uses a sodium salt whereas PR48:1 (C.I. 15865:1) uses a barium salt during formation.

Figure 5: Table of CICNs and the corresponding main chemical class [26]

3 CHARACTERISTICS OF PIGMENTS

3.1 Lightfastness
Lightfastness (also known as the permanence of colour under light) is the chemical stability of a pigment when exposed to light for a long duration. Light is a source of energy that can alter a pigment's colour through chemical changes; over time, pigments may become desaturated, tinted (whitened), shaded (darkened) or even disappear completely. Lightfastness testing was first developed by the American Society for Testing and Materials (ASTM), which conducted tests with various light sources such as sunlight, fluorescent UV lamps and cool white fluorescent lamps. However, these experiments were inaccurate, as the tests often exposed paint samples to intense radiation, causing a rapid change in the appearance of the pigments, and the amount of light was unknown or not kept constant. To solve this problem, the American Association of Textile Chemists and Colorists (AATCC) developed the 'blue wool scale' (Fig. 6). The scale uses two blue wool textile fading cards, each consisting of eight strips of blue wool that fade at different rates [20]. Ultraviolet radiation in light causes the pigment in blue wool to fade. The blue pigments selected for the scale differ in that each strip takes 2-3 times longer to begin fading than the strip above it. This means that the strip at the top has the least permanence and the strip at the bottom the most (Fig. 7) [27].

Figure 6: Blue Wool Scale [27]


Figure 7: Strips on the Blue Wool Scale Corresponding to the Time Taken to Fade [20]
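The 2-3 times spacing between strips means that fading-onset times grow roughly geometrically down the scale. The sketch below illustrates this progression; the starting time and the growth factor are made-up assumptions for illustration, not published AATCC values.

```python
def blue_wool_fade_times(first_hours: float = 10.0,
                         factor: float = 2.5,
                         strips: int = 8) -> list:
    """Illustrative onset-of-fading times for the eight blue wool strips.

    Each strip is assumed to take `factor` times longer to begin fading
    than the strip above it (the 10-hour start and 2.5x factor here are
    invented for illustration).
    """
    times = []
    t = first_hours
    for _ in range(strips):
        times.append(t)
        t *= factor
    return times

for rating, hours in enumerate(blue_wool_fade_times(), start=1):
    print(f"Strip {rating}: fading begins after ~{hours:,.0f} hours of exposure")
```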

Figure 8: Blue Wool Scale. Left: exposed to 800 hours of UV; right: exposed to no light [28]

One card is exposed to a specific wavelength of ultraviolet light and the other is kept in the dark [20] (Fig. 8). Two samples are also made of the pigment being tested: one is placed in the dark and the other is exposed to ultraviolet light at the same time as the blue wool scale. When a reference pigment on the blue wool scale begins to discolour, every test pigment that has started to discolour by that point is rated as having the same level of lightfastness as that reference pigment. The pigment is then given an ASTM (American Society for Testing and Materials) rating (Fig. 6).

3.2 Tinting Strength
Tinting strength (also known as tinting power) is a measure of the relative colouring power of a pigment in the form of a paste [30]. A tint test shows the ability of a pigment (in paste form) to maintain its strength (its intense colour) when mixed with a fixed volume of white pigment paste. The darker or more vibrant the colour at the end of the test, the higher the tinting strength of the pigment (see Fig. 11). For example, phthalocyanine blue has roughly 40 times the tinting strength of ultramarine blue. This means that if phthalocyanine blue were used in high concentrations, it would completely overpower a far greater range of colours mixed with it than ultramarine blue would [31, 32].

Figure 10: Visual Example of Phthalocyanine (Top) and Ultramarine Blue (Bottom) (Source: Pigment Tokyo)

Tinting strength is always determined relatively; there is no absolute scale. Hence, when a tint test is performed, the medium used must be recorded. A reference pigment is selected and labelled as 100%; other pigments are then given a higher or lower percentage depending on their tinting strength relative to it (the lower the number, the lower the tinting strength) [33].

The tinting strength values of two similar pigments can also be used to calculate the concentration of one pigment required to give the same intensity as the other. For pigments with a high tinting strength, such as phthalocyanine, companies will most likely reduce the concentration in the application substance, since a high concentration is not required [30].
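Under the simplifying assumption that colour intensity scales linearly with pigment concentration, a relative tinting-strength percentage translates directly into the concentration needed to match a reference. A minimal sketch (the function name is my own):

```python
def concentration_for_match(ref_conc: float, rel_strength_pct: float) -> float:
    """Concentration of a test pigment needed to match the colour intensity
    produced by `ref_conc` of the reference pigment (rated 100%).

    Assumes intensity scales linearly with concentration -- a
    simplification of real tint behaviour.
    """
    return ref_conc * 100.0 / rel_strength_pct

# A pigment rated at 40% needs 2.5x the reference concentration:
print(concentration_for_match(1.0, 40.0))   # -> 2.5
# A pigment rated at 200% needs only half:
print(concentration_for_match(1.0, 200.0))  # -> 0.5
```

This also shows why a very strong pigment such as phthalocyanine is used at reduced concentration: matching a weaker reference requires only a fraction of the amount.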



Figure 11: Example of a Tint Test with 12 Cadmium Reds from Different Brands [32]

Figure 12: An Example of Refraction [34]

3.3 Hiding Strength
Hiding strength (also known as opacity) is the covering power of a pigment when applied to another substance. The opacity of a pigment varies with its chemical structure and particle size [33]; it is heavily influenced by the pigment's refractive index and particle size.

3.3.1 Refractive Index
Different pigments absorb different wavelengths of light to create 'colour'. When light travels through a substance its velocity decreases, causing the beam of light to be refracted. The refractive index is calculated using the formula n = (sin i) / (sin r), where n is the refractive index, i is the angle of incidence (the angle between the light beam entering the substance and the line perpendicular to its surface) and r is the angle of refraction (the angle between the refracted beam and that same perpendicular) (Fig. 12). As the refractive index increases, more of the light is scattered, so opacity increases. One special case is white pigments, which have little to no absorption of light; their hiding strength therefore relies entirely on the scattering of incident light. As mentioned in section 1.2, inorganic pigments have a higher hiding strength than organic pigments because they have a higher refractive index [35].

3.3.2 Particle Size
The particle size of a pigment is the average size of its particles. Most synthetic pigments are manufactured in a range of particle sizes so that they can be used for different applications. As the particle size decreases, the tinting strength increases, because smaller particles have a larger surface area to volume ratio and so produce a more intense colour [32].
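The formula n = (sin i) / (sin r) can be evaluated directly. A small sketch, with illustrative angles of my own choosing:

```python
import math

def refractive_index(angle_incidence_deg: float, angle_refraction_deg: float) -> float:
    """n = sin(i) / sin(r), with both angles measured from the normal
    (the line perpendicular to the surface)."""
    i = math.radians(angle_incidence_deg)
    r = math.radians(angle_refraction_deg)
    return math.sin(i) / math.sin(r)

# Light entering a material at 45 degrees and refracting to 25 degrees:
n = refractive_index(45.0, 25.0)
print(f"n = {n:.2f}")  # a higher n means more scattering, hence greater opacity
```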



3.3.3 Dispersibility
Dispersibility is a measure of how easily a pigment can spread and distribute across an area when in a liquid medium. Some pigments are more susceptible to clumping due to the electrostatic attraction between their particles. Clumping lowers dispersibility, which in turn reduces hiding strength [32].

4 TOXICITY OF PIGMENTS
Pigments must be handled with care, as some are extremely poisonous and may cause serious damage to the health of the user [36].

4.1 Exposure to Pigments
The toxicity of pigments varies depending on the type of exposure: inhalation, ingestion or absorption through the skin.

4.1.1 Inhalation
All pigments in solid form are hazardous, as they can travel into and irritate the lungs. When handling pigments, it is recommended to wear a respirator mask designed to protect against toxic dust. In addition, if the user smokes while handling pigments, pigment dust can transfer to the cigarette and be inhaled, which is extremely dangerous [37].

4.1.2 Ingestion
While working with pigments, traces may end up on the user's skin. If not washed away properly, this residue may be ingested while eating. To lower the chance of consuming pigments, food and drink should not be kept in the same room as pigments, so that not even a trace can land on them. It is also highly recommended to wash hands thoroughly after handling pigments [37].

4.1.3 Absorption Through Skin
Pigments can enter the body through cuts and scratches in the skin. Some pigments cause inflammation (dermatitis) or other allergic reactions on contact. To prevent pigments entering exposed wounds, it is recommended to cover any cuts with a bandage or other seal [37].

4.2 Toxicity Levels
The toxicity of pigments is evaluated by the Art and Creative Materials Institute (ACMI) alongside various independent toxicologists. Tested pigments are given either an 'AP' (approved product) or a 'CL' (toxic) label [36, 37]. Pigments can be classified as 'highly toxic' (serious injury/death), 'moderately toxic' (permanent minor injury), 'slightly toxic' (temporary minor injury) or 'non-toxic' (no detectable injury) [36, 38].

4.3 What causes pigments to be toxic?
Some pigments contain elements or substances that are themselves toxic, which makes the pigment toxic as well [36, 38].

4.3.1 Antimony (Sb)
Antimony has moderate to high toxicity. It irritates the eyes and respiratory tract on contact. It also affects the functioning of enzymes, often causing indigestion. In extreme exposures, the worst-case scenario is respiratory failure, which may lead to death.

4.3.2 Arsenic (As)
Arsenic has high toxicity and is a suspected carcinogen. It is corrosive and can affect the peripheral nervous system. In large quantities it may cause lung cancer and kidney damage.


Figure 12: Pictures of metals in Toxic Pigments [Source: periodictable.com]

Figure 13: Table of Highly Toxic Pigments [40, 41]

4.3.3 Cadmium (Cd)
Cadmium has high toxicity and is a suspected carcinogen. It irritates the respiratory tract and eyes on contact. When ingested, symptoms such as abdominal cramps and extreme nausea can occur. Overexposure to cadmium is associated with lung and prostate cancer.

4.3.4 Chromium (Cr)
Chromium is a suspected carcinogen and has moderate to high toxicity. Symptoms include dermatitis, respiratory irritation and severe enteritis (inflammation of the intestine).

4.3.5 Lead (Pb)
Lead has high toxicity and is a reproductive toxin. It can cause anaemia, gastroenteritis, nervous system damage and many other serious effects. If a pregnant woman comes into contact with lead, it may affect the neurological development of the foetus.

4.3.6 Mercury (Hg)
Mercury has high toxicity. When a large amount is inhaled, it leads to respiratory irritation and pulmonary oedema (excess fluid in the lungs).

5 BIOLOGICAL PIGMENTS
Apart from the pigments used to add colour to objects such as paints, there are a variety of pigments in plants and animals that serve specific functions in those organisms. Biological pigments, also known as biochromes, can be produced within the organism or introduced from the environment [42].

5.1 In Plants
Some plant biochromes provide colour, such as that of flower petals, but the most important pigments in plants are those controlling photosynthesis, a vital process for growth and development. These are known as photosynthetic pigments [1, 43].


5.1.1 Photosynthetic Pigments
Photosynthesis is the process by which plants (and some other autotrophs, such as algae) capture the energy of sunlight to convert carbon dioxide and water into glucose and oxygen. Since each pigment can only absorb a narrow range of wavelengths, more than one type of pigment is needed to maximise the light absorbed from the sun [1].

5.1.1.1 Chlorophyll
Chlorophylls are green pigments found in chloroplasts. They appear green because green light is reflected while the other wavelengths are absorbed; this is what makes plants look green to the eye. The pigment molecule has a central magnesium atom surrounded by a porphyrin ring (a nitrogen-containing structure). The shape of the porphyrin ring allows electrons to move around freely, and the ring can gain or lose electrons, providing electrons to other substances. This enables chlorophyll to trap light energy from the sun so the plant can photosynthesise and grow [1, 44].

Figure 14: Structure of Chlorophyll (Source: Wikipedia)

5.1.1.2 Carotenoids
Carotenoids are red, orange or yellow pigments synthesised in the plastids of plant cells. Carotenoids cannot convert light energy into chemical energy as chlorophyll does; instead, they pass the absorbed energy on to chlorophyll, and are hence known as accessory pigments [1, 45]. Carotenoids also help protect plants from excess light, since overexposure can destroy proteins in the cells [47]. In autumn, leaves turn warm reddish colours because chlorophyll breaks down, allowing the carotenoids to become more visible [45].

Figure 15: General Structure of Carotenoids (Source: Wikipedia)

5.1.2 Other Plant Biochromes Aside from photosynthetic pigments, there are other pigments present in plants.

5.1.2.1 Flavonoids
Flavonoids are vibrant pigments found in plants and fruits [48]. These pigments act as a visual signal for pollinators such as bees to locate the plant, or for organisms that consume fruits to disperse the seeds, both for reproductive reasons [49].

5.1.2.2 Betalain
Betalains also play an important role in pollination, just like flavonoids. However, they are found only in a niche group of plants such as beetroot. Betalain has two subgroups: betacyanin, which gives a deep red-violet colour, and betaxanthin, which gives a yellow-orange colour [49].

5.2 In Humans
A variety of pigments are vital in aiding humans in daily activities. Some are extremely important for survival, while others have no function in the body.

Figure 16: General Structure of Flavonoids (Source: Wikipedia)

Figure 17: Structure of Betalain (Source: Wikipedia)


5.2.1 Melanin
Melanin is the main pigment found in human skin. It is a yellow-brown pigment made by melanocytes to protect humans from overexposure to the sun. Melanin is found near the surface of the skin, where it absorbs dangerous ultraviolet wavelengths before they penetrate the skin layer; without it, skin cancer can develop. Melanin is also found in the hair and eyes. The proportion of melanin in the hair and in the iris determines their colour: the higher the concentration of melanin, the darker the hair or iris [49].

Figure 18: Structure of Melanin (Source: Wikipedia)

5.2.2 Haemoglobin

Haemoglobin is a red pigment found inside red blood cells, where it carries oxygen for respiration, a vital energy-releasing process in humans. Human blood is red because of the presence of haemoglobin. Each haemoglobin molecule has four polypeptide chains: two alpha chains and two beta chains. Each chain contains a haem group with an iron ion, and this iron allows haemoglobin to bind with oxygen [49].

Figure 19: 3D Image of Haemoglobin (Source: ANSTO)

5.2.3 Pigments in the Eye

Pigments in the eye, including rhodopsin and iodopsin, are essential for human vision.

5.2.3.1 Rhodopsin

Rhodopsin is a pigment found in the photoreceptive rod cells of the retina. It allows humans to see in dim light and to see shades of grey by converting light into an electrical signal. In bright light, rhodopsin undergoes structural changes that result in electrical signals being sent to the brain, and it is regenerated in the dark [49, 50].

Figure 20: Diagram of the Eye (Source: everdaypsych.com)

5.2.3.2 Iodopsin

Iodopsin is a pigment also located in the retina, but in the photoreceptive cone cells (rather than the rods), where it enables colour perception. The human eye has three types of cone cells: S cones, M cones and L cones. S cones are sensitive to short wavelengths of light (e.g. blue), M cones are most responsive to medium wavelengths (e.g. green) and L cones are most sensitive to long wavelengths (e.g. red) [49].

Figure 21: Visual Imagery of a Rod and Different Types of Cones (Source: NCBI)
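As a toy illustration of the three cone types, the snippet below assigns a wavelength to whichever cone's peak sensitivity lies nearest. The peak values (S ≈ 420 nm, M ≈ 534 nm, L ≈ 564 nm) are approximate textbook figures, and real cones respond over broad, overlapping curves rather than at single points:

```python
# Approximate peak sensitivities of the three human cone types (nm).
# A deliberate simplification: real responses are overlapping curves.
CONE_PEAKS = {"S": 420, "M": 534, "L": 564}

def most_responsive_cone(wavelength_nm: float) -> str:
    """Return the cone type whose peak sensitivity is closest to the input."""
    return min(CONE_PEAKS, key=lambda cone: abs(CONE_PEAKS[cone] - wavelength_nm))

print(most_responsive_cone(470))  # short wavelength (blue)
print(most_responsive_cone(530))  # medium wavelength (green)
print(most_responsive_cone(650))  # long wavelength (red)
```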

CONCLUSION

Pigments are fascinating substances. They are so ingrained in our lives that they often go unnoticed, but we would not be able to see or perceive colour if they did not exist. Pigments are extremely important to the survival of all organisms, as well as to the beauty of our surroundings, making our lives more lively and exciting. To conclude this article, I would like to thank pigments for all they have done to make our lives easier and better.



BIBLIOGRAPHY

[1] “Photosynthetic Pigments.” UC Museum of Paleontology, UCMP Berkeley, 9 July 1997, ucmp.berkeley.edu/glossary/gloss3/pigments.html
[2] “Difference between Organic Pigments and Inorganic Pigments.” Koel Colours Blog, Koel Colours Private Limited, 8 May 2018, www.koelcolours.com/blog/pigments/difference-organic-pigments-inorganic-pigments/
[3] The Editors of Encyclopaedia Britannica. “Pigment.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 25 Mar. 2019, www.britannica.com/technology/pigment
[4] “Understanding Organic and Inorganic Pigments and Their Areas of Applications.” Vipul Dye-Chem Ltd., 24 May 2017, vipulorganics.com/blog/2017/05/24/understanding-organic-and-inorganic-pigments-and-their-areas-of-applications/
[5] “Colouration with Pigments.” Fundamentals and Practices in Colouration of Textiles, Second Edition, by J. N. Chakraborty, WPI Publishing, 2015, pp. 202–213
[6] “Additives.” Handbook of Thermoplastic Elastomers, by Jiri George Drobny, Elsevier, 2014, pp. 17–32
[7] “Pigments.” Edited by Jennifer Moore-Braun, BASF SE, www.dispersions-pigments.basf.com/portal/basf/ien/dt.jsp?setCursor=1_561069
[8] NeelimaNarendra. “Computation of Colour Strength.” Chromatic Notes, 19 May 2013, drnsg.wordpress.com/2013/05/19/computation-of-color-strength/
[9] Saitzyk, Steven. “Types of Pigments.” True Art Information, 30 June 2013, www.trueart.info/?page_id=520
[10] MacEvoy, Bruce. “Synthetic Organic Pigments.” Handprint, 8 Jan. 2015, www.handprint.com/HP/WCL/pigmt1d.html
[11] O'Hanlon, George. “Artists Materials - Chrome Yellow: A Primary Color with a Brief History.” Natural Pigments Inc., 23 Sept. 2013, www.naturalpigments.com/artist-materials/chrome-yellow-paint/
[12] “Orange Molybdate Pigment.” Natural Pigments Inc., www.naturalpigments.com/orange-molybdate-pigment.html
[13] “Chromium Oxide Green.” Natural Pigments Inc., www.naturalpigments.com/chromium-oxide-green.html
[14] “Cadmium Red Pigment.” Natural Pigments Inc., www.naturalpigments.com/cadmium-red-pigment.html
[15] “Cadmium Orange Pigment.” Natural Pigments Inc., www.naturalpigments.com/cadmium-orange-pigment.html
[16] “Cadmium Yellow Pigment.” Natural Pigments Inc., www.naturalpigments.com/cadmium-yellow-pigment.html
[17] “Prussian Blue Pigment.” Natural Pigments Inc., www.naturalpigments.com/prussian-blue-pigment.html
[18] “Ultramarine Blue (Red Shade) Pigment.” Natural Pigments Inc., www.naturalpigments.com/ultramarine-blue-red-shade.html
[19] “Ultramarine Blue (Green Shade) Pigment.” Natural Pigments Inc., www.naturalpigments.com/ultramarine-blue-green-shade.html
[20] MacEvoy, Bruce. “Labeling, Lightfastness & Toxicity.” Handprint, 8 Jan. 2015, www.handprint.com/HP/WCL/pigmt6.html#lightfast
[21] Caves, Julie. “Pigments and Colour Names.” Jackson's Art Blog, 9 Mar. 2016, www.jacksonsart.com/blog/2015/01/14/pigments-colour-names/
[22] Myers, David. “Paint and Pigment Reference Table Key.” Artiscreation, www.artiscreation.com/pigment_key.html
[23] GH. “Introduction to the Colour Index™: Classification System and Terminology.” Society of Dyers and Colourists, Apr. 2013, colour-index.com/introduction-to-the-colour-index
[24] “Paliotol™ Yellow K 0961 HD.” BASF SE, www2.basf.us/additives/pdfs/Paliotol_Yellow_K0961HD.pdf
[25] GH. “Chemical Constitutions in the Colour Index.” Society of Dyers and Colourists, Sept. 2013, colour-index.com/cicn-explained
[26] “CICN Groups & Sub-Groups.” Society of Dyers and Colourists, colour-index.com/cicn-groups-sub-groups
[27] “The Blue Wool Scale.” Materials Technology Limited, www.drb-mattech.co.uk/uv%20blue%20wool.html
[28] Johnson, Nicholas. “Blue Wool Lightfastness Standard References.” Tanguay Photo Mag, 10 Apr. 2019, www.tanguayphotomag.biz/digitalprinting/blue-wool-lightfastness-standard-references.html
[29] Jürgens, Martin C. ASTM and Lightfastness of Media. Getty Publications, 2009, www.artcons.udel.edu/mitra/Documents/ASTM-and-Lightfastness.pdf
[30] Briggs, T. R. “The Tinting Strength of Pigments.” The Journal of Physical Chemistry, ACS Publications, 1918, p. 1
[31] Schadler, Koo. “Learn the Characteristics of Pigments.” Artists Network, 31 Aug. 2011, www.artistsnetwork.com/art-mediums/watercolor/learning-the-characteristics-of-pigments/
[32] MacEvoy, Bruce. “The Material Attributes of Paints.” Handprint, 8 Jan. 2015, www.handprint.com/HP/WCL/pigmt3.html
[33] Teichmann, Günther. “Practical Methods for Determining the Tinting Strength of Pigments in Concrete.” Technical Service Department, www.sept.org/techpapers/40.pdf
[34] Hodgkins, Leila. “Refraction.” Schoolphysics, 2013, www.schoolphysics.co.uk/age16-19/Optics/Refraction/text/Refraction_/index.html
[35] O'Hanlon, George. “Why Some Paints Are Transparent and Others Opaque.” Natural Pigments Inc., 6 Dec. 2013, www.naturalpigments.com/artist-materials/transparent-opaque-paints/
[36] Saitzyk, Steven. “Characteristics of Pigments.” True Art Information, 30 June 2013, www.trueart.info/?page_id=513
[37] “Toxicity of Pigments.” The Notebook, 1993, www.noteaccess.com/MATERIALS/ToxicityPigmt.htm
[38] Kinnally, Edward. “Painting Safety; Painting Hazards.” Art Prints, 2007, www.pixelatedpalette.com/artmaterialssafety.html
[39] Babin, Angela, and Diane Johnson. “Metal Pigments Used in Paints and Inks.” Nontoxicprint, 2019, www.nontoxicprint.com/metalpigments.htm
[40] “Toxicity of Paint Pigments.” Captain Packrat, captainpackrat.com/furry/toxicity.htm
[41] Babin, Angela. “Pigment Safety.” NontoxicHub, Nontoxicprint, 2019, www.nontoxichub.com/pigment-safety
[42] http://www.gopetsamerica.com/substance/biological-pigments.aspx
[43] https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/plant-pigment-0#D
[44] The Editors of Encyclopaedia Britannica. “Chlorophyll.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 17 Mar. 2020, www.britannica.com/science/chlorophyll
[45] “Carotenoid Pigments: Definition & Structure.” STUDY.COM, study.com/academy/lesson/carotenoid-pigments-definition-structure.html
[46] “Biological Pigments in Plants - Types of Plant Pigments: Uses of Pigments.” BYJU'S, 23 Dec. 2019, byjus.com/biology/pigments/
[47] “Plant Pigment.” The Gale Encyclopedia of Science, Encyclopedia.com, 23 Apr. 2020, www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/plant-pigment-0#D
[48] “Plants: Causes of Color.” Plants | Causes of Color, www.webexhibits.org/causesofcolor/7H.html
[49] “Plant Pigment - Flavonoids.” Flavonoids - Anthocyanins, Color, Flavonols, and Occur - JRank Articles, science.jrank.org/pages/5304/Plant-Pigment-Flavonoids.html
[50] Rogers, Kara. “Rhodopsin.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 6 Feb. 2018, www.britannica.com/science/rhodopsin



'PIGMENTS' Reika Oh (Year 10, Gellhorn)


The Mysterious Sixth Sense of Humans:

Proprioception Bernice Ho (Year 6, Fry)

INTRODUCTION

Imagine losing the senses in your body and no longer being aware of your own movements. Would you be scared? This is the experience that a man named Ian Waterman [1] must deal with for the rest of his life. A rare autoimmune infection attacked all his sensory neurons below the neck, leaving him unable to feel touch or any other sensation from the neck down. Without the feedback that comes with movement, Ian was unable to coordinate his movements in a meaningful way; with his eyes shut, he was completely unable to coordinate his muscles at all.

Proprioception is sometimes called the “sixth sense,” alongside the well-known five basic senses: vision, hearing, touch, smell and taste. Proprioceptive sensations are a mystery because we are largely unaware of them. This article explains how proprioception works in our body, how important it is to our daily life, and what we can do to improve it.

WHAT IS PROPRIOCEPTION?

Proprioception is the medical term for the ability to sense the orientation of our body in the environment; in other words, our ability to sense exactly where our body is [2]. It works unconsciously, allowing us to move quickly and freely without having to consciously think about where we are in our surroundings. Let's try to feel our proprioceptive senses: close your eyes and use a finger to point to your nose. Was it easy? Most of us can accomplish this little task without difficulty. Why? PROPRIOCEPTION.

THE SCIENCE BEHIND PROPRIOCEPTION

Proprioception involves a complex signalling process that transfers proprioceptive signals from our body parts to the brain. It is a constant feedback loop within our nervous system, telling our brain what position we are in and what forces are acting upon our body at any given point in time.
Proprioception is mediated by proprioceptors: tiny sensors located throughout the body, especially in the skin, joints and muscles, together with mechanosensory neurons. Multiple types of proprioceptors work in our body (Table 1). These receptors are activated during distinct behaviours and encode distinct types of information, such as limb velocity and movement, the load on a limb, and limb limits. Thousands upon thousands of proprioceptive signals are sent to our brains through the peripheral and central nervous

Figure 1: Muscle spindles as proprioceptors to transmit the signal of muscle stretch to the brain through the spinal cord. (Modified from Stevenlgourley,“Muscle spindles,” https://www.youtube.com/watch?v=F871bBWS4oY)



Table 1: Types of proprioceptors [3]

systems at the same time. Our brains do a tremendous job of integrating all these proprioceptive signals with other sensory inputs, such as touch, taste and smell, to create a full and complete representation of body position, movement and acceleration.

Let's take the muscle spindle as an example of a proprioceptor. When we stretch our arm, the muscle in our arm relaxes and gets thinner. The muscle spindle is stretched along with it and senses that the muscle has been stretched, firing a signal to the brain through the spinal cord. The brain processes the signal and generates awareness of the arm's position (fig 1).

FACTORS THAT WEAKEN THE PROPRIOCEPTIVE SENSES

Reduced proprioception occurs when the proprioceptors do not work properly in receiving and sending information about the body and its environment to the brain. Conditions such as stroke, brain injury and arthritis can weaken the proprioceptive sense. Our proprioceptive capabilities can also be impaired when joints are injured, such as with ligament sprains, and the pain associated with joint injury and inflammation impairs proprioceptive accuracy further. When we lose proprioception of a joint after a sprain, the joint may feel unstable and may even give out. Apart from injury, ageing has been shown to reduce proprioception. This age-related reduction can cause impaired postural control and increase the risk of falls in older individuals [4].
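The feedback loop described above can be caricatured in a few lines of code: a simulated "brain" repeatedly compares a noisy proprioceptor reading against the intended position and issues a correcting motor command. All of the numbers here are invented for illustration; real neural control is vastly more complex:

```python
import random

def move_arm(target_deg, steps=50, gain=0.3, sensor_noise=0.5, seed=0):
    """Drive a simulated arm angle toward target_deg using sensed feedback."""
    rng = random.Random(seed)
    angle = 0.0
    for _ in range(steps):
        sensed = angle + rng.gauss(0, sensor_noise)  # proprioceptor reading
        error = target_deg - sensed                  # brain compares to goal
        angle += gain * error                        # motor command corrects
    return angle

print(round(move_arm(90), 1))  # settles near 90 degrees
```

Even with noisy sensors, the loop homes in on the target, which is the essence of why Ian Waterman, lacking this feedback, could not coordinate movement with his eyes closed.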


PROPRIOCEPTIVE DYSFUNCTION

One important area of research is proprioceptive dysfunction. Researchers believe that proprioceptive dysfunction is one of the major causes of the sensory processing disorders which affect children [5]. Children with sensory processing disorders are unable to use their bodies effectively. They feel as if they are not in control of their body, and as a result they may have difficulty concentrating in school [6]. Children suffering from proprioceptive dysfunction are uncoordinated and have difficulty performing basic childhood tasks and activities; they don't experience the world like the majority of people. Without proper awareness of muscle status and joint position, children may show [7]:

1. Poor motor planning and body awareness, such as difficulty understanding personal space or boundaries when playing with others
2. Poor self-regulation skills, such as difficulty attending to a task, mood swings, and difficulty with sleep
3. Poor grading of movement (i.e. judging how much pressure is needed to complete a task)
4. Poor postural stability, such as resting the head on the desk while working, poor muscle tone, or being unable to balance on one foot
5. Sensory-seeking behaviours, such as tapping or shaking the feet while sitting, chewing, pushing or hitting others, or writing too hard

Children with proprioceptive dysfunction, including many with Attention Deficit Hyperactivity Disorder (ADHD), are often full of energy, and it is extremely difficult for them to focus on an activity for a prolonged period of time [8]. They can easily become frustrated, give up and lose self-confidence, and their struggles with proprioceptive dysfunction manifest in their learning environment.

HOW TO IMPROVE OUR PROPRIOCEPTIVE SENSES

"Practice makes perfect" is key to improving the body's proprioceptive senses. Those suffering from proprioceptive dysfunction can enhance their proprioception through regular training.
Scientists have shown that proprioceptive training produced an average improvement in body movement of 52% across all outcome measures [9]. Proprioceptive training can be broadly categorised into five types of techniques [10].

1. Active movement and balance training. These kinds of proprioceptive training require the patient to actively move limbs, segments of limbs or the whole body. A common example of a balance exercise is the use of an unstable surface, e.g. Foam, Harbinger or BOSU equipment (fig 2).

Figure 2: Balance exercises to improve proprioceptive senses using Foam, Harbinger & BOSU equipment [11]


2. Passive movement training. As its name indicates, the patient's body parts are moved passively by an apparatus or machine (fig 3).

3. Somatosensory stimulation training. This type of training applies external stimulation to the patient's body, such as electrical and magnetic stimulation, acupuncture, and vibration [13].

4. Somatosensory discrimination training. This type of training involves patients exploring opposing somatosensory stimuli and differentiating between them; for example, a patient explores different objects with their hand, discriminates between textures, or gauges the position of their wrist or ankle joints.

Figure 3: Passive movement training to improve a patient's proprioceptive sense [12].

5. Combined / multiple systems. This approach employs two or more of the above methods to further enhance the training results.

SUMMARY

Proprioception is mysterious and involves a complex signalling process in our body; scientific understanding of it is still limited, and much remains to be discovered. Without proprioception we would not be able to sense our body parts accurately, and it affects every part of our daily lives. Children suffering from proprioceptive dysfunction experience learning delays in school and have to overcome more challenges than those without it. Although there are ways to help them improve their proprioceptive senses, these methods are still very limited. Thanks to many brilliant scientists and researchers, I believe that in the future we will be able to do more to help them and to crack more mysteries of our sixth sense.



BIBLIOGRAPHY

[1] Simon Gandevia and Uwe Proske, “Proprioception: The Sense Within,” TheScientist, Aug 31, 2016, https://www.the-scientist.com/features/proprioception-the-sense-within-32940
[2] Khan Academy, “Proprioception and kinesthesia,” https://www.khanacademy.org/science/health-and-medicine/nervous-system-and-sensory-infor/somatosensation-topic/v/proprioception-kinesthesia
[3] Prakash Jha, Irshad Ahamad, Sonal Khurana, Kamran Ali, Shalini Verma and Tarun Kumar, “Proprioception: An Evidence Based Narrative Review,” Res Inves Sports Med. 1(2). RISM.000506. 2017, p14, https://pdfs.semanticscholar.org/60b0/e483e5b17f2ffec286dedfc38bba79d384bb.pdf
[4] Prakash Jha, Irshad Ahamad, Sonal Khurana, Kamran Ali, Shalini Verma and Tarun Kumar, “Proprioception: An Evidence Based Narrative Review,” Res Inves Sports Med. 1(2). RISM.000506. 2017, p15, https://pdfs.semanticscholar.org/60b0/e483e5b17f2ffec286dedfc38bba79d384bb.pdf
[5] Sensory Processing Disorder, “Sensory Processing Disorder Through The Eyes of Dysfunction,” https://www.sensory-processing-disorder.com/sensory-processing-disorders.html
[6] Sensory Processing Disorder, “Sensory Processing Disorder Through The Eyes of Dysfunction,” https://www.sensory-processing-disorder.com/sensory-processing-disorders.html
[7] Sonoran Sun Pediatric Therapy, “What is Proprioception and Why is it Important?,” https://sonoransunpediatrictherapy.com/2017/11/16/what-is-proprioception-and-why-is-it-important/
[8] Integrated Learning Strategies Learning Corner, “Proprioceptive Dysfunction Causes Sensory Seeking and Sensory Avoiding Behavior,” https://ilslearningcorner.com/2016-04-proprioceptive-dysfunction-causes-sensory-seeking-and-sensory-avoiding-behavior/
[9] Aman JE, Elangovan N, Yeh IL, Konczak J. “The effectiveness of proprioceptive training for improving motor function: a systematic review.” Front Hum Neurosci. 2015;8:1075. Published 2015 Jan 28, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4309156/
[10] Ausmed, “Five Evidence-Based Ways to Hone Proprioception,” https://www.ausmed.com/cpd/articles/proprioception-training
[11] Dr. Anjana's Physiorehab, “Sense of Bodily Perception – Proprioception,” http://physiorehab.in/sense-of-bodily-perception-proprioception/
[12] Ausmed, “Five Evidence-Based Ways to Hone Proprioception,” https://www.ausmed.com/cpd/articles/proprioception-training
[13] Wang, Y, Cao, L, Hao, D, Rong, Y, Yang, L, Zhang, S, Chen, F & Zheng, D 2017, ‘Effects of force load, muscle fatigue and extremely low frequency magnetic stimulation on EEG signals during side arm lateral raise task’, Physiological Measurement, vol. 38, no. 5, p. 745, https://www.ncbi.nlm.nih.gov/pubmed/28375851



'SYNESTHESIA'

Emily Tse (Year 10, Keller)


Synesthesia: Seeing Sound and Hearing Colours Jasmine Wong (Year 10, Keller)

Imagine a world where colours flash before your eyes when you hear a sound. Imagine a world where every single letter of the alphabet has a specific colour attached to it. Surprisingly, some of us don't need to imagine this: the phenomenon is experienced by roughly 4% of the world's population.

We all probably remember Skittles' 2009 commercial slogan 'Taste the Rainbow'. Looking at Skittles, you might imagine the red ones tasting like strawberries and the yellow ones like lemons. But according to the neuropsychologist Don Katz, Skittles are in fact all the same flavour; they are scented differently to make you believe they are flavoured differently, playing with your senses by tempting you to 'taste the rainbow' rather than visualise it. Be that as it may, being fooled in this way does not make you a synesthete.

Synesthesia is a perceptual phenomenon in which stimulation of one sensory or cognitive pathway leads to involuntary experiences in a second sensory or cognitive pathway. It is a rare neurological condition in which sensory modalities are crossed (for example, a sense of smell in response to a certain visual stimulus), and it is believed to be polygenic (involving many genes). In contrast to the word 'anesthesia', which means a loss of sensation, the word 'synesthesia' translates from Greek as joined sensations: 'syn' means together and 'esthesia' means sensation, or to feel.

There are over 60 types of synesthesia. The most common form is grapheme-colour synesthesia, in which people associate letters and numbers with specific colours, genders or even personalities. Dr Cytowic describes synesthetes as 'seeing the similar in the dissimilar': to some synesthetes, the number 3 may be a boy who is athletic and fun, but to others it could be a girl who is studious and quiet. Other forms of synesthesia involve seeing or feeling musical notes as colours or textures (sound synesthesia).
In addition, when synesthetes hear different phonemes they may experience different tastes. This is a very rare type called lexical-gustatory synesthesia: to these synesthetes, the words 'college' or 'passage' could taste like sausage because the words have similar endings.

Figure 1: An example of what a person with grapheme-colour synesthesia may see


It is important to note that synesthesia is more of a trait than a disorder. Disorders such as delusions and schizophrenia can be tested for, with a positive or negative result, whereas synesthesia cannot. Once synesthetes establish colour associations for letters or numbers in childhood, these typically remain fixed for life. The childhood influences involve imprinting memories through exposure to cultural artifacts such as calendars, food names, musical notes and time units; so although the underlying trait is thought to have a genetic basis (a single-nucleotide difference in a person's DNA can alter perception), exposure to different influences during childhood can change the colours synesthetes associate with words, producing subjective differences between individuals.

HOW DOES IT WORK?

There are four lobes of the brain, roughly divided up by function (fig 2). The frontal lobe controls important cognitive skills such as emotional expression, problem-solving and impulse control. The occipital lobe participates in visual processing and is responsible for analysing content such as shapes and colours. The parietal lobe is in charge of movement and sensation. Finally, the temporal lobe helps us speak, hear and understand the complicated combination of speaking and listening. The exact mechanism of synesthesia is not agreed upon by all neuroscientists; however, most believe that the regions of the brain responsible for hearing words and processing them lie next to each other.

Figure 2: A basic diagram of the cerebral cortex

THE DIFFERENT THEORIES OF SYNESTHESIA

There are several different theories for the cause of synesthesia:

1. Areas of the brain have anatomical names, but we often refer to them by their Brodmann number, a system identified in 1909 by the German neurologist Korbinian Brodmann based on cytoarchitectonics (the study of the cellular composition of the nervous system's tissue as seen under the microscope). Of the 52 regions in the cerebral cortex, area 37, known as the fusiform gyrus, is believed to be part of the colour-processing area of the occipital lobe, but it is also involved in auditory processing, which is controlled by the temporal lobe. This theory hypothesises that in synesthetes, neurons and synapses (microscopic gaps/junctions between nerve cells, which impulses pass across via diffusion) belonging to one sensory system cross paths with another sensory system. It is unclear why this occurs; however, scientists believe these cross-connections are present in everyone at birth as an inherited biological feature, but some people lose them through maturation processes while others retain them because of their different genetic composition. Studies by synesthesia researchers such as Carolyn Johnson Atwater show that coloured-hearing synesthetes (those who see colour in response to an auditory stimulus) display activity in several areas of the visual cortex when


they hear certain words, so areas of the visual cortex associated with processing colour are activated automatically when synesthetes hear words.

2. Richard Cytowic's research shows that the limbic system (a set of structures in the brain that deal with emotions and memory, predominantly responsible for regulating our emotional responses) is primarily responsible for synesthetic experiences.

3. Scientists at Baylor University have identified a specific region of DNA on chromosome 16 associated with grapheme-colour synesthesia (or coloured sequence synesthesia).

4. Other scientists theorise that we are born with our senses mixed up, but over time the neural bridges between the senses shut down, which is why we normally experience our everyday senses separately. On this theory, synesthetes' neural bridges never fully shut down, causing them to experience different senses simultaneously.

5. Another theory attributes synesthesia to neurochemistry. It proposes that synesthetes have neurotransmitters (chemicals which enable the transmission of nerve impulses and the overall functioning of the brain) located in regions of the brain other than their usual ones, possibly because synesthetes lack chemicals called inhibitors, which would normally prevent this from occurring.

6. Simon Baron-Cohen, who studies and researches synesthesia at the University of Cambridge, suggests that synesthesia results from an overabundance of neural connections in areas of the brain such as V4 (the third cortical area in the ventral stream, receiving strong feedforward input from V2).

Figure 3: Comparison of the hyperconnectivity in the brain between a typically developing child and a child with synesthesia (Source: Cell Reports, Keown et al)

7. Peter Grossenbacher, a psychologist at Naropa University in Colorado, believes that synesthesia happens when a single-sense area of the brain gets feedback from multisensory areas. In a synesthete, this information gets jumbled up, unlike in a typical brain, where information from multisensory areas returns only to the appropriate single-sense area.

DRUGS AND SYNESTHESIA

Genuine synesthesia is a consistent, predictable and quantifiable phenomenon. Drugs can bring about some aspects of synesthesia by affecting the brain, but this is completely different from the genuine phenomenon. Since the psychedelic drug LSD was discovered by Albert Hofmann in the 1930s, there have been numerous anecdotal reports of LSD triggering synesthesia. However, initial studies had a number of methodological problems, leaving researchers uncertain about LSD's potential to induce real synesthesia. A group of scientists from the University of London recruited nine men and one woman, all deemed physically and psychologically healthy, for a study on LSD and synesthesia. In the initial testing session, they were all injected with a saline solution as a placebo before completing psychological tests to measure synesthetic experience. 7-10 days


later, all participants were invited back, but this time they were injected with 40-80 micrograms of LSD. This produced two verified instances of grapheme-colour synesthesia and sound-colour synesthesia, because LSD temporarily alters the subject's neurochemistry. Nonetheless, the effect does not meet the consistency and specificity requirements of genuine synesthesia.

CONCLUSION

There is still so much more about synesthesia and the way our brains function that we can explore. Although synesthesia can cause problems in a synesthete's daily life, such as distracting flashes of colour and mixed-up numbers, scientists believe that synesthesia can reveal something about human consciousness, for example by helping to solve the 'Binding Problem': how the human mind binds all of our perceptions together so that we observe the same thing. Without synesthesia, many of our favourite pieces of art and music would not exist: Van Gogh would never have produced his famous Starry Night, nor would the hit song 'Bad Guy' by Billie Eilish and her brother Finneas O'Connell have existed. All of these artists have chromesthesia (when they hear a sound, they see colour) and used this phenomenon to their advantage to produce something wonderful for the world to see and hear.

BIBLIOGRAPHY

[1] Wikipedia contributors. "Synesthesia." Wikipedia, The Free Encyclopedia, 10 Apr. 2020. Web. 15 Apr. 2020. https://en.wikipedia.org/wiki/Synesthesia
[2] SciShow. "Hearing Colors, Seeing Sounds: Synesthesia." YouTube, 31 July 2012, https://www.youtube.com/watch?v=vEqmNX8uKlA
[3] Kate Kershner. "How Synesthesia Works." 22 June 2007. HowStuffWorks.com. 15 April 2020. https://science.howstuffworks.com/life/inside-the-mind/emotions/synesthesia.htm

[4] Mayim Bialik. "Do You Hear in Color?! Explaining Synesthesia | Mayim Bialik feat. Life Noggin." YouTube, 31 August 2017. https://www.youtube.com/watch?v=0e4zSrGpGt0

[5] Eric W. Dolan. "Does LSD induce genuine synesthesia - or something different?" PsyPost, 17 May 2016. https://www.psypost.org/2016/05/lsd-induce-genuine-synesthesia-something-different-42812
[6] Melissa Lee Phillips. "Synesthesia." Neuroscience for Kids. http://www.neuroanatomy.wisc.edu/selflearn/Synesthesia.htm
[7] Dr. Veronica Gross. "The Synesthesia Project." https://www.bu.edu/synesthesia/faq/

[8] Alina Bradford. "What is Synesthesia?" Live Science, 18 October 2017. https://www.livescience.com/60707-what-is-synesthesia.html

[9] Richard E. Cytowic. "What color is Tuesday? Exploring synesthesia - Richard E. Cytowic." TED-Ed, YouTube, 10 June 2013, https://www.youtube.com/watch?v=rkRbebvoYqI
[10] Richard E. Cytowic. Synesthesia (MIT Press Essential Knowledge series), p. 27.

[11] Brainemy. "The Brain Explained | Cerebral Cortex - Frontal Lobe - Parietal Lobe." YouTube, 11 July 2018. https://www.youtube.com/watch?v=gGeZaEABacE&t=85s
[12] Dr. Sayan Adhikari. "Synesthesia - The blending of senses." Good Morning Science, Bio-Sci, Health Science, 16 June 2018. https://gmsciencein.com/2018/07/16/synesthesia-blending-senses/



'THE NOCEBO EFFECT' Selyn Lim (Year 11, Wu)



The Nocebo Effect Edward Wei (Year 10, Peel)

The nocebo effect is a hypothesis proposing that negative expectations of a procedure or situation can cause, or worsen, symptoms. It is a little-known feature of society that could be responsible for a variety of ailments around the world [1]. The term nocebo, Latin for "I will harm", was coined by Walter Kennedy in 1961 [2]. You may be more familiar with its counterpart, the placebo effect (Latin for "I will please"): beneficial effects brought about by expectations of positive outcomes when taking drugs or undergoing treatments. Both nocebo and placebo are considered nonspecific effects of medical treatments, meaning that they are effects of an intervention that do not arise from its specific active mechanism; the effect is called placebo when positive and nocebo when negative [3]. Both are seen as psychobiological phenomena arising from the therapeutic context [4].

Figure 1: Picture depicting the nocebo effect (Source: https://braininlabor.com/2018/01/23/nocebo-effect/)

THE MECHANISM BEHIND NOCEBO
Several factors may cause a nocebo effect in patients by prompting negative expectations of a situation in various ways. Negative expectancies can be induced verbally (for example, by being told of side effects), by the behaviour of the healthcare provider, or by the patient-physician relationship. In one study (Howe et al. 2017), patients were induced with allergic reactions through a histamine skin test and then given a cream with no active ingredients by a healthcare provider who displayed either high or low warmth, and high or low competence. They were then told that the cream would either increase or reduce the allergic reaction. Those treated by a provider with high warmth and high competence responded more strongly in line with the expectations they had been given about the cream, and vice versa [6]. In addition, the choice of words can increase or decrease the extent of nocebo. For example, prior to the injection of a local anaesthetic, the word "pain" resulted in more pain than "a cool sensation", and "you will feel a bee sting" resulted in more pain than "it will numb the area" (Wells 2012) [7]. The characteristics of the patient can also affect the chances of the nocebo effect taking place. Those with aggressive and competitive personalities tend to experience nocebo more, being 94% more likely to report side effects (Drici et al. 1995) in an experiment where 52 participants (26 males and 26 females) were separated into a competitive-aggressive group and a non-competitive, passive group according to the Bortner Rating Scale. Each subject then received a drop of placebo in one eye and a drop of the


active drug in the other eye, four times a day for a week. However, it is also possible that the higher vulnerability of competitive and aggressive people could be attributed to their generally more stressful lives rather than being a direct result of their personality [8]. Women also seem to be more susceptible to nocebo than men, as seen in an experiment where 48 healthy men and women received a salient oral stimulus along with the verbal suggestion that it would enhance nausea. They were rotated once as a control, then separated into two equal groups of women and men. One group was given the stimulus and then rotated once per day for 3 days, to examine how they responded to conditioning. The other group was given the stimulus and then rotated 5 times for 1 minute each, to test how they responded to expectancy. The experiment concluded that women responded more strongly to conditioning-induced nocebo, while men responded to expectancy-induced nocebo but to a lesser extent [9]. It has also been noted that optimists respond more to placebo while pessimists respond more to nocebo [10]. Furthermore, nocebo effects have emotional and neurobiological correlates. One study (Schienle et al. 2018) involved 38 women being shown disgusting, fear-inducing and neutral images while presented with an odourless stimulus of distilled water with green food colouring, together with the verbal suggestion that the fluid smelled bad. 76% of the women (29 of 38) perceived a slightly unpleasant odour, which intensified when they viewed the disgusting images, while the remaining 9 reported no response [11]. Past experiences of the patient can also contribute to nocebo [12]. In one study (Witthoft 2012), participants were randomly assigned to watch a television report about the adverse effects of WiFi and then exposed to a fake WiFi signal. 54% of the 147 participants reported symptoms which they attributed to the fake exposure. Moreover, participants who had previously been exposed to similar reports on the adverse effects of WiFi were 22% more likely to experience nocebo in the experiment [13].

ETHICAL IMPLICATIONS OF NOCEBO

Figure 2: The extent of the nocebo effect in increasing risks of adverse effects (Source: ResearchGate)
Due to its nature, nocebo raises many ethical issues, because testing it can be quite painful, and even potentially dangerous, for subjects. In one case, a 26-year-old male took 29 inert capsules believing he was overdosing on an antidepressant. He subsequently experienced hypotension requiring intravenous fluids to maintain adequate blood pressure, until the true nature of the capsules was revealed, after which the effects dissipated within 15 minutes [14]. Furthermore, nocebo creates quite a dilemma. On one hand, physicians are obligated by law to inform the patient of any adverse effects. On the other hand, it is the physician's duty to minimise the risks of treatment, which suggests that informing patients could itself cause harm by triggering nocebo and increasing the risk of harmful effects. There are several strategies to ease this dilemma. The first is using more positive language to describe adverse effects. The second is permitting non-information: allowing the patient to decide whether or not to learn the detrimental effects, while cautioning them that knowing the effects could increase the risks of potential issues. Finally, properly educating the patient has also been shown to reduce the effects of nocebo [4, 15]. These are not


perfect solutions, though; their effectiveness varies across different cultures and different personalities. Some patients may subconsciously interpret the withholding of information as a sign of how dangerous the treatment may be [16].

THE NEUROBIOLOGY OF NOCEBO
For quite some time, neurologists believed that the presence of nocebo responses was evidence of symptom exaggeration, or evidence that symptoms were psychogenic rather than organic. However, this has been disputed, as many animal trials (and, less commonly, human trials) have shown that nocebo can induce actual physiological changes [17]. Such experiments include exposing guinea pigs or humans to things they are allergic to, like chocolate, while presenting a stimulus, such as an auditory cue, once or several times a day. The subjects thus become conditioned to have an allergic response every time they encounter the conditioned stimulus, even when the substance they are allergic to is not present [18]. Another piece of evidence supporting the neurobiological mechanisms of nocebo is that certain antagonist drugs (chemicals that inhibit a physiological action) and agonist drugs (chemicals that trigger or enhance a physiological action) can stop nocebo responses. One experiment found that proglumide, an antagonist chemical, could prevent nocebo-induced hyperalgesia (a condition where patients develop an increased sensitivity to pain) [17]. The effects of nocebo seem to be related to anxiety, as shown in the work of Fabrizio Benedetti, a professor of physiology and neuroscience at the University of Turin Medical School in Italy. The expectancy of something bad happening induces anxiety and causes the secretion of the neurotransmitter cholecystokinin, or CCK (CCK also functions as a hormone that stimulates the digestion of protein and fat). CCK facilitates negative feelings like pain, so when CCK antagonists were given, they stopped nocebo-generated pain (the aforementioned proglumide is a CCK antagonist). However, CCK antagonists do not work against placebo, indicating that while the effects of placebo and nocebo are similar, the mechanisms behind them are slightly different [19].

CONCLUSION

Figure 3: Venn diagram of the different factors that could play a role in placebo and nocebo effects (Source: PubMed)
Nocebo is the lesser-known "evil twin" of placebo. It is caused by many different factors that can be divided into three components: the expectations, beliefs and past experiences of the patient; the expectations and beliefs of the physician; and the beliefs and expectations engendered within the relationship between the


two parties. Its effects are surprisingly powerful: so long as one's belief is strong enough, it can even cause death. Understanding how nocebo works is critical in medicine, both to reduce the risks of the adverse effects of various treatments and to improve the evaluation of the true effects of new drugs.

BIBLIOGRAPHY

1. https://www.sciencedirect.com/science/article/pii/S0091743596901243?via%3Dihub
2. https://psychcentral.com/blog/the-other-side-of-the-placebo-effect/
3. https://ptpodcast.com/understanding-specific-and-non-specific-effects/
4. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3401955/
5. https://www.sciencedirect.com/sdfe/pdf/download/eid/1-s2.0-S0091743597902280/first-page-pdf
6. https://psycnet.apa.org/buy/2017-10534-001
7. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3352765/
8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1364964/
9. https://www.sciencedirect.com/science/article/abs/pii/S0022399908004716
10. https://www.youtube.com/watch?v=opc3tYHbKV4
11. https://link.springer.com/article/10.1007/s11682-017-9675-1
12. https://www.sciencedirect.com/sdfe/pdf/download/eid/1-s2.0-S0091743597902280/first-page-pdf
13. https://www.sciencedirect.com/science/article/abs/pii/S0022399912003352
14. https://www.sciencedirect.com/science/article/abs/pii/S0163834307000114?via%3Dihub
15. https://www.researchgate.net/publication/302780910_Nocebo_Effects_The_Dilemma_of_Disclosing_Adverse_Events
16. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5655643/
17. https://neuro.psychiatryonline.org/doi/full/10.1176/appi.neuropsych.13090207
18. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3130401/
19. https://www.vice.com/en_us/article/59xe9b/the-power-of-the-nocebo-effect-v26n1

GENERAL
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4804316/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6176716/
https://www.frontiersin.org/articles/10.3389/fpsyt.2019.00555/full#B30



Visionaries for Vision Joaquin Sabherwal (Year 6, Shackleton)

INTRODUCTION
We would all like to be young and healthy forever. However, as people inevitably age, their hair begins to lose its colour, their joints become stiffer, and their skin begins to wrinkle. The wear and tear of our bodies is unavoidable as we get older, but perhaps one of the worst aspects of the ageing process is the possibility of losing one's sight.

HOW DOES THE EYE WORK?
Light reflects off objects and into our eyes. These rays of light enter the eye via the cornea, the pupil and the lens, and then focus on the retina. The retina is a light-sensitive tissue at the back of the eye, which sends signals through the optic nerve to the brain. It is these signals that become the images that we see [1].

Figure 1: Light reflects off objects and into our eyes [1][2]
The macula is the area of the retina that gives us our central vision, and the peripheral retina surrounds the macula and gives us our side vision. On the retina, there are about 130 million tiny cells called photoreceptors [2]. These rod- and cone-shaped cells are input cells; they turn light into electrical signals that are passed to the retina's output cells, which send them through the optic nerve to the brain. When light enters the eye, it strikes these cells, triggering a chemical reaction that sends signals to the brain [3].

Figure 2: The Retina and Photoreceptors [3]



MACULAR DEGENERATION AND RETINITIS PIGMENTOSA

A great number of people in the world live with conditions that make them visually impaired, but the main cause of age-related vision loss is macular degeneration. It is a disease that gradually deteriorates the central vision (the macula), leaving a blur or a black hole at the focal point of your view. Retinitis pigmentosa is another common disease that damages the photoreceptors. This disease isn't restricted to the elderly: it is genetically inherited and affects 1 in 4,000 people in the USA, regardless of age. Both macular degeneration and retinitis pigmentosa attack the photoreceptors, and in the most advanced forms of these diseases, ordinary everyday tasks (like reading, driving, writing, or even just walking around safely) become almost impossible without assistance. Although the damage to the photoreceptors caused by these diseases is serious, the rest of the retina's neurons and the cells that transfer electrical signals are usually still intact, which means there is hope of treating vision impairment. If scientists can invent something that imitates the function of photoreceptors, then the signals can be transmitted to the brain in the way that they are supposed to be. Thankfully, that is beginning to happen as technology advances.

THE INNOVATIONS SO FAR
Researchers have been attempting to invent prosthetic devices that can mimic and fulfil the purpose of the rod- and cone-shaped cells so that blind people might get their vision back. By implanting a tiny device that interacts with the tissue that makes up the retina, scientists have found a way to take the information captured by a camera lens embedded in a pair of glasses that the patient wears, and convert it into electrical signals that are transferred to the brain via the optic nerve. However, most prosthetic devices provide only limited vision, such as bright lights and high-contrast edges. Argus II, a device created by a company called Second Sight, helps patients distinguish patterns of light and identify outlines of basic shapes and movement, helping them to navigate the world more independently. Once the Argus II is implanted at the back of the eye by an experienced retinal surgeon, it works together with external glasses containing a built-in camera and a small portable pocket computer, the vision processing unit (VPU). The VPU processes what the camera is seeing into instructions that are sent back to the glasses, and an antenna on the glasses transmits the information wirelessly to the implant in the eye. The implant consists of another antenna (which receives the information) and an array of electrodes. When the information is received, the electrode array fires small pulses of electricity which are sent along the optic nerve to the brain, and the patient learns to interpret the patterns of light. Argus II has already been commercialised and is available to patients [4][5].

Figure 3: Argus II [5]



CRACKING THE CODE
One of the lead scientists working on restoring sight to the visually impaired is an American neuroscientist called Sheila Nirenberg. A neuroscientist is someone who studies the nervous system, including the brain, the spinal cord and nerve cells throughout the body. Her main focus is to decipher and learn the language that the brain understands. To understand the importance of this language, it is essential to examine how the information is actually processed when the brain receives an image.

Figure 4: Healthy Retina [6]
Figure 4, from left to right: when an image enters the eye, it lands on the front-end cells of the retina, the photoreceptors, and is then processed through the retinal circuitry, which extracts the information. That information is converted into code in the form of electrical pulses, which then travel to the brain. Figure 5 demonstrates how the electrical pulses are sent in a specific pattern that tells the brain what the image is.

Figure 5: Healthy retina sending electrical pulse codes [6]
It's a very complicated process: every millisecond, these patterns of pulses are constantly changing, along with the world around you. When the front-end cells of the retina shut down due to a degenerative disease like macular degeneration, the retinal circuitry is the next to shut down, and although the output cells are left intact, they are no longer transmitting any code. Prosthetic devices, or bionic eyes, such as Argus II, are certainly innovative; however, they are limited to simple images. The device allows the patient to see spots of light and basic shapes, but it is far from providing patients with normal representations of faces, buildings, landscapes, and so on. The issue lies in the way the stimulators produce code. Sheila Nirenberg suggests that to make the image clearer, there is a need to drive the stimulators to produce normal retinal output. She says, "having the code is only half the story. The other part is having some way to communicate that code into the cells so that they can send it to the brain."



Nirenberg is currently working on a unique prosthetic system that is made up of an encoder and a transducer. Like the Argus II, Nirenberg's device relies on a camera embedded in a pair of glasses, and the information captured by the camera is sent to the device wirelessly. However, this system does not send electrical pulses directly along the optic nerve. Instead, the encoder converts the information into a code that closely matches the code a healthy retina uses to communicate with the brain, and a transducer drives the output cells of the eye (the ganglion cells) to fire electrical signals as the code specifies. Essentially, the image goes through a set of equations which mimics the retinal circuitry, and comes out as electrical pulses which then travel along the optic nerve.

Figure 6: A damaged retina, with a device replacing the photoreceptors [6]
What also makes this system different is the pattern of the electrical signals that are fired. In order to develop this prosthetic system, Nirenberg and her team have had to delve into neuroscience to study what this 'code' (the pattern of electrical signals) looks like. Trying to understand this code is like learning a new language. The more they learn about how the brain understands this language, the better they can mimic it and therefore communicate with the brain as a normal retina would. In this way, they are literally "cracking the code": finding ways to get the firing patterns to match the activity of normal retinal output, and thus getting closer to giving the patient an accurate representation of the image in front of them [6][7][8].

CONCLUSION
There is hope for visually impaired people in the future! As technology advances, our understanding of how the body works deepens, and we may soon be able to create bionic versions of our body parts. Sheila Nirenberg suggests that her device could potentially be enhanced with ultraviolet or infrared sensitivity, which would give blind people better sight than fully sighted people. This means that not only would people have their sight restored, they would have even better eyesight, as their bionic eye would have abilities that a natural eye doesn't. This innovation might also further our understanding of the way the brain relates to our body parts, and the same principles discovered here might be used to help people with other problems. The potential power of being able to communicate with the brain means that the same strategy could be applied to the auditory system and the motor system, to help people who have auditory issues or motor disorders. By jumping over damaged circuitry, in the same way as has been done with the retinal circuitry, scientists could treat numerous other physical impairments - these visionaries for vision could potentially change the lives of more than just the visually impaired! [6]



BIBLIOGRAPHY
[1] EyeSmart — American Academy of Ophthalmology, "How the Eye Works" https://www.youtube.com/watch?v=8e_8eIzOFug - 18/03/2018
[2] EyeSmart — American Academy of Ophthalmology, "How the Eye Works and the Retina" https://www.youtube.com/watch?v=Sqr6LKIR2b8 - 30/11/2010
[3] Science Art, "How Retina Works Animation-Physiology of the Eye Videos" https://www.youtube.com/watch?v=GkJrQmVRkYM - 24/01/2019
[4] Wei-Haas, M., (2017, Oct 19) "Could This Bionic Vision System Help Restore Sight?" https://www.smithsonianmag.com/innovation/could-bionic-vision-system-help-restore-sight-180965305/
[5] SecondSight, "Discover Argus II" https://www.secondsight.com/discover-argus/ - 2019
[6] Nirenberg, S., (2013, Jun 26) "TED-Ed - A prosthetic eye to treat blindness - Sheila Nirenberg" https://www.youtube.com/watch?v=RR08NcoBlms
[7] Nirenberg, S., (2018, Nov 30) "Cracking The Code To Treat Blindness | Mach | NBC News" https://www.youtube.com/watch?v=76cWyxzX7ds
[8] Nirenberg, S., & Pandarinath, C., (2012, May 7) "Retinal prosthetic strategy with the capacity to restore normal vision" https://physiology.med.cornell.edu/faculty/nirenberg/lab/papers/PNAS-2012-Nirenberg-1207035109.pdf

GENERAL
Azvolinsky, A., (2018, Apr 10) "Vision Restored: The Latest Technologies to Improve Sight" https://www.the-scientist.com/news-opinion/vision-restored-the-latest-technologies-to-improve-sight-30104
Barker, P., (2018, Oct 1) "5 inventions bringing sight to the visually impaired" https://www.redbull.com/int-en/inventions-to-help-visually-impaired-people
Boseley, S., (2018, Mar 19) "Doctors hope for blindness cure after restoring patients' sight" https://www.theguardian.com/society/2018/mar/19/doctors-hope-for-blindness-cure-after-restoring-patients-sight
Gallagher, J., (2012, May 14) "Light-powered bionic eye invented to help restore sight" https://www.bbc.com/news/health-18061174
Jeffries, A., (2016, Apr 5) "The Technology That Could Make Blind People See Again" https://youtu.be/SJUWPD62MTI
Loeffler, J., (2018, Dec 28) "5 Medical Innovations That May Help Cure Blindness" https://interestingengineering.com/5-medical-innovations-that-may-help-cure-blindness
McDougall, B., (2018, Jun 8) "Australian world-first bionic eye invention ready for sight" https://www.kidsnews.com.au/technology/australian-worldfirst-bionic-eye-invention-ready-for-sight/news-story/93146b354bdb2b25331664e7300f53c2



'BIOLUMINESCENCE' Kayan Tam (Year 12, Wu)



Bioluminescence: The Mystery Behind the Light Hoi Kiu Wong (Year 12, Wu)

INTRODUCTION
Have you ever taken a stroll along the beach and had something intriguing catch your eye? A microscopic creature, glowing in the pitch-black sea? Unfortunately, I have only seen this 'scene' in documentaries; but that was already enough to grab my attention. As I have researched bioluminescence, I have discovered how essential it is in the lives of millions of different species - from insects like fireflies to jellyfish, the list seems endless. But the question remains: what is the science behind it?

THE UNIQUENESS OF BIOLUMINESCENCE
Bioluminescence is defined as 'the emission of light from living organisms (such as fireflies, dinoflagellates, and bacteria) as the result of internal, typically oxidative chemical reactions' [1]. The light produced by these chemical reactions is what sets bioluminescence apart from other natural optical phenomena, such as fluorescence and phosphorescence. Fluorescent molecules, unlike bioluminescent molecules, do not produce their own light; they absorb photons, which excites electrons to a higher energy state. As these electrons relax to their ground state, they re-emit their energy at a longer wavelength. This excitation and relaxation happens very quickly, so fluorescent light is only seen while the specimen is being illuminated [2] (Figure 1). Similarly, phosphorescence also re-emits light; however, it does so over a longer time scale rather than immediately, and continues after the excitation has stopped [3]. It is important to understand the distinction between fluorescence and bioluminescence, because the two are often confused, partly because in some organisms bioluminescent energy is used to cause fluorescence [4].
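Why fluorescent re-emission must occur at a longer wavelength follows from the photon-energy relation; the short derivation below is standard physics rather than anything specific to the sources above:

```latex
E = \frac{hc}{\lambda}
\quad\Rightarrow\quad
E_{\text{emitted}} < E_{\text{absorbed}}
\;\Longleftrightarrow\;
\lambda_{\text{emitted}} > \lambda_{\text{absorbed}}
```

Because some of the absorbed energy is lost non-radiatively as the electron relaxes, the emitted photon carries less energy, and hence a longer wavelength, than the absorbed one; this gap between the excitation and emission wavelengths is known as the Stokes shift.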

Figure 1. Difference between bioluminescence and fluorescence. In bioluminescence, light is a by-product of an oxidation reaction (Source: Thermofisher Scientific [11])


DISTRIBUTION OF BIOLUMINESCENCE
An apparent peculiarity of bioluminescence is that there is no obvious rule or reason to the distribution of luminous species among microbes, protists, plants and animals. Harvey (1940, 1952) expressed it this way: "It is as if the various groups had been written on a blackboard and a handful of damp sand cast over the names. Where each grain of sand strikes, a luminous species appears. It is an extraordinary fact that one species in a genus may be luminous and another closely allied species contains no trace of luminosity." Putting this into context, the phylum Cnidaria, which includes jellyfish, and the phylum Ctenophora (commonly known as 'comb jellies') have received the most sand: many members of the former phylum and nearly all of the latter are luminous, whereas certain other phyla contain no luminous organisms at all. There are also cases in closely related genera of the same family where one genus is luminous while the others are not. Another peculiarity of bioluminescence is that far more bioluminescent organisms are marine creatures than terrestrial or freshwater inhabitants. There are so few non-marine bioluminescent organisms that they can easily be listed here: fireflies and luminous beetles, earthworms, the millipede Luminodesmus, the limpet Latia, the snail Quantula, the glow worms Arachnocampa and Orfelia, and luminous mushrooms [12].

CHEMISTRY OF BIOLUMINESCENCE

Figure 2: Luminous Fish of the Deep Sea. Drawing by Holder, Charles Frederick (1892), Along the Florida Reef (Source: Wikimedia Commons)
Bioluminescence has captured the interest of mankind since ancient times - descriptions of light emitted by fireflies can be found in folklore, songs and numerous works of literature (Figure 2), and studies of bioluminescence had therefore already begun by the early 17th century. However, its chemical study dates only from the early 20th century, as research methods and technology became progressively more advanced [5].
LUCIFERIN-LUCIFERASE REACTION
Bioluminescence can be produced by the oxidation of the molecule luciferin (Figure 3), the 'oxidizable substrate' [8], and the rate of the reaction between luciferin and oxygen can be controlled by a catalysing enzyme, either a luciferase or a photoprotein [2] such as aequorin [7]. The luciferin-luciferase reaction was first demonstrated by Dubois in 1885, when he made two aqueous extracts from the luminous West Indies beetle Pyrophorus. One of the extracts was prepared by crushing the light organs of the


beetle in cold water, resulting in a luminous suspension whose luminescence gradually weakened and finally disappeared. The other extract was prepared in the same way but with hot water before being cooled; the use of hot water immediately put out the light. However, the two extracts produced light when mixed together. Dubois then repeated the experiment with extracts of the clam Pholas dactylus and obtained similar results. He therefore concluded that the cold water extract contained a specific, heat-labile(1) enzyme necessary for the light-emitting reaction, and introduced the term 'luciferase' for this enzyme. He also concluded that the hot water extract contained a specific, relatively heat-stable substance called 'luciférine' (now spelled luciferin). Hence, the luciferin-luciferase reaction is an enzyme-substrate reaction that emits light [12].

Figure 3: D-luciferin - the luciferin found in fireflies (Source: Wikipedia)
Another person who made a significant contribution to the study of bioluminescence was E. Newton Harvey (1887-1959). In 1917, Harvey conducted experiments on bioluminescence but found that the light observed was weak and short-lasting. Following up in 1947, McElroy found, working with the firefly system, that the light-emitting reaction requires ATP (adenosine triphosphate) as a cofactor(2) [12]. Adding ATP to mixtures of luciferin and luciferase resulted in a bright, long-lasting light. This was not a simple experiment at the time, as ATP was not commercially available, so the discovery was a huge breakthrough for the chemical study of bioluminescence [5]. In 1949, McElroy and Strehler further found that luminescence reactions require another cofactor, the ion Mg2+ (or Ca2+), in addition to luciferin, luciferase and ATP [5]; in photoproteins, binding of such an ion causes a conformational change, giving the organism a way to precisely control light emission [2]. There is a large chemical variety of luciferins, as they have arisen in many evolutionary lineages [2]; the bioluminescence reactions therefore all vary, except that they all require oxygen at some point. Harvey stated in his 1952 book, "It is probable that the luciferin or luciferase from a species in one group may be quite different chemically from that in another", after he discovered that the luciferin of the clam Pholas differs from that of the ostracod Cypridina (Harvey, 1920) [12]. However, this belief did not last long, as it was discovered that a chemically identical luciferin can appear in unrelated organisms. For instance, around 1960, a luciferin identical to that of Cypridina was discovered in the luminous fishes Parapriacanthus and Apogon.
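McElroy's findings can be summarised in the overall firefly reaction as it is commonly written today; the scheme below is the standard textbook form rather than anything quoted from the sources above:

```latex
\text{D-luciferin} + \text{ATP} + \text{O}_2
\;\xrightarrow{\;\text{luciferase},\ \text{Mg}^{2+}\;}\;
\text{oxyluciferin} + \text{AMP} + \text{PP}_i + \text{CO}_2 + h\nu
```

Here $h\nu$ denotes the emitted photon (yellow-green light of around 560 nm in many firefly species), and $\text{PP}_i$ is the pyrophosphate released when ATP activates the luciferin.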
Moreover, in the 1970s, it was discovered that coelenterazine, a luciferin, is the light emitter in many other groups of organisms: protozoans, jellyfish, crustaceans, molluscs, arrow worms and vertebrates. The likely explanation is that the luciferin is acquired exogenously(3) through the diet; it is relatively easy to obtain, as it is present in both luminous and non-luminous marine animals. However, the complete biosynthesis pathway has not been fully worked out for any marine luciferin, so their ultimate origins are still unknown [2]. Furthermore, delving deeper into its chemical structure, some investigators found that highly purified extracts of Cypridina luciferin contain a -COCH2OH side chain, which is oxidatively degraded to -COOH in the luminescent reaction. Hence, Cypridina luciferin is readily oxidized by many oxidizing agents, but produces light only when oxidized in the presence of luciferase [8]. It is still not known how many types of luciferin there are, but the better-studied ones are D-luciferin (found in fireflies), coelenterazine (the most widely used luciferin in the sea) and Vargulin/Cypridina luciferin [2] (Figure 4).


Figure 4: The different types of luciferins used by marine organisms. Shown are the molecular structures of the specific luciferins, their mode of operation, and the taxonomic groups known to use them. In the last column, taxa containing unique, characterized luciferins are listed above the dashed line, whereas those that are unknown or poorly understood are below the dashed line [2]
PHOTOPROTEINS
It was widely believed that bioluminescence derived only from luciferin-luciferase reactions until the discovery of the photoprotein aequorin in the jellyfish Aequorea in 1962. Aequorin emits light in aqueous solutions upon the simple addition of Ca2+, regardless of the presence or absence of oxygen (Figure 5). The luminescence is emitted by an intramolecular reaction of the protein, with the total light emission proportional to the amount of protein used. The definitions of luciferin and luciferase did not match the properties shown by aequorin, making it an exception at the time. However, in 1966, another bioluminescent protein was discovered in the parchment tube worm Chaetopterus, this one emitting light when a peroxide and Fe2+ are added; as with aequorin, the total light emission was proportional to the amount of protein used. Hence, a new term, 'photoprotein', was introduced for these unique bioluminescent proteins (Shimomura and Johnson, 1966).


Figure 5: Aequorin gives off light in the presence of Ca2+. Coelenterazine and apoaequorin are the components of aequorin (Source: Royal Society of Chemistry)
The aequorin photoprotein is an enzyme-substrate complex that is more stable than its dissociated components of enzyme and substrate. Because of this greater stability, the photoprotein complex occurs as the primary luminescent component in the light organs of luminous organisms, rather than its dissociated components. In the light organs of Aequorea, the complex aequorin is highly stable when Ca2+ is absent, but its less stable separate components, apoaequorin (the enzyme) and coelenterazine (the substrate), are hardly detectable in the jellyfish [12].
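The calcium-triggered aequorin reaction described above is commonly summarised as follows; this is the textbook scheme rather than a quotation from the cited sources. Note that no free oxygen is consumed at the moment of emission, because the substrate is already held inside the protein in a pre-oxygenated (peroxide) form:

```latex
\text{aequorin} + \text{Ca}^{2+}
\;\longrightarrow\;
\text{apoaequorin} + \text{coelenteramide} + \text{CO}_2 + h\nu
\quad (\lambda \approx 469\,\text{nm, blue})
```

Binding of Ca2+ changes the protein's shape, triggering the intramolecular oxidation of the bound coelenterazine to coelenteramide and the release of a blue photon.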

BIOLUMINESCENT ORGANISMS
Bioluminescence spans a range of ecosystems; the most comprehensive list of bioluminescent genera, assembled by Herring (1987) and Harvey, reports that of the seventeen phyla in the animal kingdom, at least eleven contain luminous forms [8]. Marine animals may emit this light either from their own luminescent organs or from bacteria on their bodies. It was once widely speculated that all such light is produced by bacteria; however, modern researchers have shown that bacteria emit light only after they have grown exponentially on dead fish and other organisms. In fact, bioluminescence in the sea is usually due to large numbers of jellyfish, snails, fish, dinoflagellates and many other species [8].

Figure 6: Bioluminescence of the dinoflagellate N. scintillans in the yacht port of Zeebrugge, Belgium (© Hans Hillewaert, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=10711494)

Alongside fireflies, dinoflagellates are among the most commonly encountered bioluminescent organisms. They are microscopic (ranging from about 30 μm to 1 mm) and are often found in coastal regions [18], where large numbers of them create the specks of light seen in the water. They are also responsible for red tides, the phenomenon in which the water is discoloured because the dinoflagellates rapidly grow and accumulate in high abundance. At night, bioluminescent dinoflagellates create a beautiful 'sparkle' along the beach; the resulting 'bioluminescent bays' have become famous tourist destinations in Puerto Rico and Jamaica [2].



FUNCTIONS OF BIOLUMINESCENCE
During the day, sunlight filters down into the ocean water, increasing visibility. Many marine creatures would be far more vulnerable to predators in the well-lit shallows, where there is nowhere to hide, so they swim deeper into the ocean where less visible light reaches. The result is one of the planet's great migration patterns: animals migrate vertically upwards at night, when it is dark, to look for food, and vertically downwards in daylight to hide. As a consequence, most of these creatures spend much of their time in dim light or total darkness, and bioluminescence helps them survive in several ways [10].

To attract or locate prey
Bioluminescence can be used by marine organisms to attract prey. For example, some fish have red-emitting light organs located under their eyes, and the unusual long-wavelength sensitivity of their eyes suggests that this red luminescence may be used to illuminate prey that are blind to red light, helping the fish hunt [2]. The anglerfish is another example: many species have a small glowing bulb known as the esca (the "lure") dangling in front of the mouth, which contains tiny luminescent bacteria of the genus Photobacterium, allowing the fish to attract prey and attack it. This is a 'win-win situation' for both parties: the anglerfish lures in prey, and in exchange the bacteria gain protection and nutrients from their host [15].

To act as a defence against predators
This is one of the most common functions of bioluminescence in the sea.
Many marine creatures such as crustaceans, squid, jellyfish and fish release their light-emitting chemicals into the water, producing clouds or particles of light that distract or blind the predator (a 'smokescreen'). Some even squirt their predators with luminescent slime, making them easy targets for secondary predators. Another way these organisms use their bioluminescence when attacked is to lure in secondary predators, giving them an opportunity to escape while the first attacker tries to escape as well. Some organisms also use their bioluminescence as a warning to predators, signalling the unpalatability(4) of the prey [9].

Camouflage
Counterillumination (Figure 6) is a process that some marine organisms, such as the hatchetfish or the ponyfish, use to camouflage themselves. The silhouette of the opaque animal is replaced with bioluminescence of a similar colour to the background light of the ocean, blending in with the light filtering down from the sky. This is most common amongst fishes, crustaceans and squid that inhabit the twilight zone of the ocean, where many predators have upward-looking eyes adapted to locate the silhouettes of prey [10].

Figure 6: Counterillumination (Source: Smithsonian Institution)

Keeping the school together
Luminescent shrimp, squid and fish form schools, and many of them migrate vertically between day and night. The luminous flashes from these organisms help keep the school together, as the light they emit can be detected over large distances. Dennell (1955) believed that the light of bathypelagic decapod crustaceans could be seen at distances of up to 100 m. Moreover, it is likely that luminescence is subject to diurnal rhythmicity and that the members of a school may mutually stimulate


each other, i.e. a luminescent euphausiid or myctophid flashes light, and other individuals of the species may flash in turn as they are stimulated by the luminescence. Hence, it is widely believed that the light helps regulate the degree of rising and sinking that the school needs to execute. Kampa & Boden (1957) found that flashing became most frequent during twilight migration, and the mean intensity, 1 × 10⁻⁴ μW/cm², equalled that of the light level with which the migration of the animals was associated [9].

Courtship
Fireflies are well known for their bioluminescence during the warm summer months, using their light to attract members of the opposite sex [13]. This is done through species-specific spatial or temporal patterns of light emission [9]. The same can be seen in marine life; for instance, the male Caribbean ostracod, a tiny crustacean, uses bioluminescent signals on its upper lip to attract females, and syllid fireworms, which inhabit the seafloor, move to the open water at full moon, where the females of some species, like Odontosyllis enopla, use bioluminescence to attract males while moving around in circles [17]. Some other functions of bioluminescence are shown in Figure 7.

Figure 7: Functions of bioluminescence: defence (blue), offence (magenta), and intraspecific communication (grey). Some animals use their luminescence in 2, 3 or even 4 different roles [2]


Luminous bacteria in the sea
Luminous bacteria form specific symbioses with some marine fish and squid, creating a 'win-win situation' for both the bacteria and the host: the bacteria provide the host with light that can be used to attract prey and to find a mate, while the host provides the bacteria with an ideal growth environment. For free-living bacteria, where the adaptive value is less evident, the most generally accepted hypothesis is that luminous bacteria growing on faecal pellets serve as an attractant, causing the pellets to be consumed and in turn introducing the bacteria to an animal's nutritious stomach and intestine [10].

APPLICATIONS OF BIOLUMINESCENCE
ANALYSIS
Bioluminescence does not only help marine animals and fireflies; it is also used as an analytical tool in various fields of science and technology. For example, firefly bioluminescence is used as a method of measuring ATP (vital for living cells) [12]. This is done by adding a known amount of luciferin and luciferase to a blood or tissue sample; the cofactor(2) concentration may then be determined from the intensity of the light emitted [13]. Ca2+-sensitive photoproteins such as aequorin from a jellyfish can also be used to monitor the intracellular Ca2+ that regulates many important biological processes, and certain analogues of Cypridina luciferin are utilized as probes for measuring the superoxide anion, an important but rare substance in biological systems. Furthermore, the green fluorescent protein (GFP), which was discovered alongside aequorin, is used as a highly useful marker protein in biomedical research [12].

In methods similar to that of measuring ATP, scientists have also used bioluminescent reactions to quantify other specific molecules that are not themselves involved in a bioluminescence reaction. They do this by attaching luciferase to antibodies; the antibody-luciferase complex is then added to a sample, where it binds to the molecule to be quantified. After washing to remove unbound antibodies, the molecule of interest can be quantified indirectly by adding luciferin and measuring the light emitted. Methods such as these, used to quantify certain compounds in biological samples, are called assays [13].
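The quantification step described above can be illustrated with a short sketch. This is a hypothetical example, not a real assay protocol: all concentrations, readings and the linear standard curve are invented for illustration, relying only on the stated proportionality between light output and ATP concentration.

```python
# Hypothetical luciferase-assay calibration (all numbers invented).
# Light output is proportional to ATP concentration, so a standard curve
# built from known ATP amounts converts a sample's measured light into [ATP].

def fit_line(xs, ys):
    """Least-squares slope and intercept for a linear standard curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Known ATP standards (micromolar) vs measured luminescence (arbitrary units)
standards_atp = [0.0, 1.0, 2.0, 4.0]
standards_light = [0.0, 10.0, 20.0, 40.0]

slope, intercept = fit_line(standards_atp, standards_light)

sample_light = 25.0                                # luminescence of the unknown sample
sample_atp = (sample_light - intercept) / slope    # invert the standard curve
print(f"Estimated ATP: {sample_atp:.2f} uM")       # -> 2.50 uM with these invented numbers
```

In a real assay the standards would not fall perfectly on a line, but the inversion step is the same idea.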

Figure 8: Principle of a simple luciferase reporter assay

In fact, luciferases are good reporter enzymes in the field of bioresearch. They are widely used to study various aspects of biological function, such as gene expression, post-translational modification, and protein-protein interaction in cell-based assays (Figure 8) [16]. This means that luciferases are used to study how individual genes are activated for protein expression or repressed to stop producing protein. The gene promoter is the genomic DNA sequence immediately upstream of the transcription start site [14], and a specific gene promoter can be attached to the DNA that codes for firefly luciferase and introduced into an organism. The activity of the gene promoter can then be studied by measuring the bioluminescence produced in the luciferase reaction. Thus, the luciferase gene can be used to "report" the activity of a promoter for another gene [13]. This is also known as quantitative visualization of gene expression.


BIOLUMINESCENT IMAGING
Using bioluminescent reporters is one of the most sensitive methods of visualising molecules, and it has the most cost-effective and simplest procedure. Bioluminescence imaging using luciferase reporters needs no exogenous(3) light illumination, and the luminescence reaction is quantitative. For example, in vitro bioluminescence imaging may use a reporter plasmid vector that includes a promoter sequence and an organelle-targeting luciferase gene sequence. The plasmid is transfected into the target cells, and the promoter region then regulates the expression of the luciferase gene. Firefly luciferin added to the medium is oxidised by the expressed firefly luciferase to produce luminescence. The light signal can therefore show the locality or mobility of organelles in living cells, measured by specialised equipment using a CCD photon imaging system (Figure 9) [16].

Figure 9: In vitro bioluminescence imaging for organelles in living cells (Source: [16])

In vivo bioluminescence imaging is most commonly used for cell tracking. Luciferase-expressing cancer cells, immune cells, stem cells, or other cell types can be imaged in small animals. For example, a reporter plasmid vector consisting of a promoter sequence, a luciferase gene sequence and an antibiotic resistance sequence is transfected into target cancer cells. The promoter region regulates the expression of the luciferase gene in these cells, and stable luciferase-expressing cells are then transplanted into a mouse. After allowing time for cancer cell growth, luciferin is injected into the body. Most commonly in these imaging experiments, firefly luciferin reaches the cells through the blood, where it is oxidised by the expressed firefly luciferase to produce light. The light signals show the location and size of the cancer cells in the body, providing information about their number and spatial distribution in the animal [16].

Figure 10: in vivo bioluminescence imaging using luminescent living cells [16]



OTHER APPLICATIONS
Bioluminescence can also be used to test water purity. Genetically modified microorganisms are placed into the water, and their degree of luminescence can be used to identify certain toxins in the solvent. Many scientists have studied this and found it particularly effective in detecting the presence of arsenic (a common water contaminant) and oil hydrocarbons [19]. Moreover, bioluminescence can be applied in daily life; for instance, Portuguese fishermen have made use of the luminescent secretion of Malacocephalus to illuminate their bait, and their success gives hope that artificial luminous lures could work in the sea [9].

IN THE FUTURE
Bioluminescent technology is still developing, and scientists are trying to find innovative ways of using bioluminescence reactions. One idea for the future is that, instead of using electricity, we could use bioluminescence for our light sources: bioluminescent algae stored in a long glass tube of salt water (a small self-contained ecosystem) could light up its surroundings. Many researchers are also developing methods to create bioluminescent trees to line streets, which would eliminate the need for more expensive electrical street lamps. Although the biggest challenge now is increasing bioluminescent brightness enough for drivers, I believe these 'glowing' trees planted along roadsides will become part of the norm within a few decades [19].

CONCLUSION
To conclude, I must admit that bioluminescence is a very complicated phenomenon, especially since studying it spans morphology, cell biology, physiology, spectroscopy, organic chemistry, biochemistry and genetics. A feeling of confusion when reading about bioluminescence is therefore understandable; many parts of this phenomenon are still unknown to mankind, and further research into the topic is required. Although it is hard to understand completely, bioluminescence is undoubtedly one of the most beautiful mysteries in nature. Not only has it helped numerous organisms survive, it has also been helping us advance science and technology, and it even has the potential to save lives. I believe bioluminescence can assist mankind in innovation and help shape our future significantly.

GLOSSARY

(1) Heat labile: affected by heat; a heat-labile enzyme is denatured by heat
(2) Cofactor: inorganic or organic chemicals, mostly metal ions or coenzymes, that assist enzymes during the catalysis of reactions
(3) Exogenously: originating from outside an organism
(4) Unpalatability: distastefulness


BIBLIOGRAPHY
[1] "Bioluminescence." Merriam-Webster, www.merriam-webster.com/dictionary/bioluminescence
[2] Haddock, Steven H.D., et al. "Bioluminescence in the Sea." Annual Reviews, www.annualreviews.org/doi/full/10.1146/annurev-marine-120308-081028 (website unavailable); pdfs.semanticscholar.org/ae51/4348866380fa87daf2fdfa72b81c673fd391.pdf
[3] "Fluorescence, Phosphorescence, Photoluminescence Differences." Edinburgh Instruments, www.edinst.com/blog/photoluminescence-differences/
[4] Monterey Bay Aquarium Research Institute (MBARI). "The Allure of Fluorescence in the Ocean." YouTube, 23 Aug. 2019, www.youtube.com/watch?v=whbeFXFZqiU&feature=youtu.be
[5] "Bioluminescence: Chemical Principles And Methods." Google Books, books.google.com.hk/books?hl=en&lr=&id=yMLICgAAQBAJ&oi=fnd&pg=PR5&dq=bioluminescence&ots=ITJrdb8S_X&sig=XdKtdlPhIm6YO3lLAGOEoK4wng&redir_esc=y#v=onepage&q&f=false
[6] "The Bioluminescence Web Page." biolum.eemb.ucsb.edu/
[7] Bhagat, Abhishek. "Harnessing Bioluminescence." LinkedIn SlideShare, 2 Nov. 2016, www.slideshare.net/AbhishekBhagat17/harnessing-bioluminescence
[8] McAda, Harleen Workman. "Bioluminescence." The American Biology Teacher, vol. 28, no. 7, 1966, pp. 530-532. JSTOR, www.jstor.org/stable/4441402
[9] Nicol, J. A. "Bioluminescence." Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, vol. 265, no. 1322, 1962, pp. 355-359. JSTOR, www.jstor.org/stable/2414178
[10] Widder, E. A. "Bioluminescence in the Ocean: Origins of Biological, Chemical, and Ecological Diversity." Science, vol. 328, no. 5979, 2010, pp. 704-708. JSTOR, www.jstor.org/stable/40655873
[11] "Luciferase Reporters." Thermo Fisher Scientific - US, www.thermofisher.com/hk/en/home/life-science/protein-biology/protein-biology-learning-center/protein-biology-resource-library/pierce-protein-methods/luciferase-reporters.html
[12] Shimomura, Osamu. Bioluminescence: Chemical Principles and Methods. World Scientific Publishing Co. Pte. Ltd., 2019
[13] MacKenzie, Steven. "Bioluminescence." The Gale Encyclopedia of Science, edited by K. Lee Lerner and Brenda Wilmoth Lerner, 5th ed., Gale, 2014. Gale In Context: Science, https://link.gale.com/apps/doc/CV2644030284/SCIC?u=hkharrow&sid=SCIC&xid=92707d85. Accessed 17 Oct. 2019
[14] "Gene Promoter - an Overview." ScienceDirect Topics, www.sciencedirect.com/topics/medicine-and-dentistry/gene-promoter
[15] Ward, L.K. "Meet the Tiny Bacteria That Give Anglerfishes Their Spooky Glow." 8 May 2018, ocean.si.edu/ocean-life/fish/meet-tiny-bacteria-give-anglerfishes-their-spooky-glow
[16] Ohmiya, Yoshihiro. Applications of Bioluminescence, photobiology.info/Ohmiya.html
[17] Ocean Portal Team. "Bioluminescence." Smithsonian Ocean, 18 Dec. 2018, ocean.si.edu/ocean-life/fish/bioluminescence
[18] "Latz Laboratory." Scripps Oceanography, scripps.ucsd.edu/labs/mlatz/bioluminescence/dinoflagellates-and-red-tides/dinoflagellate-bioluminescence/
[19] Konica Minolta. "Living Light: Is There a Future For Bioluminescence Technology?" Konica Minolta Sensing Americas, sensing.konicaminolta.us/blog/living-light-is-there-a-future-for-bioluminescence-technology/


SECTION 2

CONCEPTS IN SCIENCE


Resonance: The Maths
Mike Tsoi (Year 13, Peel)

1 INTRODUCTION

When designing structures, one of the most important issues to consider is resonance. What is resonance? In layman's terms, resonance describes the gradual build-up of an object's oscillation amplitude over time when it is driven at the right frequency. When designing a bridge, engineers take resonance into account: if resonance occurs, the bridge sways back and forth with increasing amplitude and the oscillation becomes more violent. Eventually the bridge may snap, as was the case for the Tacoma Narrows Bridge on November 7th 1940. More recently, in 2016, jumping fans made a football stadium resonate [1]. Fortunately, it didn't collapse.

2 HOW DOES RESONANCE OCCUR? When the frequency of a periodically applied force (which will now be referred to as the driving frequency) is equal to the natural frequency of the object, the force is most efficient at transferring energy to the object. The energy the object receives is converted to kinetic energy, so the oscillation amplitude increases over time.

3 THE MATHS BEHIND RESONANCE

The oscillation follows a linear second order non-homogeneous differential equation:

aÿ + bẏ + cy = F cos(ωt)    (1)

where a, b, c, F and ω are constants such that a > 0, b ≥ 0, c > 0, F > 0, ω > 0. ω is the driving frequency, ÿ is the acceleration at any given time, ẏ is the velocity, y is the displacement, and t is the time since the oscillation started. We can solve (1) completely, but the solution - and hence the behaviour of the oscillation - depends on the damping coefficient b. Let's say that the system is undamped, i.e. b = 0. Thus, (1) becomes

aÿ + cy = F cos(ωt)    (2)

For convenience, define ω0 = √(c/a), where ω0 is the natural frequency of the oscillating object. To solve (2), we must consider two cases: ω ≠ ω0 and ω = ω0.

3.1 FIRST CASE (ω ≠ ω0)

When ω ≠ ω0, the auxiliary equation is

am² + c = 0

The solutions to the auxiliary equation are imaginary: m = ±iω0.



∴ The complementary function is

y = Asin(ω0t) + Bcos(ω0t)

where A and B are arbitrary constants. The particular integral takes the form

y = l sin(ωt) + μ cos(ωt)

After solving for l and μ, we get l = 0 and

μ = F / (a(ω0² − ω²))

∴ The general solution to (2) is

y = A sin(ω0t) + B cos(ω0t) + [F / (a(ω0² − ω²))] cos(ωt)    (3)

Consider y(0) = 0, then

B = −F / (a(ω0² − ω²))

Consider ẏ(0) = 0, then

A = 0

∴ The particular solution is

y = [F / (a(ω0² − ω²))] (cos(ωt) − cos(ω0t))    (4)

Using Desmos, by fixing a, F, and ω0 at a value and varying ω, we can see that as ω approaches ω0, the amplitude (the maximum value of y) increases.
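The same behaviour can be checked numerically without Desmos. The sketch below evaluates the undamped particular solution, y(t) = F(cos(ωt) − cos(ω0t)) / (a(ω0² − ω²)), over a long time window with F = a = ω0 = 1, and confirms that the sampled peak amplitude grows as ω approaches ω0.

```python
import math

# Numerical check: peak of |y(t)| for the undamped particular solution,
# y(t) = F * (cos(w*t) - cos(w0*t)) / (a * (w0**2 - w**2)), with F = a = w0 = 1.

F = a = w0 = 1.0

def y(t, w):
    return F / (a * (w0**2 - w**2)) * (math.cos(w * t) - math.cos(w0 * t))

def peak_amplitude(w, t_max=400.0, steps=40000):
    """Largest |y| found by sampling t in [0, t_max]."""
    return max(abs(y(i * t_max / steps, w)) for i in range(steps + 1))

amps = [peak_amplitude(w) for w in (0.8, 0.9, 0.95, 0.99)]
print(amps)  # grows steadily as w gets closer to w0 = 1
```

The window is long enough to contain a full beat envelope for each ω, so the sampled peaks approach the analytic amplitude 2F / (a(ω0² − ω²)).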

Figure 1: Graphs of the equation given by (4) for several values of ω. F = a = ω0 = 1.


In Figure 1:
1. The red curve is where ω = 0.8
2. The blue curve is where ω = 0.85
3. The green curve is where ω = 0.9
4. The purple curve is where ω = 0.95

Figure 2: Graphs of the equation given by (4) for several values of ω. F = a = ω0 = 1

In Figure 2:
1. The red curve is where ω = 1.2
2. The blue curve is where ω = 1.15
3. The green curve is where ω = 1.1
4. The purple curve is where ω = 1.05

Why does the amplitude increase as ω approaches ω0? Consider the denominator, a(ω0² − ω²), of the coefficient in (4).

As ω approaches ω0, the denominator becomes smaller in magnitude, so the output becomes bigger.

3.2 SECOND CASE (ω = ω0)

In (4), we couldn't directly substitute ω = ω0, as the denominator would be zero, meaning the output would be undefined. This problem arises because the general solution (3) is only valid for ω ≠ ω0. When ω = ω0, we use the same method as in the first case, but this time the particular integral takes the form

y = t(l sin(ω0t) + μ cos(ω0t))    (5)



Solving for l and μ gives l = F/(2aω0) and μ = 0, and applying y(0) = 0 and ẏ(0) = 0 as before yields

y = [F/(2aω0)] t sin(ω0t)    (6)

We see that there is a factor of t.

∴ The amplitude of the oscillation increases linearly with time. Resonance is achieved.

Figure 3: Graph of the equation given by (6) with F/(2aω0) = 1. The straight line is plotted to show the amplitude increasing linearly over time.

4 HOW TO PREVENT RESONANCE
In the introduction, we discussed the harmful effects of resonance. As we saw in the previous chapter, resonance only occurs when the driving frequency is equal to the natural frequency. Engineers can use this fact to prevent resonance. Damping a system (which means taking energy away from the oscillator, for instance by friction) reduces both the natural frequency and the amplitude. Reducing the natural frequency means the driving frequency no longer equals the natural frequency, so resonance is avoided. Reducing the amplitude makes the oscillation less violent, so even if the driving frequency shifted to match the new natural frequency, the amplitude would still be significantly lower. Hence, damping allows structures to withstand the oscillation.
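A quick numerical experiment (not from the article, but consistent with it) makes the same point. The sketch below integrates aÿ + bẏ + cy = F cos(ωt) with a standard fourth-order Runge-Kutta scheme and compares the largest displacement of an undamped and a damped system, both driven at the undamped natural frequency.

```python
import math

# Integrate a*y'' + b*y' + c*y = F*cos(w*t) with RK4 and record the peak |y|.
a, c, F = 1.0, 1.0, 1.0
w0 = math.sqrt(c / a)  # natural frequency

def simulate(b, w, t_max=200.0, dt=0.005):
    y, v, t, peak = 0.0, 0.0, 0.0, 0.0

    def acc(t, y, v):
        return (F * math.cos(w * t) - b * v - c * y) / a

    while t < t_max:
        # RK4 step for the coupled system y' = v, v' = acc(t, y, v)
        k1y, k1v = v, acc(t, y, v)
        k2y, k2v = v + dt/2*k1v, acc(t + dt/2, y + dt/2*k1y, v + dt/2*k1v)
        k3y, k3v = v + dt/2*k2v, acc(t + dt/2, y + dt/2*k2y, v + dt/2*k2v)
        k4y, k4v = v + dt*k3v, acc(t + dt, y + dt*k3y, v + dt*k3v)
        y += dt/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
        peak = max(peak, abs(y))
    return peak

undamped = simulate(b=0.0, w=w0)   # amplitude keeps growing linearly
damped = simulate(b=0.2, w=w0)     # amplitude settles at a finite steady state
print(undamped, damped)
```

Even driven exactly at resonance, the damped oscillator's amplitude levels off near F/(bω0), while the undamped one grows without bound.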



5 THE MATHS BEHIND DAMPING

5.1 THE GENERAL SOLUTION

Consider the graph of y = e^(−x):

Figure 4: A graph to show the behaviour of y = e^(−x) over time

Either way, due to the e^(−λt) and e^(−μt) terms, all of the possible complementary functions tend to zero (resulting in the behaviour of the system becoming unchanging) as t tends to infinity. For this reason, the complementary function is referred to as the transient solution [2], which describes the behaviour of an oscillator from the start of oscillation up to the state in which the behaviour of the system is unchanging, called the steady state [2].



As mentioned before, the complementary function tends to zero as t tends to infinity. After a certain amount of time has passed, the complementary function becomes negligible; hence the remaining part of the general solution, the particular integral, is referred to as the steady state solution (the state when the behaviour of the system is unchanging).

5.2 THE AMPLITUDE OF THE STEADY STATE SOLUTION

Consider the steady state solution (8). Its amplitude is

F / √((c − aω²)² + (bω)²)

Using Desmos, by fixing a, F, and ω0 at a value and varying b, we can see that as b increases, the maximum amplitude decreases and occurs at a lower frequency.
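The same scan can be done in code instead of Desmos. Assuming the standard steady-state amplitude A(ω) = F / √((c − aω²)² + (bω)²) for this equation, the sketch below locates the peak for several damping coefficients b.

```python
import math

# Scan the steady-state amplitude A(w) = F / sqrt((c - a*w**2)**2 + (b*w)**2)
# with F = a = c = 1 (so w0 = 1) for increasing damping b.

F = a = c = 1.0

def amplitude(w, b):
    return F / math.sqrt((c - a * w**2)**2 + (b * w)**2)

def peak(b, n=20000):
    """Return (peak amplitude, frequency of the peak) over w in (0, 2]."""
    best = max(range(1, n + 1), key=lambda i: amplitude(2 * i / n, b))
    w = 2 * best / n
    return amplitude(w, b), w

for b in (0.2, 0.5, 1.0):
    A, w = peak(b)
    print(f"b = {b}: peak amplitude {A:.2f} at w = {w:.3f}")
```

As b grows, the printed peak shrinks and shifts below ω0 = 1, matching the trend seen in Figure 5.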



Figure 5: Graphs showing the amplitude of the steady state solution as a function of ω for several values of b. F = a = ω0 = 1

6 CONCLUSION
To conclude, resonance has a very destructive effect on structures such as bridges and stadiums. The collapse of bridges in the 20th century revealed this destructive effect, yet even now engineers are sometimes careless about resonance. For example, the Millennium Bridge had to be closed 2 days after it opened because the large number of people on it were walking in synchrony at a frequency that matched the resonant frequency [4], causing it to sway dangerously. Engineers had taken into account the driving frequency from the wind and prevented wind-induced resonance, but they hadn't taken into account the frequency of people's footsteps [5]. Hence, we can never be too careful with resonance.



BIBLIOGRAPHY
[1] Hunter, Matt. "Stands Shake as Thousands of German Football Fans Jump up and down to Celebrate Goal." Daily Mail Online, Associated Newspapers, 25 May 2016, https://www.dailymail.co.uk/news/article-3607671/Rocking-stadium-Stands-shake-thousands-German-football-fans-jump-celebrate-teams-goal.html
[2] Yaseen, Farhan. "What Is the Difference between the Steady-State Solution and the Transient Solution?" Bayt.com, 5 June 2016, https://specialties.bayt.com/en/specialties/q/294816/what-is-the-difference-between-the-steady-state-solution-and-the-transient-solution/
[3] Weckesser, W. (2003). Notes on the Periodically Forced Harmonic Oscillator. Colgate University.
[4] "Swaying Millennium Bridge to Close after Two Days." The Guardian, Guardian News and Media, 12 June 2000, https://www.theguardian.com/uk/2000/jun/12/2
[5] Strogatz, Steven. "Explaining Why The Millennium Bridge Wobbled." ScienceDaily, 3 Nov. 2005, https://www.sciencedaily.com/releases/2005/11/051103080801.htm



SMELL AND THE MEMORIES IT BRINGS
Hoikiu Wong (Year 12, Wu)



The Chicken McNugget Theorem Josiah Wu (Year 12, Churchill)

Figure 1: Who doesn’t like a large box of Chicken McNuggets? (Source: https://www.mcdonalds.com/gb/en-gb/product/20-chicken-mcnuggets-sharebox.html)

PREFACE
If you go to any McDonald's store in England, you will see that Chicken McNuggets are sold in packages of 6, 9 or 20 (excluding the Happy Meal). But it wasn't like that 30 years ago; in fact, Chicken McNuggets were sold locally only in boxes of 9 and 20 when they were first introduced to the menu in 1983. Using only combinations of the 1983 packages, certain amounts of McNuggets can be bought, such as 29 (20 + 9) and 38 (20 + 9 + 9). These are referred to as McNugget numbers. However, some numbers of nuggets are unpurchasable, such as 12, 25 and 37. These are referred to as non-McNugget numbers. Several mathematicians at the time wondered, "What's the largest number of McNuggets one cannot purchase (i.e. what's the largest non-McNugget number)?" It turns out a solution to this problem was discovered a century before McDonald's even existed, by James Joseph Sylvester in 1882, and it was later nicknamed the "Chicken McNugget" theorem. This article will demonstrate the reasoning and proof leading up to this mathematical result.

THEOREM
The Chicken McNugget Theorem is as follows:

"Given two positive integers m, n such that they are relatively prime*, the largest non-McNugget number (the largest integer that cannot be expressed in the form mx + ny, where x and y are integers ≥ 0) is always mn − m − n."

This means that for packages of 9 and 20, the largest number of Chicken McNuggets one cannot buy is 9 × 20 − 9 − 20 = 151 nuggets. Learning this result was mind-blowing: the theorem implies that it is possible to purchase any number of Chicken McNuggets above 151 through combinations of these packages.
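Since the 1983 packages are small, the theorem is easy to sanity-check by brute force; the short sketch below classifies every number up to 500.

```python
# Brute-force check of the theorem for m = 9, n = 20:
# k is a McNugget number iff k = 9x + 20y for some integers x, y >= 0.

def is_mcnugget(k, m=9, n=20):
    # try every feasible count of n-boxes; the rest must divide evenly into m-boxes
    return any((k - n * y) % m == 0 for y in range(k // n + 1))

non_mcnugget = [k for k in range(1, 500) if not is_mcnugget(k)]
print(max(non_mcnugget))  # -> 151, matching mn - m - n = 9*20 - 9 - 20
```

Every number from 152 up to the search limit turns out to be purchasable, exactly as the theorem predicts.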

A ROUGH PROOF
A McNugget number is an integer k such that it is expressible as a sum of non-negative integer multiples of m and n, or in mathematical terms, mx + ny = k {x, y ∈ ℕ, gcd(m, n) = 1, where gcd represents the Greatest

* Numbers are relatively prime when they have no common divisors other than 1



Common Divisor between the two numbers}. In this case, let m = 9 and n = 20 so that we can confirm our results at the end. We're trying to find the largest non-McNugget number; let it be denoted kmax. We set up the equation mx + ny = k for k = 1, 2, 3, 4, 5, 6... For each equation, we find integer solutions for x and y such that 0 ≤ y < m (it can be proven that this is possible for all values of k). We then plot the corresponding solutions onto the x-y graph. For example, because one solution to 9x + 20y = 1 is x = −11 and y = 5, we plot (−11, 5). Repeat this process sufficiently many times, and you should get a graph like this:

Figure 2: Plots of (x, y) for solutions to 9x + 20y = k, where 1 ≤ k ≤ 160

Every 'red dot' is a point where x ≥ 0. Its coordinates are a solution to mx + ny = k1, where k1 is a certain McNugget number: because both coordinates are larger than or equal to 0 (x, y ≥ 0), k1 is expressible as a sum of non-negative multiples of m and n, by definition. In contrast, every 'blue dot' is a point where x < 0. It can be proven that a blue dot corresponds to a k with no solutions to mx + ny = k where x, y ∈ ℕ, i.e. a non-McNugget number. We just need to find the blue dot that corresponds to the largest k value, which means we have to maximise k. Since x < 0 and 0 ≤ y < m for any blue dot, the maximum value of x is −1, and the maximum value of y is m − 1. Hence the coordinates of the point with maximum k are (−1, m − 1); it can be proven that such a point always exists for any m and n. Substituting these coordinates back into the equation yields:

mx + ny = kmax
m(−1) + n(m − 1) = kmax
kmax = mn − m − n

which is the desired result.
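The normalisation used when building the plot (choosing the representative solution with 0 ≤ y < m) can be sketched in a couple of lines:

```python
# Given any integer solution (alpha, beta) to m*x + n*y = k, shifting by
# multiples of (n, -m) yields the unique solution whose y lies in [0, m).

def normalise(alpha, beta, m=9, n=20):
    f = beta // m                      # floor division: beta = m*f + h, 0 <= h < m
    return alpha + n * f, beta - m * f

# (x, y) = (-11, 5) solves 9x + 20y = 1 and already has 0 <= y < 9;
# (9, -4) solves the same equation and normalises to the same representative.
print(normalise(-11, 5), normalise(9, -4))  # -> (-11, 5) (-11, 5)
```

Shifting by (n, −m) leaves mx + ny unchanged, which is why every k has exactly one plotted point.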

THE PROOF IN MORE DETAIL

Some extra details were purposely left out above in order to promote a brief understanding of the proof. Such details are elaborated below, making use of lemmas (minor, proven propositions used as stepping stones to a larger result, also known as 'helping theorems' or 'auxiliary theorems').


PROVING THAT A SOLUTION TO x AND y MUST EXIST FOR mx + ny = k SUCH THAT 0 ≤ y < m AND k IS ANY POSITIVE INTEGER

For simplicity, we will use m = 9 and n = 20 in the following section; replacing every '9' mentioned within the lemmas below with m and every '20' with n will generalise the lemmas for all possible values of m and n where gcd(m, n) = 1. Through Lemmas 1.1 and 1.2, there is at least one pair (x, y) that solves the equation. Let such a pair of solutions be denoted (α, β). In addition, Lemma 1.3 shows that if (α, β) is a solution, then so are (α + n, β − m), (α + 2n, β − 2m), (α − n, β + m), etc. We can generalise this by writing (α + nf, β − mf), which is always a pair of solutions to the equation for any integer f. Observe that there is always a single value of f such that 0 ≤ β − mf < m, no matter what integer value β takes. This is because any β can be rewritten as a multiple of m plus a remainder, i.e. β = mg + h, where g ∈ ℤ and 0 ≤ h < m (this is the division algorithm). As f can be chosen freely, we let f = g, hence y = β − mf = mg + h − mg = h, achieving 0 ≤ β − mf < m, or 0 ≤ y < m.

LEMMA 1.1

'There must exist integer solutions a and b for which ma + nb = 1.'

The following proof is similar to that of Bezout's Identity.

PROOF
Let S denote the set of all positive integers expressible as integer combinations of m and n:

S = {z = mx + ny | x, y ∈ ℤ and z > 0} **

Notice S has to be non-empty, because m and n are definitely members of this set, as m = m × 1 + n × 0 and n = m × 0 + n × 1. Because S is non-empty and bounded below by 0, there must exist a smallest element. Let this smallest element be denoted s, where s = ma + nb for some integers a and b. The following part proves that z is divisible by s for all z in S (which can be notated as s | z).

This means that r can be rewritten as a sum of multiples of m and n, and since r is positive, r must be an element of S. However, when carrying out a division, the remainder is always smaller than the divisor, so it must also be true that r < s. These two implications lead to a contradiction: we previously proposed that s is the smallest element of the set, but if z is not divisible by s, then r would be a smaller element of the same set. Hence the supposition cannot work logically, and so the opposite must be correct. Now we know that s | z, and remember that this is true for all values of z. As we are already aware that m and n are elements of S, it follows that s must divide both m and n. Therefore s = 1, as gcd(9, 20) = 1. Finally, as s is an element of the set S, 1 can therefore be expressed as a sum of multiples of m and n, and so we arrive at the desired result.

** z is a member of the set; the notation | x, y ∈ ℤ means 'given that x and y are integers'
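Lemma 1.1 is an existence proof; for readers who want to compute the pair (a, b) explicitly, the extended Euclidean algorithm does so constructively. Here is a minimal Python sketch (an illustrative aside, not part of the original proof; the function name is my own):

```python
def bezout(m, n):
    """Return integers (a, b) with m*a + n*b == gcd(m, n),
    via the extended Euclidean algorithm."""
    if n == 0:
        return 1, 0
    a, b = bezout(n, m % n)
    # gcd(m, n) == gcd(n, m % n) == n*a + (m % n)*b == m*b + n*(a - (m // n)*b)
    return b, a - (m // n) * b

a, b = bezout(9, 20)
assert 9 * a + 20 * b == 1   # gcd(9, 20) = 1, as in the text
```

For m = 9 and n = 20 this returns (9, -4), since 9 × 9 + 20 × (-4) = 1.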



LEMMA 1.2

'All positive integers are expressible in the form mx + ny, where x, y ∈ ℤ (x and y are integers which can be negative or positive).'

PROOF

From the previous lemma, there exist integer solutions (a, b) to ma + nb = 1. Let k be any positive integer. If we multiply both sides of the equation by k, we get mak + nbk = k ⇒ m(ak) + n(bk) = k. This shows that k is expressible as the sum of multiples of m and n. As k can be any positive integer, we have therefore verified the lemma.

LEMMA 1.3

'If (α, β) is a pair of integer solutions to mx + ny = k (where k is any positive integer), then (α + n, β - m) must also be a pair of integer solutions to the same equation.'

PROOF

This lemma can be visualised by plotting the line of the equation directly on an x-y graph (that is, plotting 9x + 20y = k).

Figure 3: Graph of 9x + 20y = k where k = 29. In this instance, α = 1 and β = 1.

Notice the red line is the range of all possible pairs of solutions (x, y) for mx + ny = k. This means the coordinates of any point on the red line are a solution to the equation. We therefore have to prove that (α + 20, β - 9) is one such point. To prove that a point lies on a line, we simply substitute the coordinates of that point into the equation and see if both sides are equal. As (α, β) is on the line ⇒ mα + nβ = k. Plugging in (α + n, β - m) yields:

m(α + n) + n(β - m) = mα + mn + nβ - nm = mα + nβ

As m(α + n) + n(β - m) = mα + nβ = k, the points (α, β) and (α + n, β - m) must lie on the same line; hence they both must be pairs of solutions to the original equation. This lemma means that one can generate another pair of solutions from a known pair of solutions of the equation mx + ny = k. It also implies that there are infinitely many possible pairs of integer solutions (regardless of their signs) to the equation, as the same principle can be applied over and over again to every pair of solutions.
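The shifting in Lemma 1.3, combined with the choice of f described at the start of this section, can be sketched numerically. This Python fragment (an illustrative aside; the function name is my own) slides any known solution along the line until 0 ≤ y < m:

```python
def normalise(alpha, beta, m, n):
    """Shift a solution (alpha, beta) of m*x + n*y = k along the line,
    by Lemma 1.3, until the y-coordinate lies in [0, m)."""
    f = beta // m                 # the g in beta = m*g + h, with 0 <= h < m
    return alpha + n * f, beta - m * f

m, n = 9, 20
x, y = normalise(9, -2, m, n)          # (9, -2) solves 9x + 20y = 41
assert m * x + n * y == m * 9 + n * (-2)   # still on the same line
assert 0 <= y < m                          # y now lies in [0, m)
```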



PROVING THAT IF (x, y) IS A BLUE DOT, THEN THERE ARE NO INTEGER SOLUTIONS TO mx + ny = k WHERE x, y ∈ ℕ

For a pair of solutions (x, y) to yield a McNugget Number, it must lie on the line mx + ny = k and be in the first quadrant. We previously proved that there must exist at least one integer solution to y in mx + ny = k for which 0 ≤ y < m. We can rephrase this by saying that, out of all the points with integer coordinates (derived from Lemma 1.3) on the line mx + ny = k, there must be a point on the x-y graph such that 0 ≤ y < m. Let such a point have coordinates (α, β) and be denoted P. If (α, β) lies in the first quadrant (P is a 'red dot'), then non-negative integer solutions to x and y exist, so k is purchasable in packs of m and n, and hence k is a McNugget number. The problem is that if it doesn't (P is a 'blue dot'), how do we know that the neighbouring points derived from Lemma 1.3 (such as (α + n, β - m), (α + 2n, β - 2m), (α - n, β + m), etc.) also do not lie in the first quadrant? We can deduce that this must be true, because if P is not in the first quadrant, then the neighbouring points to the left of P will have negative x coordinates, and the ones to the right of P will have negative y coordinates. Hence, of all the points on the line, none lies in the first quadrant, and so there are no possible solutions. This can be visualised from Figure 4 below:
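The whole argument collapses into a very small check: k is a McNugget number exactly when one of the candidate points with 0 ≤ y < m has a non-negative x. A Python sketch of this test (the function name is my own):

```python
def is_mcnugget(k, m=9, n=20):
    """True when k = m*x + n*y has a solution with x, y >= 0, i.e. k is
    purchasable in packs of m and n (gcd(m, n) = 1 assumed). By the
    argument above, it suffices to test the points with 0 <= y < m."""
    return any((k - n * y) % m == 0 for y in range(m) if k - n * y >= 0)

assert is_mcnugget(29)        # 29 = 9*1 + 20*1, the red-dot example above
assert not is_mcnugget(151)   # 151 turns out to be the largest non-McNugget number
```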

Figure 4: (Left) Scenario if P is a 'red dot'. (Right) Scenario if P is a 'blue dot'.

PROVING THAT A BLUE DOT WITH COORDINATES (-1, m - 1) MUST EXIST FOR ANY VALUES OF m AND n

The rough proof states that, by maximising k, the blue dot responsible for kmax has coordinates (-1, m - 1). But how do we guarantee that such a point exists for all possible values of m and n? In this section, I want to demonstrate that there is a pattern in the successive plots of (x, y) for mx + ny = k. For the sake of clarity of the proof, we define a string as a collection of the coordinates of m consecutive points on the graph. Each string consists of the points with k-values from 1 to m, m + 1 to 2m, 2m + 1 to 3m, and so on. We shall look at the coordinates of (x, y) from the data used to generate the graph in Figure 5. Note that the set of points is computed on the basis that m = 9 and n = 20.



Figure 5: Distribution of points when m = 9, n = 20 We can then compare this to another set of points generated from different values of m and n. Here, m = 12 and n = 17.

Figure 6: Distribution of points when m = 12, n = 17

Observe from both Figure 5 and Figure 6 that every string contains an identical sequence of y coordinates to the others. Furthermore, notice that the sequence of x coordinates increments by one for every 9 points in Figure 5, and every 12 points in Figure 6. We can generalise these trends within the data and prove them. Let the solutions to x and y be denoted xk and yk for every value of k. For example, x1 and y1 represent the solutions to x and y for mx + ny = k where k = 1. We wish to prove that all strings have the same sequence of y coordinates, or in mathematical terms:

y1 = ym+1, y2 = ym+2, y3 = ym+3, ... etc. ⇒ yk = ym+k for any k ∈ ℕ

and that the sequence of x coordinates increments by 1 from each string to the next. This can be reinterpreted as:

xm+1 = x1 + 1, xm+2 = x2 + 1, xm+3 = x3 + 1, ... etc. ⇒ xk + 1 = xm+k for any k ∈ ℕ



To prove this, consider substituting (xk, yk) into the equation, which gives mxk + nyk = k. Now add m to both sides:

mxk + nyk + m = m + k
m(xk + 1) + nyk = m + k

Because the RHS equals m + k, and (xm+k, ym+k) is by definition the solution for the value m + k, we can further conclude that:

m(xk + 1) + nyk = m + k = mxm+k + nym+k

Hence, by comparing coefficients, xk + 1 = xm+k and yk = ym+k, as desired. The result of this proof is that, as k increases, successive strings sit on the graph exactly one x-coordinate apart (this is evident from both Figure 5 and Figure 6). Therefore, for any values of m and n, there must exist a 'blue dot' with coordinates (-1, m - 1), which yields the largest non-McNugget Number.
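The conclusion - that the largest non-McNugget number comes from the blue dot (-1, m - 1), giving k = m(-1) + n(m - 1) = mn - m - n - can be checked by brute force for small coprime m and n. A short illustrative sketch (not from the article; the function name is my own):

```python
def largest_non_mcnugget(m, n):
    """Brute-force the largest k not expressible as m*x + n*y with
    x, y >= 0 (m, n coprime), for comparison with m*n - m - n."""
    def expressible(k):
        return any((k - n * y) % m == 0 for y in range(m) if k - n * y >= 0)
    # everything from m*n onwards is expressible, so searching below m*n suffices
    return max(k for k in range(1, m * n) if not expressible(k))

assert largest_non_mcnugget(9, 20) == 9 * 20 - 9 - 20     # 151
assert largest_non_mcnugget(12, 17) == 12 * 17 - 12 - 17  # 175
```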

CONCLUSION

You might be wondering about the applications of this theorem. Unfortunately, there are none (yet). In fact, I stumbled upon this theorem when I happened to come across a list of "unusual maths theorem names". Such names include:
• The Ugly Duckling Theorem
• The Hairy Ball Theorem
• The Law of the Unconscious Statistician
and others which are too inappropriate to appear in this article. Nevertheless, I feel that the original problem linked to the theorem is intriguing, not only because it can be easily understood, but also because it raises a question we had not thought about. This theorem therefore demonstrates the spirit of mathematics: finding answers to unresolved questions, purely out of curiosity.

BIBLIOGRAPHY • https://artofproblemsolving.com/wiki/index.php/Chicken_McNugget_Theorem • https://www.jstor.org/stable/2369536?origin=crossref&seq=8#metadata_info_tab_contents • Sylvester, James Joseph. “On Subvariants, i.e. Semi-Invariants to Binary Quantics of an Unlimited Order”. Published by the American Journal of Mathematics, 1882 • http://matwbn.icm.edu.pl/ksiazki/aa/aa65/aa6545.pdf • https://proofwiki.org/wiki/B%C3%A9zout%27s_Lemma/Proof_4 [Proof for Lemma 1.1] • https://en.wikipedia.org/wiki/Coin_problem • https://www.coursehero.com/forgot_password.php/file/38580951/SMPF-Chicken-nuggets-5pdf/ • https://proofwiki.org/wiki/B%C3%A9zout%27s_Lemma – Proof for Bezout’s Identity • https://www.reddit.com/r/math/comments/59sqad/what_theorem_has_the_funniest_name/ - List of Unusual Math Theorems’ Names • https://www.youtube.com/watch?v=vNTSugyS038&t=6s



Blackbody Radiation and Planck’s Constant Edward Wei (Year 10, Peel)

By the late 19th century, the general consensus was that nothing more remained to be discovered in physics. Physicists could calculate the motion of material objects using Newton's laws of classical mechanics, and describe the properties of radiant energy using Maxwell's mathematical relationships. The universe appeared to be a simple and orderly place, containing matter (which consisted of particles that had mass and whose location and motion could be accurately described) and light (which was viewed as having no mass and whose exact position in space could not be fixed). Matter and energy were considered distinct and unrelated phenomena. However, several contradictions continued to puzzle classical physicists. The one we will be looking at is known as the "ultraviolet catastrophe".

Figure 1: A diagram of an electromagnetic wave (Source: Clinuvel)

Light, or electromagnetic radiation, is a form of energy which travels as a wave, as shown in Figure 1 (note: this is a simplified depiction, but for our purposes it is enough). The crest is the tip of a wave; the trough is the bottom. The wavelength is the distance between the crests of two successive waves (measured in metres), and the amplitude is half the distance between the crest and the trough (also measured in metres). The wave period is the time taken to complete one cycle, measured in seconds; the frequency is the number of cycles completed per second, measured in hertz (1/s). The frequency and wavelength of a wave are inversely proportional to each other.

Figure 2: Diagram of the electromagnetic spectrum. (Source: Mini Physics)

The electromagnetic spectrum divides light into several categories as its frequency/wavelength varies. Higher frequencies (shorter wavelengths) equate to higher energy, and vice versa. The relevant part is the visible section, which is the only part of the entire spectrum that we can see. Our eyes are not sensitive enough to detect infrared, whilst the lenses of our eyes block out ultraviolet (it is harmful to cells). We are able to see objects because they either emit light themselves, or they absorb light and reflect it in all directions, and some of the reflected light just so happens to enter our eyes.



The contradiction that continued to puzzle classical physicists came in the form of 'black bodies'. Black bodies are idealised physical objects that do not simply reflect the light that falls on them: they absorb all frequencies of electromagnetic radiation that fall upon them, and they can also emit radiation of any wavelength, though it is not usually in the visible light range, so we do not see it. However, the highest-intensity wavelength emitted depends on temperature (Figure 3). The radiation emitted is the kind released by any object with a temperature above absolute zero (-273 °C). We do not usually see this radiated energy: at ambient temperature, the wavelength of the emitted radiation falls beyond the visible light spectrum, in the infra-red. This is why we can feel hot things even when we don't come into direct contact with them or see any glow. Let's take a real-life example: molten iron glows red because, while most of the energy radiated from it is within the infra-red spectrum, a portion of the energy has a high enough frequency to be visible red light. As objects get hotter, they emit radiation with shorter and shorter wavelengths, moving into the visible spectrum or beyond. Classical physicists used the Rayleigh-Jeans law, I = 2f²kBT/c², to approximate the intensity of electromagnetic radiation emitted by a black body as a function of frequency at a given temperature, where f is frequency, kB is Boltzmann's constant, T is the temperature (in kelvin) and c is the speed of light (in metres per second).

Figure 3: A graph comparing radiation intensity predicted by the Rayleigh-Jeans Law and the actual data. (Source: chem.libretexts.org) The equation predicts that as the wavelength decreases, radiation intensity should increase without a limit at any temperature, as shown by the dotted line. It does not explain the sharp decrease in the radiation intensity emitted at shorter wavelengths. In fact, it does such a poor job of describing what really happens that it was dubbed the “ultraviolet catastrophe”. This contradiction was solved when German physicist Max Planck proposed that the energy of electromagnetic waves was quantized. We say something is quantized when the number of possible values is limited to certain discrete magnitudes, which are multiples of a constant value (quanta). Although quantization may be an unfamiliar concept, it’s all around us. For example, US money is an integral multiple of pennies. Musical instruments such as the piano can only produce certain musical notes like C or F sharp. Even electrical charges are quantized - ions can have a charge of 1 or -2, but not 1.46.
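The catastrophe can be seen numerically. The sketch below (an illustrative aside; Planck's full radiation law, B = 2hf³/c² divided by (e^(hf/kBT) - 1), is quoted here without derivation) shows the two laws agreeing at low frequency and parting company in the ultraviolet:

```python
import math

h  = 6.626e-34   # Planck's constant, J s
kB = 1.381e-23   # Boltzmann's constant, J/K
c  = 2.998e8     # speed of light, m/s

def rayleigh_jeans(f, T):
    """Classical spectral radiance: grows without limit as f increases."""
    return 2 * f**2 * kB * T / c**2

def planck(f, T):
    """Planck's law: matches Rayleigh-Jeans at low f, falls off at high f."""
    return (2 * h * f**3 / c**2) / math.expm1(h * f / (kB * T))

T = 5000  # kelvin
print(rayleigh_jeans(1e11, T), planck(1e11, T))  # low frequency: nearly equal
print(rayleigh_jeans(3e15, T), planck(3e15, T))  # ultraviolet: wildly different
```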



This was the equation Planck derived:

En = nhν

E = the energy in joules, representing the total amount of energy radiated from a black body (although generally E now represents the amount of energy in a light beam)
n = the number of photons in the light beam, a very large integer
h = 'Planck's constant', 6.626 × 10⁻³⁴ joule-seconds
ν = the frequency of the light wave in hertz

Planck explained that at low temperatures, radiation with relatively large wavelengths (low frequencies) is emitted. As temperature increases, the emission of radiation with shorter wavelengths (higher frequencies) becomes more probable. This is why hotter black bodies are able to emit electromagnetic radiation at higher maximum intensities. However, at any given temperature, objects would rather emit many low-energy photons than a single high-energy photon of equivalent total energy. The 'discovery' of the quantised nature of electromagnetic radiation was an incredible breakthrough. Around the turn of the 20th century, physicists were so sure that they had discovered everything that one respected member of the field reportedly said, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement." - Lord Kelvin (1)

Planck's discovery led to a new branch of physics, quantum physics, opening a vast world of which we have barely scratched the surface. This discovery fundamentally changed the way physicists perceived the world. In the past, physicists viewed mass as a "thing" with a definite location and velocity, like a car travelling west at 10 m/s. However, we now know that the concepts of location, velocity and even existence itself blur at the atomic and subatomic level. For reasons we do not understand, electrons exist everywhere at once, with the probability of them "existing" in certain areas being higher than in others. This fundamental difference in the way mass behaves at the macro and the micro scale still befuddles physicists today.

AFTERNOTE Max Planck didn’t actually know why the energy of electromagnetic waves was quantized. He derived this equation according to the graph of real data and through the assumption that energy carried by electromagnetic radiation was quantized. Similarly, he was able to guess that n is a very large integer, but was unable to identify its exact value. It was only explained later on when Albert Einstein discovered that light could have both wavelike and particle-like properties. This is because the energy that can be carried by a single photon (light particle) is fixed. The equation above actually calculates the energy within a light beam, which is why n is a large integer, as there are many photons in a light beam. The equation that calculates the energy of a single photon is Ephoton = hv. The radiation comes in discrete packets, because you cannot have half a photon of energy. Now you may be asking, why does Planck’s equation conclude that energy is quantized? Sure, h is a constant, but isn’t v (frequency) an infinitely variable quantity? If so, how can the energy of a photon have only certain values? Yes, the spectrum of “allowed” energies of a photon is continuous i.e. a photon can have any energy, but for a given frequency, the energy exchange can only take place in jumps of hv. For example, if object A’s initial energy is equal to hv and we increase its energy by giving it another photon of energy with equal frequencies, object A has EInitial + hv worth of energy. It is impossible for object A to have a final energy in between EInitial and EInitial + hv. This is ironic, as Lord Kelvin actually contributed to the expansion of physics as a science!




BIBLIOGRAPHY
• Khan Academy
• https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/06._Electronic_Structure_of_Atoms/6.2%3A_Quantized_Energy_and_Photons
• https://www.youtube.com/watch?v=GgD3Um_f0DQ
• https://www.physicsforums.com/threads/how-can-energy-be-quantized-with-e-hv.338298/
• https://www.youtube.com/watch?reload=9&v=pmM28gQZTXc
• https://en.wikipedia.org/wiki/Rayleigh–Jeans_law



The Tale of:

The Golden Ratio, Fibonacci Numbers and Lucas Numbers Helen Ng (Year 9, Gellhorn)

1 INTRODUCTION Do you ever just sit in a maths classroom and wonder who invented all these equations for us to remember off by heart? There must have been moments where you wondered who in their right mind would derive all these crazy equations just to entertain oneself - that must be insane. Unfortunately (or fortunately), there is no single man (or woman) to blame - nature has built itself with utmost consideration, and mathematicians have been lucky enough to catch a glimpse of the beautiful patterns, begotten not created.

2 GOLDEN RATIO

Unfortunately, this ratio is not actually made of gold, so it does not make us rich - I sure wish it did! However, it brings us a lot of valuable knowledge, and that is priceless.

2.1 HISTORY OF THE GOLDEN RATIO

It is unknown when the golden ratio was discovered, but it holds significance throughout ancient Egyptian and Greek architecture [1]. Phidias (500-432 BC) is believed to have been the first to apply phi, in the sculptures of the Parthenon. It is also suspected that the Egyptians used phi, Φ (as well as pi, π), in the building of the Great Pyramids (Figure 1). Euclid (365-300 BC), in his book "Elements", referred to the golden ratio as "dividing a line in the extreme and mean ratio", and this number is also linked to the construction of a pentagram (that doesn't mean we summon the demons with phi, folks).

Figure 1: The Great Pyramids of Giza (Source: Richard Nowitz/Getty Images)

2.2 APPLICATIONS OF THE GOLDEN RATIO

Nature never fails to provide mesmerising sights no matter where you are: patterns on a leaf, the eye of a storm, the arrangement of the centre of a sunflower... [2]. These aesthetics have inspired artists through the centuries to produce artwork that never ceases to amaze. The secret behind all of this? The golden ratio. In 1509, Luca Pacioli wrote a book that refers to the golden ratio as the "divine proportion". Leonardo da Vinci illustrated it, and later called the ratio the sectio aurea, meaning the "golden section". The golden ratio is also found in Baroque music, composed during the period spanning approximately 1600-1750 by composers such as J.S. Bach. Bach is known for his neat and extremely symmetrical pieces, and in his two-part inventions for keyboard the structure is especially prominent: take the number of measures in a piece and multiply it by 1/Φ (0.618), and the product is the position of the measure where the piece reaches its climax.



2.3 DERIVING THE GOLDEN RATIO MATHEMATICALLY In a golden rectangle (Figure 2), the sides are denoted by a and (a+b), a being the side length of the biggest possible square within the rectangle and the rectangle of dimension a+b by a (blue + pink area) is similar to the rectangle of dimension a by b (pink area).

Figure 2: A golden rectangle [3]

This similarity gives the equation:

(a + b)/a = a/b

This expression means any two numbers are in the golden ratio when they satisfy this equation for a > b. Now, if we label the ratio a/b with the Greek letter Φ, 'phi' (not to be confused with the Vietnamese dish pho!!), and rewrite the equation, we get:

Φ = 1 + 1/Φ

This is the property of the golden ratio (1.618...): the number is the same as its reciprocal plus 1. Now, back to the rectangles. By rearranging the formula above with simple algebra (cross-multiplying, then substituting a = Φb), we get two more equations:

a² - ab - b² = 0
b²(Φ² - Φ - 1) = 0

We know for a fact that b ≠ 0, because it is a geometric length; therefore, dividing through by b², we can infer that

Φ² - Φ - 1 = 0

For now, replace Φ with x in the equation for easier understanding while solving the equation.

x² - x - 1 = 0

By using the quadratic formula, the real roots (another word for "solutions") of the equation are:

x = (1 + √5)/2 ≈ 1.618 and x = (1 - √5)/2 ≈ -0.618
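A quick numerical check of these roots (a sketch, not part of the derivation itself):

```python
import math

phi = (1 + math.sqrt(5)) / 2   # positive root, ~1.618
psi = (1 - math.sqrt(5)) / 2   # negative root, ~-0.618

assert abs(phi**2 - phi - 1) < 1e-12     # both satisfy x^2 - x - 1 = 0
assert abs(psi**2 - psi - 1) < 1e-12
assert abs(phi - (1 + 1 / phi)) < 1e-12  # phi is its own reciprocal plus 1
```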



3 FIBONACCI NUMBERS

(Random fact: Leonardo Fibonacci helped a lot in spreading the Hindu-Arabic numeral system [base 10] in Europe!)

Following on from x² - x - 1 = 0, let the real solutions x = 1.618... and x = -0.618... be denoted α and β respectively, where α > β. Let's focus on α for now (as it is the golden ratio); since it is a root of the equation, we know that:

α² - α - 1 = 0 ⇒ α² = α + 1

We start to wonder what happens if we raise α to higher and higher powers.

3.1 INVESTIGATING POWERS OF THE ROOT α

α raised to the power of 3:

α³ = α² · α = (1 + α)α = α² + α = (α + 1) + α = 2α + 1

α raised to the power of 4

α⁴ = α³ · α = (2α + 1)α = 2α² + α = 2(α + 1) + α = 3α + 2

Brain Teaser 1: Try spotting the pattern! Try it out for yourself and fill in the table below. Hint: they are all integers (answers in section 7.1).

αⁿ = Aα + B

n         A      B
0         0      1
1         1      0
2         1      1
3         2      1
4         3      2
5         ____   ____
6         ____   ____
7         ____   ____
8         ____   ____
9         ____   ____
...       ...    ...
nth term  ____   ____

Extension: find the general term!
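If you would like to check your answers mechanically, the reduction α² = α + 1 can be iterated in a few lines of Python (a sketch; the function name is my own):

```python
def alpha_power(n):
    """Return (A, B) such that alpha**n = A*alpha + B, using alpha**2 = alpha + 1:
    multiplying by alpha sends A*alpha + B to (A + B)*alpha + A."""
    A, B = 0, 1   # alpha**0 = 1 = 0*alpha + 1
    for _ in range(n):
        A, B = A + B, A
    return A, B

for n in range(10):
    print(n, alpha_power(n))
```

Running this reproduces the filled-in table rows, e.g. n = 4 gives (3, 2).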



3.2 DEFINITION OF FIBONACCI NUMBERS After (hopefully not) peeking at the answers, you will find that both sequences A and B follow a pattern: a new term is generated by adding the two terms that come before. This “pattern” is exactly the property of the Fibonacci sequence. Furthermore, the sequence starts with two numbers, 0 and 1. Putting this into more mathematical terms, the sequence can be expressed as:

RECURRENCE RELATION (or RECURSION) (1) (see section 7.2)

Fn+1 = Fn + Fn-1 (for n ≥ 1)

with INITIAL CONDITION F0 = 0, F1 = 1

3.3 BINET'S FORMULA: EXPLICIT GENERAL TERM OF THE FIBONACCI SEQUENCE

Linking the definition of the Fibonacci numbers back to the sequence derived from the powers of the root α, we find that the general term of the sequence αⁿ is no more than:

αⁿ = Fnα + Fn-1

Remember our other friend, the root β? The same reasoning applies to it, giving the equation:

βⁿ = Fnβ + Fn-1

Now that we have two equations, eliminate Fn-1 (2) by subtracting one from the other, and we get:

αⁿ - βⁿ = Fn(α - β) ⇒ Fn = (αⁿ - βⁿ)/(α - β)

Substitute the numerical values of α and β (and note α - β = √5) to reach the final equation:

Fn = (1/√5) [((1 + √5)/2)ⁿ - ((1 - √5)/2)ⁿ]

*It is strongly recommended that you try out the steps yourself - who wouldn't love themselves some algebra!*

Jacques Philippe Marie Binet was a French mathematician who derived this closed-form formula for the general term of the Fibonacci sequence [4]. Interestingly, although the formula involves a lot of surds, the final answer will always be a non-negative integer.
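Binet's formula is easy to sanity-check against the recurrence itself (an illustrative sketch; the rounding mops up floating-point error):

```python
import math

def binet(n):
    """Closed-form Fibonacci number via Binet's formula."""
    alpha = (1 + math.sqrt(5)) / 2
    beta = (1 - math.sqrt(5)) / 2
    return round((alpha**n - beta**n) / math.sqrt(5))

fib = [0, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])

assert [binet(n) for n in range(20)] == fib   # the two definitions agree
```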



4 LUCAS NUMBERS Consider the roots of the quadratic at the very start of Section 3:

x² - x - 1 = 0, x = α or x = β

According to the formulas for the sum and product of the roots of a quadratic equation ax² + bx + c = 0, the sum of the roots is -b/a and the product is c/a, so we know that:

α + β = 1, αβ = -1

Now, as young, diligent and proactive mathematicians, of course we wonder what will happen when we, you know, mess with these equations. We already have α + β = 1; now, by raising α and β to the same power and adding the results, we get, for example:

α² + β² = (α + β)² - 2αβ = 1 - 2(-1) = 3

(By the way, solving these equations requires knowledge of the basic 'algebraic identities', which tell us how mathematical expressions relate to each other, e.g. a + 0 = a. Some really useful stuff; they will definitely help you in the long run with maths!)

Brain Teaser 2: Investigating Lucas numbers

n         αⁿ + βⁿ
0         2
1         1
2         3
3         4
4         _____
5         _____
6         _____
7         _____
8         _____
9         _____
...       ...
nth term  _____



Congratulations if you’ve made it this far without being tempted to peek at the answers! We can see that these numbers follow the same property as Fibonacci numbers: each term is the sum of the two previous terms. Using the recurrence relation from earlier on, we can express this as:

Ln+1 = Ln + Ln-1 (for n ≥ 1)

with INITIAL CONDITION L0 = 2, L1 = 1

This sequence is called the LUCAS NUMBERS, named after the French mathematician Édouard Lucas, who was famous for studying the Fibonacci sequence. I personally enjoy thinking of Lucas numbers as the dark twin of Fibonacci numbers - Lucas numbers are just as valuable, but don't get much attention. Is there an expression that denotes Lucas numbers with α and β? From the table, we can infer that Ln = αⁿ + βⁿ, and we know that α + β = 1, so for symmetry we can denote:

Ln = αⁿ + βⁿ

5 THEIR RELATIONSHIPS

5.1 FIBONACCI NUMBERS AND THE GOLDEN RATIO

The recursive relation of the Fibonacci sequence is Fn+1 = Fn + Fn-1. When we divide both sides by Fn, we get:

Fn+1/Fn = 1 + Fn-1/Fn

We assume that the ratio of consecutive Fibonacci numbers approaches a fixed numerical value as n approaches infinity (3). Now, define this limiting ratio as a new variable k and rewrite the equation as:

k = 1 + 1/k

When a number is equal to its reciprocal plus 1, it is in the golden ratio. Therefore, this final expression tells us that as the Fibonacci sequence goes on to the nth term, the ratio between two consecutive numbers approaches the golden ratio Φ. Note that this statement does not just apply to the Fibonacci sequence: any sequence which generates terms from the sum of the previous two terms also fulfils it. Therefore, consecutive Lucas numbers are in the golden ratio too! [5]
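The claim that any 'add the previous two' sequence drifts toward Φ can be watched numerically (a sketch, with my own function name):

```python
PHI = (1 + 5 ** 0.5) / 2

def consecutive_ratio(a, b, steps=40):
    """Run t(n+1) = t(n) + t(n-1) from seeds a, b and return the final ratio."""
    for _ in range(steps):
        a, b = b, a + b
    return b / a

assert abs(consecutive_ratio(0, 1) - PHI) < 1e-9   # Fibonacci seeds
assert abs(consecutive_ratio(2, 1) - PHI) < 1e-9   # Lucas seeds
assert abs(consecutive_ratio(4, 9) - PHI) < 1e-9   # arbitrary seeds work too
```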



Another way that Fibonacci numbers are connected to the golden ratio can be neatly represented by Figure 3, which shows a 'golden spiral' encased in a golden rectangle [6]:

Figure 3: A golden spiral encased in a golden rectangle [6]

Surely we have all seen this spiral at some point in our lives, in Mona Lisa conspiracy videos, but what exactly does it mean? In this golden rectangle (yes, remember our friend from the start?), there are squares with side lengths equal to Fibonacci numbers, and quarter arcs are drawn within them. No matter how many more squares we add to the rectangle, the base and height of the rectangle will always be two consecutive Fibonacci numbers, therefore we can deduce that:

Area of golden rectangle = FnFn+1

Thinking from another perspective, the area of the rectangle is no more than the areas of all the little squares, whose side lengths are Fibonacci numbers, added up:

Area of the little squares = F1² + F2² + ... + Fn²

Simultaneously, equating the two expressions for the area:

F1² + F2² + ... + Fn² = FnFn+1

This diagram of a rectangle proves that the sum of the squared Fibonacci numbers up to the nth term is just the nth term itself multiplied by the term that comes after.

5.2 FIBONACCI AND LUCAS NUMBERS

As mentioned before, the Lucas numbers are a lot like Fibonacci numbers - so are they, in any way, connected? The answer is yes, and in many ways. Below are two of the ways they are connected (because, as a Y9 student, I am still incapable of comprehending lots of great works by great mathematicians. However, if you think I have sparked your passion to become a young, diligent and proactive mathematician, feel free to look up more ways in which they are connected!!)



1. Consider the general terms of the Fibonacci and Lucas sequences:

Fn = (αⁿ - βⁿ)/(α - β) and Ln = αⁿ + βⁿ

Multiplying them together:

FnLn = (αⁿ - βⁿ)(αⁿ + βⁿ)/(α - β)

The numerator is a difference of two squares, so expanding it we get:

FnLn = (α²ⁿ - β²ⁿ)/(α - β) = F2n

When Fibonacci and Lucas numbers of the same position are multiplied, the product is the Fibonacci number at twice the position; e.g. F4L4 = 3 · 7 = 21 = F8.

2. Observe the table below:

n    0   1   2   3   4   5   6   7   8   9
Fn   0   1   1   2   3   5   8   13  21  34
Ln   2   1   3   4   7   11  18  29  47  76
These tables reveal a pattern that runs throughout the sequences: each Lucas number is the sum of the two Fibonacci numbers on either side of its position (one term before and one term after). This can be expressed as:

Ln = Fn-1 + Fn+1
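All the relationships in Section 5 can be verified together in a few lines of Python (a sketch, not from the article; the function names are my own):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 12):
    # golden rectangle identity: F_1^2 + ... + F_n^2 = F_n * F_(n+1)
    assert sum(fib(k) ** 2 for k in range(1, n + 1)) == fib(n) * fib(n + 1)
    # same-position product gives the double-position Fibonacci number
    assert fib(n) * lucas(n) == fib(2 * n)
    # each Lucas number is the sum of its Fibonacci neighbours
    assert lucas(n) == fib(n - 1) + fib(n + 1)
```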

6 CONCLUSION That wraps it up for the golden ratio, Fibonacci and Lucas numbers! As unconvincing as this sounds, coming from a big nerd that has taken her time to write such a long article about a sequence of numbers adding on top of each other, maths really makes us think about how we take day to day objects in nature for granted. Who knew the arrangement of flower petals (Figure 4) could lead to such elaborate calculations? I hope this article has inspired you to take more time to slow down and observe the world around us - sometimes just an extra glimpse can lead to magnificent discoveries.

Figure 4: The petals of a rose and the golden spiral (Source: https://www.juevesfilosofico.com/the-golden-ratio-2)



7 SOLUTIONS + NOTATIONS APPEARING IN THIS ARTICLE

7.1 SOLUTIONS

Brain Teaser 1 (αⁿ = Aα + B):

n         A     B
0         0     1
1         1     0
2         1     1
3         2     1
4         3     2
5         5     3
6         8     5
7         13    8
8         21    13
9         34    21
...       ...   ...
nth term  Fn    Fn-1 *

Brain Teaser 2:

n         αⁿ + βⁿ
0         2
1         1
2         3
3         4
4         7
5         11
6         18
7         29
8         47
9         76
...       ...
nth term  Ln

*There is not really an explicit general term for column B; however, it is notable that this column has exactly the same terms as column A, shifted down by one row (the n = 0 row being the exception, because anything to the power of 0 is 1), so the definition Fn-1 works for now.

7.2 NOTATIONS THAT APPEAR IN THIS ARTICLE

(1) A recurrence relation means that "one or more initial terms are given; each further term of the sequence or array is defined as a function of the preceding terms" [7]. In even simpler words, a term in a sequence is obtained by operating on the term(s) before it. One pertinent example is the factorial, defined by n! = n(n - 1)! (for n ≥ 1), with the initial condition 0! = 1. If you are interested, feel free to read through a very helpful article here [8].

(2) In order to solve for two unknowns, we usually need a minimum of two equations so we can eliminate at least one variable. Therefore, when the variable Fn-1 appears in two equations, we eliminate it by solving them simultaneously. There are a few methods of solving simultaneous equations, but the most common are elimination and substitution.



(3) When a variable approaches a number (usually very big or small), the notation x → y is used, meaning that the variable x is approaching the number y. For example, when we see n → ∞, we know that n is getting bigger, approaching infinity. A function follows to indicate what exactly happens as n approaches infinity. The equation lim (n → ∞) Fn+1/Fn = k, where k = 1 + 1/k, is saying that as n approaches infinity, the ratio k becomes 1 + its own reciprocal, which means it is in the golden ratio.

(4) Here, we introduce a notation called 'summation', written with the symbol Σ, which means adding up all the terms of a function up to a chosen point. The number below the Σ (e.g. the k = 1 in Σ from k = 1 to n) indicates the 'starting point', while the number/expression above it indicates the 'stopping point' (e.g. the n). Putting it all together, Σ from k = 1 to n of Fk means adding from the 1st term to the nth term of the function.

BIBLIOGRAPHY
[1] Meisner, Gary. "History of the Golden Ratio." The Golden Ratio: Phi, 1.618, 26 Aug. 2017, https://www.goldennumber.net/golden-ratio-history/.
[2] Hegde, Pratik. "Golden Ratio: What It Is And Why Should You Use It In Design." Medium, Prototypr, 4 Jan. 2018, https://blog.prototypr.io/golden-ratio-what-it-is-and-why-should-you-use-it-indesign-7c3f43bcf98.
[3] "Relationship of Sides in a Golden Rectangle." Wikipedia, 29 June 2011, https://en.wikipedia.org/wiki/Golden_ratio#/media/File:SimilarGoldenRectangles.svg.
[4] "Jacques Philippe Marie Binet." Wikipedia, Wikimedia Foundation, 22 Sept. 2019, https://en.wikipedia.org/wiki/Jacques_Philippe_Marie_Binet.
[5] Chasnov, Jeffrey R. Hong Kong, https://www.math.ust.hk/~machas/fibonacci.pdf.
[6] "Fibonacci spiral." Wikimedia, 3 May 2017, https://en.wikipedia.org/wiki/Golden_spiral#/media/File:FibonacciSpiral.svg.
[7] "Recurrence Relation." Wikipedia, Wikimedia Foundation, 26 Oct. 2019, https://en.wikipedia.org/wiki/Recurrence_relation#Definition.
[8] "Recurrence Relations." math.ust.hk, https://www.math.ust.hk/~mabfchen/Math2343/Recurrence.pdf.



'SOLAR SAILS'

Joy Chen (Year 9, Gellhorn)


The Atom Through Time Pierce Duffy (Year 13, Sun)

Democritus of Abdera was a Greek philosopher known as the "laughing philosopher" due to his constant cheerfulness and his belief that the main goal of life should be happiness. He lived in the 4th century BC and, without conducting any experiments, Democritus and his teacher Leucippus were able to theorise the 'atomic theory' with nothing but deduction and observational skills. He proposed that everything in the world was made up of what he called 'atomos', a Greek word meaning uncuttable or indivisible; the word 'atom' that is ubiquitous today stems from Democritus. In his theory, atoms were the smallest things in the universe and had existed forever. They are not alive, cannot be destroyed, are constantly moving, are infinite in number, and can bind to other atoms. Substances differ according to the shape, size and structure of their atoms. Although some of what Democritus theorised was incorrect, a surprisingly large amount was actually correct. What is more remarkable is the fact that he was theorising all this in the 4th century BC, a time when the first crossbow was being invented, the first aqueduct built, and the leading scientific belief was that everything was made up of four elements: Earth, Fire, Water and Air! For almost 2,200 years this idea of atoms lay cold and untouched. That was until John Dalton, an English Quaker born in 1766, took an interest in the atomic theory. Fascinated by science from a young age, he was just 26 years old when he became a teacher of mathematics and natural philosophy at Manchester's New College, where he carried out research on his own atomic theory. His earlier work at the college on gases led him naturally to work more formally on the idea of atoms. In 1808 Dalton published his book, A New System of Chemical Philosophy. In it he borrowed Democritus' term 'atom', and introduced his idea that all atoms of an element are identical, but atoms of different elements have different atomic weights.
This simple idea is the foundation of all modern Chemistry, and it is why Dalton is often called the 'father of atomic theory'. In his book he shared his belief that atoms are indestructible and the smallest of particles. His work alone sparked a wave of interest in atoms, and marked the birth of modern Chemistry. J.J. Thomson was next to change the atomic scene forever, disproving both Dalton's and Democritus' belief that the atom was the smallest particle. In 1894 Thomson began his experiments with cathode ray tubes filled with gases at low pressures, with an anode and cathode at either end. From the experiments he was able to determine the charge-to-mass ratio of the particles in the rays. He found that this ratio implied a particle around 1,800 times lighter than a hydrogen atom, and that it did not change even when he used different gases. This effectively disproved the idea that the hydrogen atom was the smallest particle. It also led him to believe there was a particle even smaller than the atom that carries a negative charge, and so the 'corpuscle' was born, or what we today call the electron. However, this went against everything that was known about the atom at the time: how could there be anything smaller than the atom? Thomson proposed that these negatively charged electrons were distributed in a uniform sea of positive charge, in what would later be known as the 'plum pudding model' (Figure 1).

Figure 1: A depiction of J.J. Thomson’s ‘plum pudding’ model (Source: Wikimedia Commons)


Although this model was disproven a few years later, it was a massively important step in atomic theory, as it questioned the atom's position as the smallest particle in the universe. Thomson encountered difficulties in proving his 'plum pudding' model, in particular in demonstrating that atoms have a uniform positive charge. The Geiger-Marsden experiment, also known as the 'gold foil experiment', measured the scattering pattern of alpha particles (positively charged particles) fired at a thin sheet of gold. The two scientists expected all the alpha particles to pass through, since according to the 'plum pudding' model the atom's positive charge is spread out uniformly, and so would nowhere be concentrated enough to repel an alpha particle. The majority of the alpha particles did pass through as expected, but a very small fraction (around 1 in 20,000) were scattered at angles greater than 90º (Figure 2).

Figure 2: The Geiger-Marsden experiment (Source: Pearson) If the atom’s charge were uniformly distributed throughout the atom, then it would not be sufficient to cause the kind of repulsion that scattered the alpha particles to such a degree. In his 1911 paper, Rutherford presented his own model of the atom, referencing the unexpected results of the gold foil experiment. Rutherford described a very small (less than 1/3000th of the atomic diameter) and highly charged centre, which would later be known as the nucleus. It took one more scientist, James Chadwick, to prove that the nucleus of an atom consisted of two different particles: protons with a positive charge, and neutrons with no overall charge. At this point in time, scientists knew that the atom was made up of smaller subatomic particles (electrons, protons and neutrons), but what was still unknown was the exact arrangement of these inside the atom. The Rutherford model was a good start, but it was refined by Niels Bohr using subsequent experiments a few years later. He built upon the idea of a nucleus in the centre of the atom, and theorised that electrons orbit around this centre in a solar-system-like model. To this day, Bohr's model of the atom is taught in schools up until Year 11, and is what most people think of when someone says the word ‘atom’. Although Bohr’s model fixed problems with the Rutherford model, it had its own shortcomings, such as the assumption that electrons have a known radius and orbit. In an attempt to keep the model alive, adjustments were made to it, which proved to be even more problematic. In the end, it would be superseded by quantum theory. The model accepted today as the most accurate is called the electron cloud model. Heisenberg's 1927 uncertainty principle and Erwin Schrödinger’s 1926 work on the wave function and


electrons revealed that in the world of quantum mechanics, it is impossible to determine the exact position and momentum of an electron at any given time. With this information, the electron cloud model shows that electrons do not orbit the nucleus, but rather they move about in a region around the nucleus. You cannot predict the exact position of these electrons, only a cloud-like region where electrons are most likely to be found, hence the term ‘electron cloud’. Regions with the highest probabilities of an electron being present are known as electron orbitals, with each orbital having a different shape, amount of energy and number of electrons (Figure 3). This theory came about in the 1920s and has stood the test of time for over 90 years. Some might say that this must be it: the most accurate, most correct model of the atom.

Figure 3: A comparison between Bohr’s model of the atom and electron cloud model proposed after discoveries in quantum mechanics (Source: TracingCurves) However, I would remind them that for over 2000 years, most people believed that the world was made up of the 4 elements and did not accept Democritus' atomic theory. Perhaps in the future, the electron cloud model that we currently 'believe' in will be rendered inaccurate... only time will tell.




SECTION 2

APPLICATION OF SCIENCE


Role of Natural Gas in Energy Transition Diya Handa (Year 12, Anderson)

Natural gas is a fossil energy source found beneath the surface of the Earth, consisting of multiple compounds such as methane, carbon dioxide, water vapour and other natural gas liquids (NGLs). It is considered to be the cleanest fossil fuel. Natural gas is odourless and colourless. When burned, it releases carbon dioxide, water vapour and a minimal amount of nitrogen oxides.

HOW IS NATURAL GAS FORMED? Natural gas was formed from plants, animals and microorganisms that lived millions of years ago; it forms underground due to the intense conditions there. The organic matter from the decomposition of plants, animals and microorganisms builds up in layers of soil, sediment and rock. As organic matter decays and is buried deeper into the Earth’s crust, the temperature rises. These conditions of high compression and temperature cause the carbon bonds in the organic matter to break, releasing thermogenic methane: natural gas. Methane (CH4) is Earth’s most abundant organic compound and is made up of hydrogen and carbon. It does not necessarily have to be formed underground; it can also be produced by microscopic organisms called methanogens. Methanogens are present in the intestines of mammals and in low-oxygen areas near the surface of the Earth. The process by which methanogens create natural gas is known as methanogenesis. Most biogenic methane escapes as gas rises through permeable matter and dissipates into the atmosphere; however, new technology is being developed to minimise this, as biogenic methane contributes to the global carbon ‘pool’ of supply. Most thermogenic methane rises towards the surface until it encounters geological formations, typically sedimentary basins, which are too impermeable for the gas to escape and are therefore prone to trapping large amounts of natural gas. To access this natural gas, holes are drilled through the rock to allow the gas to escape and be harvested as an energy source. These basins can be found worldwide, for example in the deserts of Saudi Arabia, in Venezuela and in the Arctic.

TYPES OF NATURAL GAS There are two main categories of natural gas: 'conventional' and 'unconventional' gas. Conventional gas is easily accessible and economically viable to extract. Unconventional gas, on the other hand, is found in places where it is neither convenient nor practical to perform an extraction. Fortunately, technological advancements have made such extractions possible. Conventional gas extraction can be done using standard methods which are convenient and inexpensive, as it does not require specialised extraction techniques or equipment. It is found in naturally porous reservoirs capped by an impermeable rock stratum which can be easily unblocked. Deep natural gas is an unconventional gas located 15,000 ft below the Earth’s surface. This natural gas is trapped in layers under shale, an insoluble fine-grained sedimentary rock. Hydraulic fracturing or horizontal drilling can be carried out to extract the gas. Hydraulic fracturing involves splitting open a rock with a high-pressure stream of water and then propping it open with grains of sand, glass or silica, allowing the gas to flow freely out of the well. Horizontal drilling is when the drilling runs parallel to the Earth’s surface, allowing access to the gas trapped between the rocks. Another unconventional deposit is tight gas. Tight gas is found underground in impermeable rock formations, making it very hard to extract. It requires expensive and complicated methods of extraction, such as hydraulic fracturing and acidising. Acidising is similar to hydraulic fracturing, but acid instead of water is injected into natural gas deposits, dissolving the rock and thus allowing the gas to escape.



Figure 1: Hydraulic Fracturing Diagram Coalbed methane is also found underground, near coal deposits. Historically, when coal was mined, natural gas was unintentionally freed from the mines. Collecting this methane is now popular; coal seams are complicated to mine, but they contain large amounts of natural gas. Another interesting unconventional source is gas from geopressurised zones formed 10,000-25,000 ft below the Earth’s surface, where layers of gas form on top of a porous material such as sand. Furthermore, a newfound source of natural gas, found in ocean sediments and in permafrost areas of the Arctic, is methane hydrates. These form under high pressure and low temperatures. In ocean sediments, methane hydrates form on the continental slope as microorganisms sink to the ocean floor and decompose in the silt. In permafrost ecosystems, they form as bodies of water freeze and water molecules trap individual methane molecules. There is a lot of energy stored in methane hydrates; however, they are fragile geological formations, meaning that they must be extracted with extreme care.

Figure 2: Conventional and unconventional gas formations in NSW


Natural gas is domestically abundant and offers multiple environmental benefits over other energy sources such as coal. It is considered one of the cleanest fossil fuels as it emits relatively few harmful chemicals into the atmosphere. It can potentially mitigate environmental issues such as greenhouse gas emissions, smog, poor air quality and acid rain.

ENVIRONMENTAL IMPACT OF GREENHOUSE GASES Greenhouse gases are those that absorb and emit infrared radiation in the wavelength range emitted by Earth. They include water vapour, carbon dioxide, methane, nitrogen oxides (NOx) and engineered chemicals such as chlorofluorocarbons (CFCs). The most damaging greenhouse gas is carbon dioxide, and it is currently at its highest level recorded.

Figure 3: Carbon dioxide emissions by each country These gases trap heat in our atmosphere (the greenhouse effect), helping to maintain a habitable climate for us; however, their concentrations are now imbalanced, and they threaten life on Earth. Greenhouse gases are linked to climate change, which has resulted in rising temperatures, extreme weather conditions, rising sea levels, a decrease in wildlife populations and the destruction of habitats. Furthermore, greenhouse gases can cause respiratory diseases through smog and air pollution. Smog is formed by a chemical reaction between carbon monoxide, nitrogen oxides and heat from sunlight, and alongside poor air quality it can lead to respiratory problems, both temporary and permanent. However, natural gas does not contribute significantly to the formation of smog, as it does not release copious amounts of nitrogen oxides; thus, switching to natural gas can effectively reduce smog and improve air quality. Acid rain is another major problem resulting from these emissions. It damages crops, forests and wildlife habitats, and, much like smog, it can cause respiratory problems. Acid rain results from the formation of acidic compounds when sulfur dioxide and nitrogen oxides react with water vapour and other chemicals in the presence of heat from the sun. Natural gas emits virtually no sulfur dioxide and 80% fewer nitrogen oxides than the combustion of coal.



ADVANTAGES OF NATURAL GAS Using natural gas to generate electricity has significantly more advantages than many other means of energy generation. Firstly, it reduces greenhouse gas emissions, as it produces only minute amounts of NOx, CO2 and other particulate emissions, and virtually no SO2. It can be used as a replacement for fossil fuels such as coal and oil, which produce more harmful toxins. Coal-powered plants and industrial boilers reduce SO2 emissions through the use of scrubbers, which produce 'sludge', a semi-solid waste product; electricity generation using natural gas produces virtually no SO2, reducing the production of sludge and eliminating the need for scrubbers. Furthermore, natural gas can be reburned, i.e. injected into coal- or oil-fired boilers, reducing NOx emissions by 50-70% and SO2 emissions by 20-25%. Additionally, natural gas suits combined-cycle generation, in which the heat that would usually be wasted by a generation unit is captured and reused to generate more electricity. The benefits include increased energy efficiency, less fuel usage and fewer emissions: natural gas-fired combined-cycle generation units can be up to 60% efficient, whereas coal and oil generation units are only around 30-35% efficient. Moreover, natural gas can be used for fuel cells, an application currently in development for widespread future use. The idea is that fuel cells will use hydrogen to generate electricity, and hydrogen can be obtained in abundance from natural gas. This would theoretically lead to fewer emissions from the generation of electricity.
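The efficiency figures above imply a large difference in fuel use. A rough back-of-the-envelope calculation (illustrative only, using the article's approximate efficiencies; the function is invented for this sketch):

```python
# Fuel energy needed to deliver 1 MWh of electricity at a given efficiency.
def fuel_energy_needed(electricity_mwh, efficiency):
    """Energy input required: output divided by conversion efficiency."""
    return electricity_mwh / efficiency

gas = fuel_energy_needed(1.0, 0.60)    # combined-cycle gas, ~60% efficient
coal = fuel_energy_needed(1.0, 0.33)   # typical coal plant, ~30-35% efficient

print(round(gas, 2), round(coal, 2))   # 1.67 3.03 (MWh of fuel energy)
```

At these efficiencies, a coal plant burns nearly twice the fuel energy per unit of electricity, before even considering the cleaner combustion of gas.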

Figure 4: Hydrogen fuel cell diagram According to the Paris Agreement, the goal for each country is to restrict global warming to below 2°C. This agreement plays a critical role in addressing air quality problems and reducing carbon dioxide emissions globally due to the legal guidelines it puts in place. Presently, 197 countries (all nations) have signed this treaty, including the United States of America, India and China. Each country is responsible for reducing emissions and submitting national climate action plans, and governments agree to meet every five years to assess their progress and evaluate long-term goals. Under the EU’s 2030 climate and energy framework, countries should collectively aim to reduce greenhouse gas emissions by at least 40% compared to 1990.



A growing number of countries are transitioning to renewable energy and limiting their carbon footprint, with an emphasis on the coal-to-gas transition. From 2010 to 2019, more than 550 coal-fired power plants were suspended. In 2016, natural gas generators replaced coal as the primary suppliers of electricity in the United States of America; as a result, US CO2 emissions fell by 28% from 2005 levels. One of the most popular policy solutions to climate change is decarbonisation, i.e. shifting towards lower-carbon power sources. Industry has been responsible for over 65% of China's energy consumption and over 70% of its carbon emissions, making it the most significant contributor to China's energy use and CO2 emissions, but industrial energy consumption has greatly reduced in recent years as industry implements energy-efficient and low-carbon development. In 2017, China's energy consumption declined by 4.6%.

Figure 5: Sector energy savings in China India is the third-largest emitter of carbon emissions, after China and the United States of America, and is projected to overtake the USA by 2030 at its current rate. India’s CO2 emissions have increased by 132%, whereas those of other countries increased by an average of 40%. India relies heavily on coal for electricity, and this demand for energy is anticipated to increase: current government policies modelled by the International Energy Agency predict that energy consumption will double in the future. It is projected that by 2040 transport-sector carbon emissions will triple, and other industries, such as construction, are also projected to increase their carbon emissions.

Figure 6: India’s energy consumption by fuel and sector


CONCLUSION Major countries such as China and the United States of America are only a few of the countries that have made some progress in reducing greenhouse emissions. Nevertheless, stronger actions and policies need to be put in place to allow for a more effective transition. India is predicted to be the largest carbon emitter by 2040: because it is still developing, the country has not yet made major efforts to reduce carbon emissions. Monopolies, businesses and developing countries are only a few of the hindrances that can affect the decarbonisation goals set by the Paris Agreement. Switching to natural gas will aid countries in meeting these targets, and more collaboration amongst governments and industry leaders will allow for a more successful transition. This pathway to decarbonisation will lead to a more sustainable future.





'NATURAL GAS' Jasmine Hui (Year 12, Wu)


Effects of Climate Change on Plant Growth Dylan Sharma (Year 10, Churchill)

This article will present scientific evidence for the way plants adapt to climate change and research on the ways plants develop in order to sustain themselves whilst experiencing climate change.

EFFECTS ON PLANTS Climate change is having a significant effect on the growth of plants. The number of days per year in which a plant has the right conditions to grow could decrease by as much as 11% within the next 80 years due to climate change [1]. This would greatly affect food production, as well as the many farmers and low-paid workers who depend on the crops for food. The main cause of this decrease would be the large increase in CO2 levels alongside the increase in temperature, as higher temperatures increase the amount of water vapour in the air, which amplifies the greenhouse effect. The Earth’s average temperature has increased by 0.8°C in the last 200 years [4], but has only started rapidly increasing in recent times due to new technology and infrastructure which increase greenhouse gas emissions. Emissions come from natural combustion processes such as respiration and decomposition, and man-made processes such as the burning of fossil fuels and deforestation. Deforestation also means less photosynthesis, which takes in carbon dioxide from the atmosphere and releases oxygen, resulting in more carbon dioxide remaining in the atmosphere. Research has shown [1] that if the temperature of the Earth’s atmosphere increases, then the number of freezing days will decrease. Freezing days are crucial for the plant, as they are normally the days on which plants don’t grow due to lack of heat. A plant needs an optimum temperature for its enzymes to work: if it is too cold the enzymes won’t function well, and if it is too hot the enzymes denature, meaning the enzymes have to be at the right temperature in order to function well. If freezing days decrease by 7% then plant growth will increase [1]; however, too great an increase in temperature would mean that the water available to the plant would also decrease. This is because there would be more energy to turn water into water vapour, so water on the ground would be less consistently available. As water is one of the key reactants of photosynthesis, it is extremely important for providing the energy which drives the growth of the plant. Overall, temperatures above or below the optimum are bad for plant growth because the enzymes do not work efficiently; moreover, too high an increase in temperature can also lead to a lack of water supply through evaporation, an effect even greater than the denaturing of the enzymes. Another effect of climate change is its effect on forests throughout the world. Temperature increase reduces plant growth, as the water supply becomes minimal and the enzymes may denature; this can be seen in deserts such as the Sahara, where the only plants that survive, such as cacti, are those adapted to the scarce water supply. Forests are an essential provider of food and habitat to thousands of species of animals, which depend completely on the forest for food and a home. Increased temperatures would mean that the number of stable trees decreases substantially and many habitats are lost, resulting in declining populations of animals and birds essential to the food chain and our ecosystem. In order to cope with climate change, plants have to adapt to the change in CO2 levels in the atmosphere. However, research has shown that older plants find it harder to adapt to climate change and are therefore highly affected by the increase in temperature and CO2 levels [1]. This is because older plants grow and flower much more slowly than younger plants, as they respire and carry out photosynthesis at a slower rate [1]. As well as this, older plants need more water to survive, and as rising temperatures reduce the water available, older plants struggle to adapt [1]. Essentially, this means that the older the plant, the more difficult it is to adapt, and therefore the more likely it is to die from climate change.



Figure 1: The effect of climate change on crops (Source: ScienceDaily)

ADAPTATIONS Plants can adapt to an increase in CO2 levels by adapting parts of the leaf’s gas-exchange system to limit the CO2 entering. Studies show that plants in areas of high carbon dioxide concentration have fewer stomata [2]. Stomata are tiny openings, located on the lower epidermis of the leaf, that allow gas exchange. This adaptation is used to control the carbon dioxide used for photosynthesis. As photosynthesis is extremely crucial to the plant, the stomata in the leaf play an extremely important role: their main purpose is gas exchange, so they need to control the gas going in and out of the leaf, and hence the amount of CO2. The stomata also ensure that carbon dioxide, when highly concentrated, isn’t all used up at once or overused in the plant. Stomata are also important for regulating water loss, as they close during the night to stop water from being released; this helps the plant retain water when CO2 levels are high, reducing the chance of water becoming a limiting factor. Plants have also adapted to the increase in heat caused by rising atmospheric CO2, which interrupts constant water sources. Plants contain an enzyme called rubisco. Rubisco normally fixes CO2; however, when the temperature is high it reacts with O2 instead [2]. Water is a reactant in photosynthesis and so provides energy for the plant; therefore, if there is little water, there will be little energy. Plants have adapted to this by locating rubisco inside tissue where it is protected from the heat and the enzyme can continue to fix CO2. In one study, a team of scientists grew several different crops in a CO2-rich environment to see the effect [2]. The high-CO2 environment reduced the amount of zinc, iron and protein in the crops, all of which are essential to the growth of the crop. This clearly showed the negative impact of excess CO2 on the crops, which can significantly harm not only the crops but also the animals eating them, as they could lack the nutrition necessary for their health. Crops contain minerals such as calcium, which is beneficial to bone strength, helps with blood clotting, and supports our heartbeat through its role in the contraction of the heart to maintain blood flow. Plants also contain magnesium, which helps with regulating muscle and nerve function. Seasonal plants can adapt quickly to climate change, because these plants are used to rapid changes in the environment due to seasonal change. A study was conducted by a scientist called Steven Franks in the Syrian desert to observe the effects a dry spell has on the way plants go about their flowering process [3]. The scientists dissected the plants’ seeds in order to observe the changes that had taken place for the crops to survive, and found a rapid shift in the time of flowering: the plant had changed its flowering time by over a week, to coincide with a short wet spell, in order to sustain healthy flowering. As the plant originated from a California marsh and was used to a constant supply of water, this was particularly impressive, and it emphasises the extreme adaptations plants have to make to survive climate change. It meant the plant could grow effectively with enough water to gain the energy to sustain itself, as well as using its potassium, phosphorus and nitrogen (the three essentials for flowering) efficiently.


THE IMPACT OF COVID-19 Due to the recent COVID-19 outbreak, people around the world have been forced into quarantine in order to stop the spread of this deadly virus; however, COVID-19 has had a positive impact on the environment. The worldwide quarantining has led to a reduction in carbon dioxide emissions, as fewer factories and workplaces are active, meaning less workplace-related combustion is taking place that would otherwise have released huge amounts of carbon dioxide. This carbon dioxide would have affected the plants, as plants are not able to respire well with the accompanying loss of oxygen. Furthermore, increased amounts of carbon dioxide lead to a reduction in the zinc, iron and protein that are extremely important to the growth and development of the plant. Therefore, ironically, the COVID-19 outbreak has been helpful to plant growth and health.

CONCLUSION In conclusion, plants are heavily affected by climate change through temperature increase and the rise in carbon dioxide emissions. However, plants have very effective ways of adapting to climate change, such as reducing open stomata and changing flowering time to help their growth. These adaptations show that climate change affects plants, but that they can survive through adaptation.

BIBLIOGRAPHY
1. Worland, Justin. "The Weird Effect Climate Change Will Have On Plant Growth". Time, 11 June 2015. https://time.com/3916200/climate-change-plant-growth/
2. Sarzynski, Thuận. "Plant adaptation under a rising CO2 level (feat Climate Change)". Medium, 2 March 2019. https://medium.com/@thunsarzynski/plant-adaptations-under-a-rising-co2-level-feat-climate-change-8f5accd1f213
3. Biello, David. "Many Plants Can Adapt when Climate Goes against the Grain". Scientific American, 9 January 2007. https://www.scientificamerican.com/article/many-plants-can-adapt-whe/
4. Larsen, Esben. "Is it getting too hot, or what?" 2020. https://www.theworldcounts.com/stories/Temperature-Change-Over-the-Last-100-Years
5. Pareja Jauregui. "What effect does COVID-19 have on climate change?" 2020. https://www.oneyoungworld.com/blog/what-effect-covid-19-climate-change



Cut And Paste Genes Callum Sharma (Year 11, Churchill)

This article will present methods used by scientists to find cures for diseases through genetic editing using CRISPR. CRISPR technology is a simple yet powerful tool for editing genomes. It allows researchers to easily alter DNA sequences and modify gene function. Its many potential applications include correcting genetic defects, treating and preventing the spread of diseases and improving crops. However, its promise also raises ethical concerns.

1 BACKGROUND AND PROCEDURE
The genomes of organisms encode a series of messages and instructions within their DNA sequences. Genome editing involves changing those sequences, and thereby changing the messages. CRISPR is a genetic engineering technique that targets specific genetic sequences in order to edit DNA; for example, it could be used to treat diseases such as HIV. When the target DNA is found, Cas9 – an enzyme that is part of the CRISPR system – binds to the DNA and cuts it, shutting the targeted gene off. Cas9 acts like a pair of molecular scissors, capable of cutting strands of DNA [1].

CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats: a series of short, clustered, repeating DNA sequences with 'spacers' sitting between them, each of which reads the same forwards and backwards on complementary strands, like a palindrome. These are the sequences that are used as a guide by the Cas9 enzyme. CRISPR technology was originally developed from the natural defence mechanisms of bacteria and archaea, which use CRISPR together with RNA and various Cas proteins, including Cas9, to fend off attacks by viruses and other foreign bodies [8].

The Cas9 system involves two RNA molecules which guide the Cas9 protein to the targeted site, where it makes its cut, severing both strands of the DNA double helix. In bacteria, once Cas9 has been guided to the CRISPR sequence, new 'spacers' can be inserted; these spacers are taken from viruses that previously attacked the organism. They serve as a bank of memories, which enables the bacteria to recognise those viruses and fight off future attacks [1].

Once DNA is cut, the cell's natural repair mechanisms introduce changes to the genome, and this can happen in two different ways. The first is to join the two cut ends back together. This method, known as non-homologous end joining, can introduce flaws: nucleotides can accidentally be inserted, creating mutations that could affect a gene.
The second method fixes the break by filling in the gap with nucleotides. To do this, the cell uses a short strand of DNA as a template. Scientists can supply a DNA template of their choosing to fix a mutation or to change a gene [8].
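The guide-matching step described above can be illustrated with a toy search: scan the DNA for a 20-nucleotide match to the guide sequence that is immediately followed by an "NGG" recognition motif (the PAM requirement of the commonly used Streptococcus pyogenes Cas9). The sequences below are invented for illustration; a real search would also scan the complementary strand and tolerate some mismatches.

```python
# Toy sketch of Cas9 target-site selection: report every position where a
# 20-nt guide sequence matches the DNA and is immediately followed by an
# "NGG" PAM motif ("N" = any base). Illustrative only.
def find_target_sites(genome: str, guide: str) -> list:
    sites = []
    # Stop early enough that the 3-base PAM always fits after the guide.
    for i in range(len(genome) - len(guide) - 2):
        candidate = genome[i:i + len(guide)]
        pam = genome[i + len(guide):i + len(guide) + 3]
        if candidate == guide and pam[1:] == "GG":
            sites.append(i)
    return sites

genome = "TTACGATCGATCGATCGATCGATGGAATT"  # invented sequence
guide = "ACGATCGATCGATCGATCGA"            # invented 20-nt guide
print(find_target_sites(genome, guide))   # → [2]: a match next to the "TGG" PAM
```

In the cell, a cut at such a site would then be repaired by one of the two pathways described above, which is where the actual edit is introduced.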

2 SCIENTIFIC HISTORY
The discovery of clustered DNA repeats was made by Yoshizumi Ishino at Osaka University in 1987 [1]. He discovered them accidentally, by cloning a CRISPR sequence along with another gene that was his original target. The repeats were an unexpected finding, because repeated sequences are usually consecutive rather than, as in this case, clustered.

The CRISPR-Cas9 system was developed into an editing tool by Jennifer Doudna, a professor at the University of California, Berkeley, and Emmanuelle Charpentier. They used CRISPR to direct Cas9, which could locate and cut the DNA specified by a guide RNA; fusing two RNA molecules created a single guide RNA molecule, and by manipulating the nucleotide sequence of this RNA they could program Cas9 [2].

In 2013, Feng Zhang, of the Broad Institute of MIT and Harvard, became the first scientist to adapt CRISPR-Cas9 for editing eukaryotic cells. Zhang's lab focused on synthetic biology, and he had a central role in the development of CRISPR technologies. In 2011, Dr Zhang had started using the CRISPR system


Figure 1: CRISPR-Cas9 Gene Editing (Source: https://www.globalbiotechinsights.com/articles/10802/looking-beyond-the-debate-of-who-owns-crispr-gene-editing-technology)

on human cells, building on prior studies by the Sylvain Moineau lab. Using Doudna and Charpentier's work, Zhang's group was able to program Cas9 to function effectively in human cells, and Zhang showed that it could be used to target several locations in the genome at the same time. Moreover, he showed that CRISPR-Cas9 could be used to correct blood disorders such as excessive clotting or its opposite, haemophilia [6].

In November 2018, a Chinese researcher, He Jiankui, altered embryos during fertility treatments, using CRISPR-Cas9 to delete receptors on the white blood cells of twin sisters. By doing this he enhanced the cells' resistance to HIV. He claimed that he had altered a gene called CCR5, which allows the AIDS-causing virus to infect an important class of cells in the human immune system. He Jiankui told The Associated Press that he carried out his experiment to protect the twin sisters from HIV infection later in life, as their father was HIV-positive [3].

3 ETHICS
A major safety issue with He Jiankui's experiment is its potential additional effects: by using CRISPR, he might have caused unintended, harmful mutations elsewhere in the genome. To check that this had not happened, his team sequenced the entire genomes of both parents. They then removed 3 to 5 cells from each of the edited embryos before implantation in the mother and fully sequenced them, looking for unwanted mutations.

A further concern when using CRISPR on embryos is something called mosaicism: if the fertilised eggs started dividing before the gene editing took place, the twin girls might carry a mixture of cells with and without the edit [5]. Mosaicism is an issue because if the offspring's immune cells developed from non-edited cells, they would still be vulnerable to HIV [5]. Tests on the placenta and on umbilical cord blood and tissue found exactly the same mutations in every sample for both twins, showing that mosaicism had not occurred.


Figure 2: He Jiankui defending his stance on genetic editing

(Source: https://www.theguardian.com/science/2018/nov/28/scientist-in-china-defends-human-embryo-gene-editing)

A third issue is that disabling the CCR5 gene does not provide complete protection against HIV, and the broader consequences of knocking out this gene – which is involved in immune function – are unclear: the twins are therefore not completely protected. He Jiankui was criticised for experimenting when the risks to otherwise healthy children were unclear, and for acting against Chinese law. There was also anger because HIV can be treated, and there was barely any risk of it being passed from the HIV-positive father to his children. The Chinese authorities investigated and concluded that Professor He had acted illegally in pursuit of fame and fortune. Professor He has always defended his experiments, and at a summit in Hong Kong said he was "proud" of his gene-editing work.

The Core Guiding Principles for Genome Editing in Human Embryos, which He Jiankui drew up and abided by during his experiment, are as follows:
• Mercy for the families who need the editing. A broken gene, infertility, or a preventable disease should not stop life. For a few families, early gene surgery may be the only way to heal disease and save a child from suffering.
• Only to be used to cure serious diseases, not for vanity. Gene surgery is a serious medical procedure that should never be used for aesthetics.
• To respect a child's autonomy. After gene surgery, a child has equal rights. No obligations exist to his or her parents or any organization, including paying for the procedure.
• Genes do not define you. Our DNA does not predetermine our purpose or what we could achieve. We flourish from our own hard work. Whatever our genes may be, we are equal in potential.
• Everyone deserves freedom from genetic disease. Wealth should not determine health. Organizations developing genetic cures have a moral obligation to serve families of every background [4].
Some scientists accept the use of genome editing, and tests and investigations on humans have been carried out in Europe, the US and Canada to see how this new technology could treat blood disorders such as anaemia. However, after He's tests on the twin daughters, genome editing became a highly controversial subject and caused concern in the scientific community. Scientists say that changes to an embryo's


genome could be passed down through the generations and potentially cause disease or have other negative effects. After He's experiment, a group of Chinese scientists stated: "We as biomedical researchers strongly oppose and condemn any attempts on editing human embryo genes without scrutiny on ethics and safety!" Even the scientists involved in the discovery of CRISPR opposed He's tests [9]. "This work is a break from the cautious and transparent approach of the global scientific community's application of Crispr-Cas9 for human germline editing," stated Jennifer Doudna. Her concern was that people might exploit genome editing to create 'designer babies', whose parents would choose their traits or characteristics, such as blond hair [9].

In conclusion, genetic editing could benefit the human race substantially. However, it also has its downsides, as it can result in mutations or even infant mortality. I believe He Jiankui's experiment was ethical and helpful for other genetic scientists, because it was done to spare the daughters suffering later in life; trying to prevent illness or unnecessary death is, I think, the only reason gene editing should be used. This is even more pertinent in the current situation, as COVID-19 spreads around the world: people with diabetes or cardiovascular disease are at severe risk, and genome editing could reduce the likelihood of a person carrying such conditions from birth.

GLOSSARY
Archaea: single-celled microorganisms, different from bacteria.
Nucleotide: molecules which are joined together in long chains to make DNA and RNA.
RNA: ribonucleic acid; a long, single-stranded chain of nucleotides that carries genetic information and is involved in making proteins.
Genome: the genetic material of an organism; the genome sequence of an individual is the complete list of the nucleotides that make up all of their chromosomes.
Spacer: short sequences of DNA that are interspersed among repeated sequences, and do not code for any genes.

BIBLIOGRAPHY
1. Vidyasagar, Aparna. "What Is CRISPR?" LiveScience, Purch, 21 Apr. 2018. www.livescience.com/58790-crispr-explained.html
2. Broad Institute. "Questions and Answers about CRISPR." 4 Aug. 2018. www.broadinstitute.org/what-broad/areas-focus/project-spotlight/questions-and-answers-about-crispr
3. Normile, Dennis, et al. "CRISPR Bombshell: Chinese Researcher Claims to Have Created Gene-Edited Twins." Science, 27 Nov. 2018. www.sciencemag.org/news/2018/11/crispr-bombshell-chinese-researcher-claims-have-created-gene-edited-twins
4. http://www.youtube.com/watch?v=MyNHpMoPkIg
5. Le Page, Michael. "CRISPR Babies: More Details on the Experiment That Shocked the World." New Scientist, 28 Nov. 2018. www.newscientist.com/article/2186911-crispr-babies-more-details-on-the-experiment-that-shocked-the-world/
6. Broad Institute. "CRISPR Timeline." 7 Dec. 2018. www.broadinstitute.org/what-broad/areas-focus/project-spotlight/crispr-timeline
7. Ramsey, Lydia. "A Scientist Who Genetically Edited Babies to Be HIV-Resistant Was Just Sentenced to 3 Years in Prison. Here's How He Did It and Why Scientists around the World Are Outraged." Business Insider, 30 Dec. 2019. www.businessinsider.com/he-jiankui-sentenced-to-3-years-in-prison-for-gene-editing-embryos-2019-12


Could Stem Cells be the Next Breakthrough in Medicine? Iris Cheung (Year 12, Gellhorn)

1 INTRODUCTION
Stem cell* therapy has become increasingly advanced and reliable in scientific research and regenerative medicine in recent years. Growing at a rate of 36% per year, the market will expand even more rapidly if a breakthrough treatment for a non-communicable or lifestyle-related disease occurs [1]. By promoting the repair of dysfunctional, injured or diseased cells and the rejuvenation of cells, could this cutting-edge therapy be a turning point in modern medicine, providing hope for untreatable diseases and creating other breakthroughs?

2 WHAT ARE STEM CELLS?
As the name suggests, stem cells are "stems", or sources, from which new, different cells can be made. They are undifferentiated cells which can keep dividing to give rise to other cell types, in a process known as specialisation; this is what makes them useful in regenerative medicine. There are four types of stem cells – totipotent, pluripotent, multipotent and unipotent – and sources of stem cells include embryonic tissue, adult tissue and the umbilical cord.

All humans start out as one cell: a zygote, or fertilised egg cell. The zygote divides by mitosis to form a blastocyst. Eventually, the cells begin to differentiate, each taking on one particular function in a part of the body. When the correct stimulus is given to an unspecialised cell, some genes are switched on and become active. Messenger RNA (mRNA) is made from those active genes only and moves out to the ribosomes, where it is read and the appropriate proteins are made, each performing a specific function [2].

Embryonic stem cells are derived from blastocysts, a stage of the pre-implantation embryo that has an inner cell mass. These cells are placed in a culture dish filled with culture medium. They are pluripotent because they are ultimately able to differentiate into every cell type in the organism.
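As an aside on the 36% annual growth figure quoted above: growth at that rate compounds quickly, which a short calculation makes concrete (the five-year horizon below is illustrative, not from the article):

```python
# Compound growth at 36% per year, the figure quoted for the stem cell
# therapy market. Starting market size is normalised to 1.
import math

rate = 0.36
doubling_time = math.log(2) / math.log(1 + rate)  # years for the market to double
size_after_5y = (1 + rate) ** 5                   # size after five years, relative to now

print(round(doubling_time, 2))  # 2.25 – roughly a doubling every 2¼ years
print(round(size_after_5y, 1))  # 4.7 – nearly five times today's size
```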
However, one of the problems with embryonic stem cells is the ethical restriction on their use in medical therapies [3].

Somatic or adult stem cells are undifferentiated cells found among the differentiated cells throughout the body after development. These stem cells enable the healing, growth, and replacement of cells that are lost each day; however, they have a restricted range of differentiation options. Among the many types are the following:

*Definitions of key words in bold can be found in the glossary at the end of this article



• Mesenchymal stem cells, which are present in many tissues. In bone marrow, these cells differentiate mainly into bone, cartilage, and fat cells. They are an exception among adult stem cells because they exhibit pluripotent properties and can specialise into the cells of any germ layer.
• Neural stem cells, which eventually form nerve cells as well as their supporting cells: oligodendrocytes and astrocytes.
• Haematopoietic stem cells, which form all kinds of blood cells: red cells, white cells, and platelets.
• Skin stem cells, which form, for example, the keratinocytes that make up the protective outer layer of skin [4].

Figure 1: The process of formation of embryonic stem cells [5]

3 A BREAKTHROUGH IN STEM CELL THERAPY – iPS CELLS
The astonishing turning point in stem cell therapy came in 2006, when the scientists Shinya Yamanaka and Kazutoshi Takahashi discovered that it was possible to reprogram adult mouse cells into pluripotent cells using genetic engineering techniques. When this proved successful, the procedure was repeated on human cells: genetically modified, harmless viruses were used to deliver four genes carrying specific transcription factors into the skin cells and synovial fluid of two individuals. This process produced stem cells without using or harming an embryo; the resulting induced pluripotent stem cells (iPS cells) were able to renew themselves, and there is no risk of rejection if an individual's own cells are used to produce their stem cells [6]. This created many new opportunities for the use of stem cells in medicine, as it overcame the


ethical concerns regarding the use of embryonic tissue as a source of stem cells. There are still underlying problems, such as the risk of iPS cells becoming cancerous [7]; overall, however, the differentiation abilities and freedom from ethical concerns of iPS cells make them attractive for present and future research.

4 STEM CELL USE IN MEDICINE
Many serious medical conditions result from the improper differentiation or division of cells. Stem cell therapy is currently used to treat conditions in which there is a loss, shortage or reduced functioning of certain cell types, including otherwise incurable diseases; examples include Parkinson's disease, multiple sclerosis, type 1 diabetes, and age-related macular degeneration [8]. Additionally, stem cell research could aid our understanding of stem cell physiology [9]. These benefits open up many new opportunities for treating incurable diseases, a big leap forward in medicine.

4.1 Therapy for Incurable Neurodegenerative Diseases and Damaged Nerves
Although there are no medical cures for neurodegenerative diseases, scientists have found potential for treating, or even curing, them with stem cells during experiments on mice: they managed to form dopamine neurons from mouse stem cells, so Parkinson's disease (a neurodegenerative disease) [10], for example, could potentially be treated or cured with stem cell therapy in the near future using similar techniques. There are also, as yet, no medical cures for damaged or destroyed nervous tissue in the brain and spine, as these nerves do not usually regrow. However, trials have been carried out in which neural stem cells were injected into the hippocampal area of the brain of mice and rats with damaged spines. Results have shown that the stem cells successfully grew into working adult nerve cells, and that damaged spinal cords partly rejoined [11].
This shows the possibility of human treatments using the same technique.

4.2 Type 1 Diabetes
Type 1 diabetes is an autoimmune disease in which the glucose-sensitive, insulin-secreting cells of the islets of Langerhans in the pancreas are destroyed and stop making insulin [12]. Rather than patients receiving regular treatments such as insulin injections, stem cell therapy may allow the pancreas cells to function properly again, restoring insulin production and thereby controlling blood glucose levels. To achieve this, researchers are currently trialling the cells in animals with diabetes to observe the outcome; if successful, similar techniques could be performed on humans.

4.3 Stem Cells in Pharmacological Testing
Stem cells can also be useful for testing new drugs before they enter the market, making sure they are sufficiently effective and safe. This can be done by testing drugs on specific differentiated cells derived from pluripotent cells and monitoring for undesirable effects. If undesirable effects appear, the drug formulas can be altered until they reach a sufficient


level of effectiveness and safety. In this way, drugs can enter the market without human trials, as trialling on humans can sometimes be risky and unethical [13].

4.4 Stem Cells – a Possible Alternative to Arthroplasty
Arthroplasty is a surgical procedure to restore the function of a joint. Osteoarthritis (OA) is the most common chronic joint condition. In a joint, the ends of the bones are covered with cartilage, but in OA this cartilage breaks down, causing the bones within the joint to rub together [14]. Treatment for OA is often arthroplasty. As an alternative, however, stem cell therapy can help to treat osteoarthritis or stop its onset. This is done by collecting adult multipotent stem cells from fat or bone marrow; these cells can then turn into cartilage, bone, muscle, tendon, ligament, or fat, depending on the type of tissue that surrounds them [15].

4.5 Tissue Banks
Tissue banks have become increasingly popular in recent years, particularly in obstetrics. The umbilical cord is known to be very rich in mesenchymal stem cells, which differentiate mainly into bone, cartilage, and fat cells; mesenchymal stem cells are an exception because they act as pluripotent stem cells and can specialise into the cells of any germ layer. Additionally, because the cord is cryopreserved right after birth, its stem cells can be successfully stored and later used in therapies that prevent future life-threatening diseases of the patient [16].

5 OBSTACLES IN THE FUTURE
Although stem cell therapy has thrived in the last decade, it is not yet fully mature, and there are still challenges and areas of concern to be overcome in the future. Some of these include:
1. More work needs to be done to fully understand the mechanisms by which stem cells function in animal models and trials, to make sure no mistakes are made when the process is repeated in humans.
2. The efficiency of stem cell-directed differentiation must be improved so that stem cell treatments are more reliable and trustworthy for a regular patient. Transplanting functional new organs made by stem cell therapy would require the creation of millions of working, biologically accurate, cooperating cells, so both efficiency and accuracy are needed.
3. Immunological rejection is another major barrier. The immune system may recognise transplanted cells as foreign bodies, triggering an immune reaction that results in transplant or cell rejection. This is something that needs to be overcome moving forwards [17].
4. One of the most obvious concerns with stem cell therapy is the ethical issue surrounding the use of embryos. There are fewer ethical issues associated with adult stem cells and induced pluripotent stem cells.



6 CONCLUSION
Though there are still obstacles to overcome, many decades of experiments have made the potential of stem cells unquestionable. The field makes immense advances each day, and the influence of stem cells on regenerative medicine is truly incredible. Currently untreatable neurodegenerative diseases may become treatable with stem cell therapy, developments are being made in therapeutic cloning involving stem cells, and there is great potential in induced pluripotent stem cells, cell rejuvenation and haematopoietic transplantation. With stem cell therapy and all its regenerative benefits, there is greater potential than ever before to better, and to prolong, human life. I believe stem cells really could be the next breakthrough in medicine.

Figure 2: Stem cell therapy research [18]

GLOSSARY
Stem cell: Stem cells are human cells that start off the same and are able to develop into many different cell types [19].
Totipotent: can give rise to any cell type found in an embryo as well as extra-embryonic cells (placenta) [20].
Pluripotent: can give rise to all cell types of the body (but not the placenta); capable of differentiation into any cell within the body, hence able to give rise to cells from any of the three major tissue lineages: ectoderm, mesoderm, and endoderm [21].


Multipotent: can develop into a limited number of cell types in a particular lineage [22].
Unipotent: characterised by the narrowest differentiation capabilities and a special property of dividing repeatedly; these cells are only able to form one cell type [23].
Adult stem cells (somatic stem cells): undifferentiated cells found among the normal differentiated cells in a tissue or organ that can differentiate when needed to produce any one of the major cell types found in that particular tissue or organ.
Embryonic stem cells: the undifferentiated cells of the early human embryo with the potential to develop into many different types of specialized cells [24].
Blastocyst: an early embryo consisting of a hollow ball of cells with an inner cell mass of pluripotent cells that will eventually form a new organism.

BIBLIOGRAPHY
[1] https://www.weforum.org/agenda/2020/01/how-will-stem-cells-impact-the-future-of-medicine/
[2] https://www.healthline.com/health/stem-cell-research
[3,4] https://stemcells.nih.gov/glossary.htm
[5] https://www.pinterest.com/pin/685110162046564907/
[6] https://stemcellres.biomedcentral.com/articles/10.1186/scrt37
[7] https://www.technologynetworks.com/cell-science/articles/cell-potency-totipotent-vs-pluripotent-vs-multipotent-stem-cells-303218
[8] https://stemcellres.biomedcentral.com/articles/10.1186/s13287-019-1165-5
[9] https://dev.biologists.org/content/140/12/2457
[10] https://www.hopkinsmedicine.org/stem_cell_research/safety_ethics/are_induced_pluripotent_stem_cells_safe_yet.html
[11,12] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4264671/
[13] https://www.medicalnewstoday.com/articles/324472
[14] https://www.omicsonline.org/open-access/stem-cell-approaches-for-treatment-of-neurodegenerative-diseases-2167-065X.1000126.php?aid=32555
[15] https://kidshealth.org/en/parents/other-diseases.html
[16] https://www.labiotech.eu/in-depth/animal-testing-stem-cells/
[17] https://www.healthline.com/health/osteoarthritis
[18] https://www.pinterest.com/pin/685110162046564907/
[18] https://myhealth.alberta.ca/Alberta/Pages/stem-cell-treatment-for-osteoarthritis.aspx
[19] https://www.mazecordblood.com/why-bank-cord-blood/what-is-cord-tissue/
[20,21] https://stemcellres.biomedcentral.com/articles/10.1186/s13287-019-1165-5#refCR107
[22] https://www.technologynetworks.com/cell-science/articles/cell-potency-totipotent-vs-pluripotent-vs-multipotent-stem-cells-303218
[23] https://stemcellres.biomedcentral.com/articles/10.1186/s13287-019-1165-5


'ROBOTICS & AI'

Callum Begbie (Year 6, Darwin)


Robotics and Artificial Intelligence: How Far Can and Should AI Take Us? Jett Li (Year 12, Peel)

1 INTRODUCTION
In recent times, arguably the most exciting (as well as controversial) field of study is that of Artificial Intelligence (AI) and Robotics. As our world becomes increasingly dependent on machinery and automation as solutions to many of the issues we face, researchers across the world have begun to search for more ways to integrate AI into our lives. The results of this research and experimentation are apparent: breakthroughs are being made everywhere, from medicine to economics. We have all seen and heard of the potential machines have in the medical field – how they are unmatched in efficiency and effectiveness and could possibly replace surgeons in the near future, as reported by Katrina Tse in the Scientific Harrovian (Issue IV, 2019). Similarly, many have seen the capabilities of machine learning in finance, especially the ability AI has to build strong stock portfolios and predict their growth to a reasonable degree of accuracy. The results we have seen have been staggering, and the only way is up. Even now, scientists are finding more ways to use AI in our daily lives, which raises the questions: How did we get to this point? Where do we go from here? Where do we draw the line? This article will explore these questions in depth, detailing the history of AI, the breakthroughs being made right now, and the ethical and moral issues that accompany the development of AI.

2 HISTORY AND DEVELOPMENT OF ARTIFICIAL INTELLIGENCE 2.1 BEGINNINGS OF RESEARCH The development of Artificial Intelligence began in the early 20th Century. Spurred on by fictional pieces of work that depicted fantastical forms of artificial beings and their uses, including the Wizard of Oz (1900), Metropolis (Figure 1), and even Frankenstein (1818), researchers began to explore different ways to create a sentient non-human being. This field of study truly began to take off in the 1950s, when an entire generation of scientists, philosophers and thinkers were exposed to widespread media coverage (both fictitious and real) on the possibilities of Artificial Intelligence.

Figure 1: A scene from the 1927 German expressionist sci-fi film, Metropolis (Source: lexpress.fr)


At this point in time, Alan Turing, the 'father of AI and theoretical computer science', theorised that machines could use reasoning and logic to solve problems much like humans do. He stated in his 1950 paper 'Computing Machinery and Intelligence' that if this were accomplished, it would be possible to build 'intelligent' machines and also to test their level of intelligence relative to each other and to humans. However, extensive research based on this idea was not immediately pursued, as computers were extremely expensive at the time and unobtainable for all but the most prestigious research facilities and universities [1].

Five years later, a proof of concept – evidence that this theory was feasible – was created by the researchers Allen Newell, Cliff Shaw, and Herbert Simon (Figure 2), through what is thought to be the first artificial intelligence programme in the world: Logic Theorist. The programme was created to mimic the problem-solving capabilities of a human being and was first presented to the world at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). Despite not making major scientific waves immediately, event attendees unanimously agreed that AI was a viable field of study and that the creation of a truly sentient artificial intelligence would be possible in the future. On top of this, Logic Theorist would later be used in several other fields, most notably helping to prove 38 of the 52 theorems in the renowned Principia Mathematica (Whitehead and Russell), and producing new and more elegant solutions for some of them, showing the viability and adaptability of AI. Logic Theorist then became the basis for the next two decades of AI-based research across the globe.

Figure 2: Herbert Simon (L) and Allen Newell (R), two of the developers of Logic Theorist (Source: https://www.computerhistory.org)
By 1974, breakthroughs in computer science had allowed researchers to create more intricate AI programmes, such as Newell and Simon's General Problem Solver (a follow-up to Logic Theorist) and Joseph Weizenbaum's ELIZA, one of the first computer programmes capable of processing natural language. Afterwards, government-funded projects such as the 1982 Fifth Generation Computer Project pushed the development of AI forwards, placing AI computing in the limelight and inspiring generations of programmers to help develop this technology further. In 1997, Deep Blue became the first AI programme to defeat a Grandmaster in chess, a benchmark for artificial intelligence development: a machine had finally triumphed over humans in logical thinking and decision making, something unthinkable just a few decades prior.

2.2 TYPES OF ARTIFICIAL INTELLIGENCE
Artificial Intelligence can be split into three smaller subsets: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). ANI deals with specific functions and is limited in scope; many programmes we use today fall within this category, as they are created to work in specific environments for specific purposes. AGI deals with several fields at once, and includes programmes that problem-solve and perform abstract thinking. Lastly, ASI transcends human intelligence and performs on a higher level across all fields; this is ultimately what researchers are working towards [2]. For the majority of AI development, researchers have been creating differing forms of ANI, slowly closing in on the goal of creating an AGI programme on par with normal human intelligence.
This is a goal that some researchers believe is close to being reached, with estimates that a fully functional AI with this level of intelligence could be completed by 2020.



AI can also be classified based on whether it is ‘strong’ or ‘weak’. Weak artificial intelligence programmes respond to inputs by identifying and matching a command to a task that they have already been programmed to perform. For example, asking a smart home programme to turn on the lights works because the programme recognises the key terms ‘lights’ and ‘on’, allowing it to associate what is being said with a command to turn on the lights. However, this also means that weak AI programmes do not truly understand what they are being told and require a baseline programme to associate commands with in order to function. On the other hand, strong AI programmes use what is known as clustering and association to process the data they are fed. Unlike weak AI programmes, strong AI does not need programmed responses to fit the inputs it receives; instead, it behaves more like a human brain and is capable of creating a suitable response by itself.

2.3 THE SCENE TODAY

Today, the most impactful subfield of AI research is ‘machine learning’, which is slowly being integrated into every aspect of our lives. Corporations like Amazon and Google, for example, are using machine learning and data-processing AI programmes to sort through massive data sets to match users with the best advertisements and recommendations in order to keep us entertained. Others are trying to branch out into different fields of study with the aim of improving our quality of life. Very soon we could see AI in the medical field, performing surgeries and prescribing medicine in place of human doctors and pharmacists. AI may also be able to educate children or act as an alternative to psychiatrists or psychologists in the near future, thanks to their enhanced cognitive abilities and adaptability to situations.
While this may seem similar to what human psychologists already offer, the fundamental difference lies in the ability of AI programmes to run through data sets too large for humans to manage, allowing them to make better, more informed choices when dealing with individual cases of psychological issues [3]. Besides improving the quality of human life, AI is also on track to improve several facets of the entertainment industry. AI programmes are becoming ever more important in the personalisation of a user’s experience on an entertainment platform and are key components of the targeted marketing and advertising campaigns run by Entertainment and Media (E&M) companies. Many corporations are even looking into the creation and implementation of machine learning programmes that can help develop and create advertisements and trailers for upcoming projects [4]. Other branches of practical AI research that may become usable in the near future include space exploration, automated audio post-production and self-driving vehicles, as described by Ayuka Kitaura in the Scientific Harrovian (Issue IV, 2019).
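The keyword matching that ‘weak AI’ assistants rely on, as described in section 2.2, can be sketched in a few lines of Python. This is a toy illustration only; the task names and phrases below are invented for the example and do not correspond to any real smart home product.

```python
# Toy sketch of 'weak AI' command matching: the programme only maps key
# terms to pre-programmed tasks and has no understanding of the words.
# The task names and phrases are invented for this example.
def match_command(utterance: str) -> str:
    rules = [
        ({"lights", "on"}, "turn_lights_on"),    # required key terms -> task
        ({"lights", "off"}, "turn_lights_off"),
        ({"temperature"}, "report_temperature"),
    ]
    words = set(utterance.lower().replace("?", "").split())
    for key_terms, task in rules:
        if key_terms <= words:   # fire only if ALL key terms are present
            return task
    return "unknown_command"     # no baseline rule to fall back on

print(match_command("Please turn the lights on"))  # turn_lights_on
print(match_command("What is the temperature?"))   # report_temperature
```

Anything outside the pre-programmed rules simply fails to match, which is exactly the limitation described above: the programme associates key terms with tasks but never understands the request.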

3 BREAKTHROUGHS AND NEW USES FOR ARTIFICIAL INTELLIGENCE

3.1 TREATING ASD WITH SOCIAL ROBOTS

Since the turn of the century, the frequency of children being diagnosed with ASD (Autism Spectrum Disorder) has shot up, particularly in countries such as the United States. Today, around 1 in 59 children in the United States have been identified as having ASD, according to the Centers for Disease Control and Prevention (CDC), compared to 1 in 150 just 20 years ago. With this explosive growth in individual cases of ASD, the demand for treatment and therapy has increased substantially. If the ASD ‘epidemic’ were similar to any other form of pathogenic outbreak, dealing with the influx of cases would be costly and difficult at first, but still ultimately manageable. However,
unlike treating diseases or physical ailments, treating a developmental disorder such as ASD is far more complex, due to the nuances of each and every case. Treating the symptoms of this disorder, which include a lack of body language, not understanding social cues, not recognising or giving eye contact, and a lack of variance in tone of voice, must be targeted and tailored to each person in need of treatment. For years, treatment has been delivered by therapists and doctors: human beings who could slowly adapt to and learn how to deal with their patients and improve their social skills one at a time. People were under the impression that treating developmental or psychological disorders is a uniquely ‘human’ job, since only human beings seemed capable of adapting their methods of treatment to suit the individual patient, and only humans could teach each other to act and socialise in a proper way. However, this may soon no longer be the case. One of the biggest issues with the status quo is that treatment is largely inaccessible due to high demand and low supply (the limited number of psychiatrists and children’s doctors). As such, researchers like Brian Scassellati, a robotics expert and cognitive scientist at Yale University, are looking for ways around these shortcomings of ASD treatment. Scassellati has developed a number of social robots (Figure 3) that interact with children with ASD, to see whether an AI programme could be more effective than a human at teaching children to pick up social cues. The results were striking: after just one session with the robots, many of the children he had been working with were exhibiting forms of normal social behaviour: talking, laughing, and making eye contact during the Q & A session that followed the AI therapy.
These milestones in the treatment of ASD, which would usually take some human therapists weeks or even months to accomplish, took the social robots just 30 minutes [5].

Figure 3: One of Brian Scassellati’s social robots. (Source: news.yale.edu)

The reason behind the effectiveness of these robots is the subject of much debate. Scassellati, along with many of his colleagues, believed that the human-like qualities of the robot helped make children more responsive to its actions. However, after further testing, it appeared that there was no real difference between using a humanoid robot and a non-humanoid robot, as the children responded in almost identical ways. On the other hand, there was a major difference in the quality of treatment between using robots and using a tablet or screen as a replacement: children responded far worse to a simple screen projection. This suggested to Scassellati and his team that what made the robots so effective was not how humanoid they were, but their ability to respond to the child’s actions without the inherently ‘human’ qualities, such as non-verbal communication, that can overwhelm these children in the first place [6].


Since then, Scassellati has conducted many more in-depth studies into how these newly programmed social robots can help the children and families they are loaned to. During a month-long study, he provided 12 families with a tablet computer filled with social games and a new robot called ‘Jibo’, which provided constant and immediate social feedback to the children during and after their games. Over the course of the study, all 12 subjects were shown to have made significant progress in their social development: many of the children were starting to make prolonged eye contact when talking, initiating conversations, and responding better to communication. However, this form of robot- and AI-based therapy is still in its nascent stages, as there is no definitive proof that the therapy sessions produce any permanent change in the children’s social skills. Scassellati later stated that “We wouldn’t expect to see any permanent change after just 30 days”; however, he also believes that this field of study is “very promising”. Many experts in the field feel the same way, and believe that since human-led therapy sessions are both scarce and expensive, robot-based therapy could be used to enhance their effectiveness. In Scassellati’s own words: “Most families can’t afford to have a therapist with them every day, but we can imagine having a robot that could be with the family every day, all the time; on demand, whenever they need it, whenever they want it” [7].

3.2 ALGORITHMS TO AID FUSION TECHNOLOGY RESEARCH

One of the most pressing issues of the modern world is energy consumption. As nations develop, the human race as a whole has become more and more dependent on the resources used to power our homes and industries. For the past two centuries or so, this has meant burning large amounts of fossil fuels such as oil, natural gas, and coal.
However, by the mid to late 1900s, alternative forms of energy production began to surface, including solar, wind, hydroelectric, and nuclear power. Currently, around 14% of Earth’s electricity is generated by nuclear power plants, though this figure varies heavily from country to country: LEDCs (Less Economically Developed Countries) may draw less than 1% of their energy from nuclear power, while an MEDC (More Economically Developed Country) such as France may generate up to 75% of its electricity this way. The reasons why richer countries tend to gravitate towards nuclear power are clear: nuclear power plants are far more efficient than conventional, fossil fuel-based power plants, and nuclear power is considerably more sustainable and generally less damaging to the environment. In this day and age, efficient and clean energy is the way forward for many of the world’s leading nations as they look for ways to maintain or even increase energy production without contributing further to the growing problem of global warming. However, the nuclear power stations used today are still ‘fission’ power plants, meaning they operate by splitting unstable atomic nuclei - typically Uranium-235 - into smaller ‘daughter’ nuclei, releasing energy in the process. As nuclear fission splits large atomic nuclei into smaller parts, the byproducts of nuclear reactors are mainly radioactive and can be deadly to living organisms exposed to them for too long. On top of this, the nuclear waste produced in such reactors has an incredibly long half-life, making its disposal both difficult and expensive. This, coupled with the violently reactive nature of the substances involved in nuclear fission, is what prevents many nations from fully embracing nuclear fission as a major form of energy provision.
As a result, scientists have been attempting to develop an industrially viable form of nuclear fusion technology, which generates energy by fusing light nuclei together in the same way that stars do, providing a clean and comparably efficient alternative to current fission technology. This would address the biggest concern people currently have with nuclear energy: that it is dangerous and potentially disastrous, as historical accidents such as the Chernobyl disaster and the Fukushima meltdown attest.



However, there have been several setbacks in the development of fusion technology. Research into fusion reactors has been going on since as early as the 1940s, but technical difficulties, such as the sudden loss of confinement of plasma particles and energy during the reaction, have held researchers back for decades. ITER, the leading authority on fusion technology and a collaborative project between China, the EU, India, Japan, South Korea, Russia and the USA, estimates that the first commercially viable fusion reactor will be available sometime after 2054. However, with the help of AI and deep learning programmes, this may be accomplished much sooner than expected. At the US Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) and Princeton University, scientists are beginning to use AI programmes to forecast and prevent the sudden, unexpected disruptions in the plasma that can damage tokamaks, the devices that confine it (Figure 4) [8] [9].

Figure 4: Depiction of fusion research on a doughnut-shaped tokamak enhanced by artificial intelligence [8]

The deep learning code, the Fusion Recurrent Neural Network (FRNN), is built upon the massive databases provided by decades of research at the DIII-D National Fusion Facility and the Joint European Torus, two of the largest fusion facilities in the world. From this data, FRNN is able to train itself to identify and predict disruptions on tokamaks outside of its original dataset (from Princeton’s facilities) using ‘neural networks’, which apply mathematical algorithms to weigh the input data and decide whether a disruption will occur, and how severe it will be, adjusting themselves after every session to correct mistakes. A disruption, caused by a rapid loss of stored thermal and magnetic energy due to growing instability in the tokamak plasma, can lead to the melting of the first wall in the tokamak and leaks in the water cooling circuits. FRNN is approaching the 95% accuracy threshold within the 30-millisecond time frame required by researchers in the field, which means it may soon become a practical and critical part of fusion research. This new way of understanding and predicting disruptions, one of the key issues that have plagued fusion development for decades, may also evolve into an understanding of how to control them [10]. Julien Kates-Harbeck, a Harvard physics graduate and a collaborator on the FRNN project, stated that although FRNN is currently able to predict and help mitigate the devastating impacts of disruptions, the end goal is to “use future deep learning models to gently steer the plasma away from regions of instability with the goal of avoiding most disruptions in the first place.” However, moving from an AI-based predictive programme to one that can effectively control plasma would require much more work and comprehensive code encompassing first principles in physics.
While this seems like yet another setback in a century-old quest for the so-called ‘holy grail’ of energy, Bill Tang, a principal research physicist at PPPL, states that controlling plasma is just a matter of “knowing which knobs to turn on a tokamak to change conditions to prevent disruptions” and that it is “in our sights and it’s where we [the PPPL team] are heading” [11].
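The idea of a neural network ‘weighing the input data’ to decide whether a disruption will occur can be illustrated with a single artificial neuron. This is only a minimal sketch of the weighted-decision principle: the feature names, weights and numbers below are invented for illustration, and FRNN itself is a far larger recurrent network trained on real DIII-D and JET data.

```python
import math

# One artificial 'neuron': weigh the input features, then squash the sum
# to a 0-1 score that can be read as a disruption probability. Feature
# names, weights and values here are hypothetical.
def predict_disruption(features, weights, bias):
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # logistic squashing to (0, 1)

# Hypothetical normalised sensor readings and learned weights.
features = [0.8, 0.3, 0.9]    # e.g. plasma current, density, stored energy
weights = [2.0, -1.0, 1.5]
bias = -1.0

p = predict_disruption(features, weights, bias)
print(f"disruption probability: {p:.2f}")   # compare against an alarm threshold
```

A real network stacks thousands of such units and, crucially, adjusts the weights after every training pass to correct its mistakes, which is the ‘adjusting itself after every session’ described above.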



3.3 APPLICATIONS OF AI IN THE MEDICAL FIELD

The main issue with the way surgeries are conducted in the modern world is how easily they can be affected by human error. Operations are complex procedures that require the operators to have a specific skill set for each and every patient they come across. Many surgical procedures involve making extremely precise cuts and incisions, leaving little to no room for error, so the possibility of human mistakes impacting the success of a procedure is high. This possibility is further exacerbated by the high-stress environments surgeons and doctors work in, as well as the long shifts they typically take. As a result, surgeries are less efficient and more time-consuming than they otherwise could be. This is where AI-assisted surgeries come in. Even though these programmes are nowhere near sophisticated enough to complete operations on their own, they are more than capable of helping reduce the variations in a surgeon’s technique that could affect the outcome of the procedure and the patient’s recovery. Doctor John Birkmeyer, chief clinical officer at Sound Physicians, stated that “a surgeon’s skill, particularly with new or difficult procedures, varies widely, with huge implications for patient outcomes and cost. AI can both reduce that variation, and help all surgeons improve - even the best ones.” Programmes are able to help surgeons follow what is happening during complex surgical procedures by providing real-time data about the movements and cuts the surgeons make. This allows surgeons to work with a greater degree of precision and accuracy, as they can adjust and adapt to the information they are given on the fly. According to one study, AI-assisted surgeries also result in fewer complications than surgeries performed by human surgeons working alone [12].

Figure: An artistic depiction of the role of AI in surgery. (Source: Robotics Business Review)

Even though this kind of technology is still in its early stages, it is already seeing great success around the world as an aid to conventional forms of surgery and, when paired with robotic systems, can perform tasks such as suturing small blood vessels (as thin as 0.03-0.08 mm). The possibilities of AI-assisted surgery are seemingly endless. However, AI can also benefit the medical field in other ways. Another key development in the medical AI field is the creation of virtual nursing assistants. A significant expense for any hospital or healthcare facility is the money spent on nurses and caretakers. From interacting with patients to directing them to the most effective care setting, these workers deal with many aspects of patient treatment, giving physical support through treatments as well as emotional support through basic interactions and care-taking duties. It has been estimated that nursing tasks account for close to US$20 billion annually in the healthcare industry, a figure that could be drastically reduced by the introduction of AI-based virtual nursing assistants.



Virtual nursing assistants would resolve many of the issues that the current healthcare industry has to contend with. Similar to how mental health robots and surgical-assistance AI help existing doctors and psychiatrists deal with their workload more efficiently, virtual nursing assistants (which operate 24/7) can monitor patients, interact with them, and help dispel worries patients may have about their own well-being. On top of their ability to interact directly with patients, these programmes are also able to help with administrative tasks, such as scheduling meetings between patients and doctors, and even prescribing medication for patients leaving the hospital. All things considered, the potential of these virtual nursing assistants is immense: they could save the industry billions of dollars in the long run, as less money would be needed to pay for nursing and care-taking, roles that used to be distinctively ‘human’.

4 SHORTCOMINGS OF AI AND ROBOTICS

4.1 LACK OF EMOTIONAL INTELLIGENCE

One major area of concern in the development and use of AI is its perceived lack of humanity and human-like qualities. Many experts in fields that deal with interpersonal relationships or social interaction (including psychologists, therapists and doctors) believe that AI or social robots will never be capable of fully replacing humans in these fields. The gap between a programme designed to mimic humanity and true social interaction seems insurmountable, which leads many to believe that AI programmes, no matter how comprehensive or well-made, will only ever be able to play a supporting role to humans [13]. However, this shortcoming of artificial intelligence may soon be dealt with. While it is true that current programmes cannot identify human emotions or social cues as effectively as humans can, breakthroughs in the research of ‘emotional AI’, programmes that can detect and recognise emotions (a process called facial coding in market research) and then react accordingly, may one day help artificial intelligence programmes step into these previously ‘untouchable’ fields. Experts believe that emotional AI could be used as a medical tool to help diagnose mental ailments such as depression and dementia, or be deployed in medical facilities as ‘nurse bots’ to monitor and support patients [14].

4.2 SENTIENT PROGRAMMES

Figure 5: An example of emotion detection software [13]

Perhaps the biggest worry people have about the development of Artificial Intelligence is the possible rise of sentient AI programmes. While our current level of technology is still several decades away from producing programmes powerful enough to pose the threat the general public envisions, the potential issues of sentient AI are still food for thought. For a programme to be classified as ‘sentient’, it must fulfil two requirements: it must be self-aware, and it must be classifiable as ‘conscious’. As of right now, humanity has already created several programmes capable of rudimentary self-awareness, many of which can analyse and adapt to their surroundings without prompting from code. However, although we can create robots that mimic a human’s consciousness and understanding of themselves and the outside world, we are still unable to make programmes that achieve this without drawing on data from the internet, or that learn and understand consciousness by themselves [15].


As humanity is only just approaching the AGI phase, it is unlikely that we will experience anything like what we see in films such as The Terminator or Ghost in the Shell (Figure 6). A survey conducted in 2016 revealed that the majority of experts in AI research and computer science believe it will take at least 50 more years for technology to develop to the point where AI can replace skilled human jobs. Right now, most programmes operate by memorising data, identifying patterns within it, and slowly learning to match new input data to the patterns they have identified, which is very different from how human minds operate (based on prediction rather than memorisation). Facebook’s chief AI scientist, Yann LeCun, has stated that in terms of general, unfocused intelligence, AI programmes are currently behind rats. As a result, it seems more reasonable to see the rise of sentient AI as a potential long-term issue once humanity reaches the ASI research stage; until then, no one is in danger of being overtaken or undermined by these programmes [16].

Figure 6: An example of a currently unachievable robot from The Terminator franchise (Source: Film Stories)

5 CONCLUSION

To conclude, the potential of Artificial Intelligence programmes is limitless. As technology and our understanding of computer science develop, humanity will inevitably become increasingly dependent on the abilities of AI programmes. In the short term, we will see developments and improvements in industries such as healthcare and energy, which will help propel the human race forward. Currently, fears of AI becoming too influential or powerful, or even developing ‘sentience’, are unfounded, as humanity is still decades away from being capable of developing Artificial Superintelligence (ASI) programmes that are truly sentient or self-aware. In the foreseeable future, the development and growth of AI-based fields of research should be monitored, but not restricted. The only real harm AI can do to humanity in the next 20-30 years or so is lowering the demand for certain jobs, which will become increasingly automated; the upsides are explosive economic, educational and healthcare developments. AI programmes can be a powerful and useful tool during this period; however, upon reaching the Artificial General Intelligence (AGI) phase, humanity will need to become more wary of these programmes and begin to place restrictions and limitations on their usage. Until that point, we should enjoy the benefits brought by these programmes to the fullest.



BIBLIOGRAPHY

[1] “The History of Artificial Intelligence” - Harvard
http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
[2] “The Evolution of Artificial Intelligence” - UBS
https://www.ubs.com/microsites/artificial-intelligence/en/new-dawn.html
[3] “AI in Movies, Entertainment, Visual Media” - Emerj
https://emerj.com/ai-sector-overviews/ai-in-movies-entertainment-visual-media/
[4] “The State of AI in 2019” - The Verge
https://www.theverge.com/2019/1/28/18197520/ai-artificial-intelligence-machine-learning-computational-science
[5] “How 30 days with an in-home robot could help children with ASD” - ScienceMag
https://www.sciencemag.org/news/2018/08/how-30-days-home-robot-could-help-children-autism
[6] “Artificial Intelligence and Autism” - Towards Data Science
https://towardsdatascience.com/artificial-intelligence-and-autism-743e67ce0ee4
[7] “Improving social skills in children with ASD using a long-term, in-home social robot” - ScienceMag
https://robotics.sciencemag.org/content/3/21/eaat7544
[8] “Artificial Intelligence accelerates development of limitless fusion energy” - SciTechDaily
https://scitechdaily.com/artificial-intelligence-accelerates-development-of-limitless-fusion-energy/
[9] “A Nuclear Powered World” - NPR
https://www.npr.org/2011/05/16/136288669/a-nuclear-powered-world
[10] “Fusion energy pushed back beyond 2050” - BBC
https://www.bbc.com/news/science-environment-40558758
[11] “Plasma disruptions: a task force to face the challenge” - ITER
https://www.iter.org/newsline/-/3183
[12] “How AI-assisted surgery is improving surgical outcomes” - Robotics Business Review
https://www.roboticsbusinessreview.com/health-medical/ai-assisted-surgery-improves-patient-outcomes/
[13] “Emotion AI overview” - Affectiva
[14] “13 Surprising Uses for Emotion AI Technology” - Gartner
https://www.gartner.com/smarterwithgartner/13-surprising-uses-for-emotion-ai-technology/
[15] “Researchers are already building the foundation for sentient AI” - VentureBeat
https://venturebeat.com/2018/03/03/researchers-are-already-building-the-foundation-for-sentient-ai/
[16] “How far are we from truly human-like AI?” - Forbes
https://www.forbes.com/sites/forbestechcouncil/2018/08/28/how-far-are-we-from-truly-human-like-ai/#3b494a2031ac



Bitcoin Explained
Josiah Wu (Year 12, Churchill)

Seriously, what is this? In 2017, there was a huge frenzy surrounding Bitcoin. Not only did it attract attention from economists and investors, it also became a hot topic amongst the general public. In fact, throughout 2017, questions like “How to buy Bitcoin?” and “How to mine Bitcoins?” were in the Top Ten list of the most trending ‘How to’ queries on Google [1] and it even had its own subreddit [2]. But what is Bitcoin? To start, we need to understand that Bitcoin is, in fact, a type of cryptocurrency. Cryptocurrency is a digital currency; however, unlike the conventional currencies we know (such as US dollars or UK pounds), cryptocurrency is not controlled by any central authority. If I wanted to make a transaction with Bitcoin, the transaction would not be verified by any bank or government, but it would instead rely on a network of computers that govern transactions.

Figure 1: Depiction of a Blockchain (Source: matejmo, Getty Images)

Cryptocurrencies rely on a system called a blockchain: an accessible public ledger that records all the valid transactions made between users. It is designed in such a way that it is nearly impossible for anyone to tamper with it. A blockchain consists of two parts: the ‘blocks’ and the ‘chain’. Each ‘block’ contains a long list of valid transactions and is labelled with a unique identifier called a ‘hash’. The ‘chain’ connects one block to another to form the blockchain. Nobody owns or controls this public ledger. Instead, volunteers (known as miners) update it by creating a new ‘block’ and connecting it to one of the old blocks, helping the system circulate and function. The first person to update the ledger is rewarded with 12.5 Bitcoins (roughly US$100,000 after conversion, as of Feb 2019 [3]). However, updating the public ledger is a laborious job. When a deal is struck and confirmed, the transaction is announced publicly to the Bitcoin network. Miners must first gather and verify one megabyte’s worth of those transactions into a block, and then solve an extremely difficult cryptographic ‘puzzle’ before they can upload their block onto the blockchain. This whole process is known as ‘mining’.

CRYPTOGRAPHIC HASH FUNCTION

To understand this ‘puzzle’, we first need to grasp the idea of a cryptographic hash function. Such a function converts a string of plain text into a hash: a fixed-length string of binary digits (also known as ‘bits’). The one Bitcoin uses is SHA-256, a common cryptographic hash function also used for internet security. One characteristic of a cryptographic hash function is that, in practice, every input has a unique output, and a minor change to the input affects the output drastically. For example, if we convert the word “Bitcoin” using SHA-256, it outputs a string of 256 bits which starts with these 16 bits: 1011010000000101... However, changing the input by replacing the upper case “B” with lowercase, “bitcoin”, yields an output which, as you can see, already differs significantly in the first 16 bits: 0110101110001000...


SHA-256 is also computationally infeasible to run in the reverse direction. That is, given an output, it would be immensely difficult to determine the matching input, even with the world’s most efficient computers.
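Both properties, the drastic change from a tiny edit and the fixed 256-bit output, can be checked directly with Python’s standard hashlib module (the exact bit strings quoted above are not re-derived here):

```python
import hashlib

# A one-character change to the input ('B' -> 'b') produces a completely
# different 256-bit hash - the avalanche behaviour described above.
h1 = hashlib.sha256(b"Bitcoin").hexdigest()
h2 = hashlib.sha256(b"bitcoin").hexdigest()

print(h1)
print(h2)
print(len(h1) * 4)   # 64 hex digits x 4 bits each = 256 bits
print(h1 == h2)      # False: the two hashes share no obvious relationship
```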

PROOF OF WORK

Cryptographic hash functions are used to generate the hash of each block. Each hash depends on several characteristics of its respective block, including its transaction history, its timestamp, and a matching input (which will be discussed later). Besides this, the hash of the previous block is also involved in generating the hash, so every block is linked to the one before it. These linkages, therefore, form a ‘chain’ (Figure 2).
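The linkage described above can be sketched in a few lines. This is a simplification: real Bitcoin blocks hash a structured binary header rather than a text string, but the principle of feeding the previous block’s hash into the next block’s hash is the same.

```python
import hashlib

# Simplified sketch of how blocks are linked: each block's hash covers its
# transactions, its timestamp, a matching input (the 'nonce'), AND the hash
# of the previous block. Real Bitcoin hashes a binary block header instead.
def block_hash(transactions: str, timestamp: str, prev_hash: str, nonce: int) -> str:
    data = f"{transactions}|{timestamp}|{prev_hash}|{nonce}"
    return hashlib.sha256(data.encode()).hexdigest()

# The first block points at an all-zero 'previous hash'.
genesis = block_hash("Alice pays Bob 5 BTC", "t0", "0" * 64, nonce=0)
# The second block includes the genesis hash, forming the chain.
block1 = block_hash("Bob pays Carol 2 BTC", "t1", genesis, nonce=0)
print(genesis)
print(block1)
```

Because `block1` is computed over `genesis`, any change to the first block would change the second block’s hash as well, which is exactly what makes the chain tamper-evident.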

Figure 2: An example of a blockchain

This is where the ‘puzzle’ mentioned earlier comes into play. The problem is to find a particular input such that it induces a hash with a specific number of zeros at its beginning (Figure 3). This is known as the Proof of Work (PoW).

Figure 3: An example of a correct and an incorrect input generating the correct and incorrect hash, respectively

The required number for Bitcoin is 30 leading zeros. What is the probability of finding such a hash in a single try? We can find out with the following calculation:

½ × ½ × ... × ½ (30 times) = (½)^30, or about 1 in 1.07 billion

For comparison, the odds of winning the jackpot in Mark Six (a legal lottery in Hong Kong) are about 1 in 140 million. This demonstrates that the chance of finding a correct input on any given try is remarkably slim. Since no way to reverse-engineer cryptographic hash functions has yet been found, miners are left with no option but to use trial and error: randomly guessing inputs until a correct one is found. This system enables fair competition between the miners and incentivises the voluntary updating of the ledger.
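The trial-and-error search itself is short to write down. The sketch below uses a difficulty of 8 leading zero bits rather than 30, so that it finds a nonce after roughly 256 tries on average instead of about 1.07 billion:

```python
import hashlib

def leading_zero_bits(hex_hash: str) -> int:
    # Number of leading zero bits in a 256-bit hash given as 64 hex digits.
    bits = bin(int(hex_hash, 16))[2:].zfill(256)
    return len(bits) - len(bits.lstrip("0"))

def mine(block_data: str, difficulty: int) -> int:
    # Trial and error: keep incrementing the nonce until the block's hash
    # has enough leading zero bits. Each try succeeds with probability
    # (1/2)**difficulty - for 30 bits, about 1 in 1.07 billion per try.
    nonce = 0
    while True:
        h = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if leading_zero_bits(h) >= difficulty:
            return nonce
        nonce += 1

nonce = mine("Alice pays Bob 5 BTC", difficulty=8)   # ~256 tries on average
print("winning nonce:", nonce)
```

Raising `difficulty` by one bit doubles the expected number of tries, which is how the real network keeps mining hard no matter how fast the hardware gets.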



WHY IS PROOF OF WORK NECESSARY?

Proof of Work is necessary because it prevents anyone from altering the contents of the blockchain without getting caught. Imagine that David, who is very greedy, wants to edit the transaction history so that Cameron pays him $450 instead of $45 (Figure 4).

Figure 4: Original transaction history that David wants to edit

David hacks into the Bitcoin system to edit the transaction history. However, he soon realises that he has run into trouble: as the transaction history has changed, the block’s hash is affected, making it invalid. This, in turn, affects the hashes of the following blocks, as each block’s hash is needed to generate the next one (Figure 5).

Figure 5: David's attempt to edit the history

To cover this up, David tries to recalculate the matching inputs so that all the following hashes become valid again, as if nothing had gone wrong. However, he must redo the Proof of Work for every subsequent block faster than the rest of the network extends the chain, so that nobody can tell the difference (Figure 6).

Figure 6: The blockchain PoW that David needs to redo

As the Proof of Work for each block is extremely time-consuming, there is very little chance of David finishing all of it in time. In the end, David is caught red-handed.



From this scenario we can conclude that the Proof of Work system is vital, as it minimises the likelihood of successful unauthorised changes to the blockchain.
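How the rest of the network catches an edit like David's can be sketched by recomputing every hash from the start of the chain and comparing it with the stored one. This is a toy model of the idea, not Bitcoin's actual validation logic:

```python
import hashlib

def hash_block(prev_hash: str, data: str) -> str:
    return hashlib.sha256(f"{prev_hash}|{data}".encode()).hexdigest()

def build_chain(entries):
    """Build a list of (data, hash) pairs, each hash covering the previous one."""
    chain, prev = [], "0" * 64
    for data in entries:
        h = hash_block(prev, data)
        chain.append((data, h))
        prev = h
    return chain

def is_valid(chain) -> bool:
    """Recompute every hash; an edited block breaks every link after it."""
    prev = "0" * 64
    for data, stored_hash in chain:
        if hash_block(prev, data) != stored_hash:
            return False
        prev = stored_hash
    return True

chain = build_chain(["Cameron pays David 45 BTC", "Eve pays Fred 10 BTC"])
chain[0] = ("Cameron pays David 450 BTC", chain[0][1])  # David's edit
# is_valid(chain) now reports the tampering
```

Changing the amount without redoing the hashes makes the stored hash disagree with the recomputed one, so any honest checker rejects the chain.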

ENSURING VALID TRANSACTIONS

It was briefly mentioned that transactions are broadcast into the Bitcoin network. But how can we ensure that no fraudulent transactions occur within the blockchain? Digital signatures are implemented for this purpose; like handwritten signatures, they act as verification from the deal initiator that they have approved the transaction. To prevent forgery of signatures, the private-key and public-key scheme is borrowed from cryptography. The private key ensures that every signature is authentic and unforgeable; as the name suggests, it is accessible only to the deal initiator. A function is used to output a signature:

Sign(Transaction Information, Private Key) → Signature

The deal initiator must use that signature to make the transaction valid. Once the transaction is confirmed, it is sent into the Bitcoin network. The miners then check whether the transaction is authorised, a process that involves another function:

Verify(Transaction Information, Signature, Public Key) → True/False

An output of 'True' indicates that the signature is correct and therefore authorised. The miner can then include this transaction in their block.
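Bitcoin actually uses ECDSA signatures, which are not available in Python's standard library. The toy sketch below therefore only illustrates the shape of the Sign/Verify interface, using a symmetric HMAC as a stand-in; unlike a real digital signature, the signer and verifier here share the same secret key:

```python
import hashlib
import hmac

def sign(transaction: str, private_key: bytes) -> str:
    """Sign(Transaction Information, Private Key) -> Signature."""
    return hmac.new(private_key, transaction.encode(), hashlib.sha256).hexdigest()

def verify(transaction: str, signature: str, key: bytes) -> bool:
    """Verify(Transaction Information, Signature, Key) -> True/False."""
    expected = sign(transaction, key)
    return hmac.compare_digest(expected, signature)

key = b"deal-initiator-secret"  # hypothetical key for illustration
sig = sign("Cameron pays David 45 BTC", key)
# verify() is True for the genuine transaction, False for a forged amount
```

The key property carried over from the real scheme is that a signature is bound to one specific transaction: changing even one character of the amount makes verification fail.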

CONCLUSION

Despite the recent frenzy around Bitcoin, it is uncertain whether it will last into the foreseeable future. However, the system of blockchain will be adopted for many other purposes, such as data sharing and cybersecurity. It is therefore my belief that blockchain will play a vital role in running our future society.

*SHA-256 output is more commonly written as hexadecimal (base 16): https://emn178.github.io/online-tools/sha256.html. I therefore encoded a programme to convert hexadecimal to binary.
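The author's original programme is not reproduced here; a minimal Python sketch of such a hexadecimal-to-binary conversion might be:

```python
def hex_to_binary(hex_digest: str) -> str:
    """Convert a hex string to binary, preserving leading zeros (4 bits per hex digit)."""
    return bin(int(hex_digest, 16))[2:].zfill(len(hex_digest) * 4)

# hex_to_binary("0f") gives "00001111", so leading zeros in the
# binary form of a hash can be counted directly.
```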

BIBLIOGRAPHY
[1] https://www.telegraph.co.uk/technology/2017/12/13/bitcoin-mania-googles-top-searches-2017dominated-digital-currency/
[2] https://www.reddit.com/r/Bitcoin/
[3] https://www.investopedia.com/terms/b/block-reward.asp
[4] https://www.investopedia.com/terms/b/bitcoin-mining.asp
[5] https://www.youtube.com/watch?v=kZXXDp0_R-w
[6] https://www.youtube.com/watch?v=bBC-nXj3Ng4&t=1121s
[7] https://bitcoin.org/bitcoin.pdf


IMAGE CREDITS
Front Cover: Photo by Luca Micheli on Unsplash
'Science and the Senses' Section Cover: Photo by Anton Darius on Unsplash
'Concepts in Science' Section Cover: Photo by Jerry Thomas on Unsplash
'Application of Science' Section Cover: Photo by Alec Favale on Unsplash
Back Cover: Photo by Paul Gilmore on Unsplash


Discovery consists of seeing what everybody has seen and thinking what no one has thought. - Albert Szent-Györgyi

