The Scientific Harrovian - Issue VII-i, December 2021


The Scientific Harrovian

Possible Alternatives to Space Rockets
The Beauty of Chaos
Fake Trees with Real Potential

Surrounded by Science



About the Scientific Harrovian

The Scientific Harrovian is the Science Department magazine, which provides a platform for students to showcase their research and writing talents, and for more experienced pupils to guide authors and to develop skills that help them prepare for life in higher education and beyond. Uncited images from Unsplash.


This incredible Seventh Edition of the Scientific Harrovian is the product of an exceptional team’s hard work and scientific passion. They have all created something to be proud of and cherish, and to inspire others with. Under the direction of Annie Kim, the Scientific Harrovian’s Editor-in-Chief, the camaraderie and community spirit of the team has been like no other, and this shines through in the quality of this edition.

I have been inspired by the plethora of articles on the theme of ‘Surrounded by Science’, a humble reminder that Science is everywhere we look! Written and illustrated by pupils from Year 7 to Year 13, the team are all united by their desire to communicate Science effectively to the pupil body. Whether you would like to discover the ‘Beauty of Chaos’ or explore the potential of ‘mRNA vaccines’, there is something for everyone in this edition. I hope that, whichever articles you read or illustrations you admire, you feel as inspired as I have.

Kind regards,

Siobhan McCrohan
Head of Biology


Welcome to Issue VII-i of the Scientific Harrovian! Our articles this year celebrate the contributions of both past and future scientific innovations in improving various aspects of our lives, especially now that mRNA vaccines have played such a crucial role in restoring relative normality to everyday life.

My thanks go to our incredible team of writers, editors and illustrators who have contributed their time and energy to make another issue of the Scientific Harrovian possible. We also welcome back both old and new members in our Editorial Team: Judy Sheng, Deputy Editor-in-Chief; Isabel Chau, Chief Design Officer; Jenny Kim, Deputy Chief Design Officer; Jasmine Wong and Alyssa Wong, the Biology Head Editors; Emily Tse and Christine Li, the Chemistry Head Editors; Andrei Hui and Helen Ng, the Physics & Technology Head Editors; and Gauri Ranjan, Marketing Director.

My deepest thanks go to Judy Sheng, who led the team as Deputy Editor-in-Chief with leadership, organisation, and commitment. I hope you will all be able to see that the countless hours of work she put into this edition in a busy term have not gone to waste, and I want to congratulate her once again on the impressive achievement of making such an excellent edition possible as a new student to the school.

Enjoy!

Annie Kim
Editor-in-Chief



Welcome!

It has been a wonderful few months getting to know everyone this year and seeing their efforts pay off with our publication this term! We are delighted to introduce you to Issue VII-i of the Scientific Harrovian, comprising articles on frontline issues and innovative technology across a range of subjects.

Dear all,

Welcome to Scientific Harrovian VII-i! This year’s theme is Surrounded by Science, and the issue showcases nine unique articles that explore the latest innovations and ideas across the fields of Biology, Chemistry, Physics and Technology.

It has been fascinating to watch the Scientific Harrovian grow over the past year alongside Annie, who continued her excellent work in leading us. This year, our Editorial team is joined by nine new members, adding more colour and creativity to the team. I want to thank them for giving up their Wednesday lunchtimes and dedicating their time to edit and put this issue together over the past few months.

A massive thank you to all of the illustrators for contributing their artwork. A special mention goes to Jenny Kim, our Deputy Chief Design Officer, for taking on this new role and managing the challenging workload of the new school year alongside the creation of this year’s issue.

Hope you enjoy,

Isabel Chau
Chief Design Officer

As a new member of the team, I cannot stress enough my amazement at the talent, curiosity, and passion I have seen along the way. I heartily thank all our contributors who have given up their precious time in this busy academic year, putting in a lot of hard work to bring out our publication. A special thanks to Jenny for working her magic in designing and putting this issue together. And, my deepest thanks to Annie for trusting me with this responsibility and for all her help along the way despite the busy university application season. None of this would have been possible without our amazing team in Scientific Harrovian.

I continue to be in awe of the exceptional work from all our writers and illustrators, and I hope you find them as inspiring and enlightening as I have!

Judy Sheng
Deputy Editor-in-Chief


THE TEAM

Editor-in-Chief: Annie Kim (Year 13, Wu)
Deputy Editor-in-Chief: Judy Sheng (Year 12, Gellhorn)
Chief Design Officer: Isabel Chau (Year 13, Gellhorn)
Deputy Chief Design Officer: Jenny Kim (Year 12, Anderson)
Marketing Director: Gauri Ranjan (Year 11, Wu)
Biology Head Editors: Jasmine Wong (Year 12, Keller), Alyssa Wong (Year 12, Anderson)
Chemistry Head Editors: Emily Tse (Year 12, Keller), Christine Li (Year 12, Gellhorn)
Physics Head Editor: Andrei Hui (Year 12, Churchill)
Technology Head Editor: Helen Ng (Year 11, Gellhorn)



WRITERS

Kevin Liew (Year 12, Peel)
Kevin Wu (Year 10, Sun)
Daniel Kan (Year 10, Shaftesbury)
Bernice Ho (Year 8, Fry)
Jasmine Wong (Year 12, Keller)
Emily Tse (Year 12, Keller)
Cheney Sang (Year 11, Sun)
Hanson Wen (Year 10, Peel)
Warren Zhu (Year 13, Churchill)

ILLUSTRATORS

Bethany Kerr (Year 10, Wu)
Mya Ip (Year 6, Nightingale)
Zi Yan Huang (Year 8, Fry)
Mary Zhang (Year 10, Anderson)
Estelle Chan (Year 12, Gellhorn)
Reika Oh (Year 12, Gellhorn)
Callum Sanders (Year 11, Shaftesbury)
Joy Chen (Year 11, Gellhorn)
Ethan Lan (Year 8, Shackleton)



EDITORS

Jonathan Lee (Year 13, Peel)
Venus Lee (Year 12, Gellhorn)
Valerie Ho (Year 10, Anderson)
Judy Sheng (Year 12, Gellhorn)
Matthew Chin (Year 12, Churchill)
Bianca Mak (Year 10, Wu)
Alyssa Wong (Year 12, Anderson)
Tobey Poon (Year 12, Churchill)
Tisha Handa (Year 10, Anderson)
Jasmine Wong (Year 12, Keller)
Kevin Liew (Year 12, Peel)
Henry Chan (Year 6, Shackleton)
Carol Yeung (Year 12, Keller)
Jasmine Kwok (Year 12, Wu)
Mya Ip (Year 6, Nightingale)
Christine Li (Year 12, Gellhorn)
Helen Ng (Year 11, Gellhorn)
Emily Xu (Year 6, Nightingale)
Emily Tse (Year 12, Keller)
Natasha Chan (Year 12, Anderson)
Joy Chen (Year 11, Gellhorn)


Environmental Science and Chemistry
• Fake Trees With Real Potential (Jasmine Wong)
• Against Method: A Quasi-Book Review (Warren Zhu)
• Saving Our Planet: Net-Zero Carbon Emissions (Bernice Ho)

Biology
• Epigenetics: Beyond The Genome (Hanson Wen)
• mRNA Vaccines: A Possible Cure For Cancer (Kevin Wu)

Physics and Technology
• Biomimicry: The Predestination Of Innovation (Emily Tse)
• Brain-Computer Interface: The Past, Present and Future (Daniel Kan)
• The Beauty Of Chaos (Kevin Liew)
• Possible Alternative To Space Rockets (Cheney Sang)

CHEMISTRY

Environmental Science and Chemistry


Fake Trees with Real Potential
Jasmine Wong

Between the end of the ice age and the Industrial Revolution, the atmospheric concentration of CO2 sat at around 280 ppm. Since the Industrial Revolution began in the 18th century, annual carbon emissions have increased drastically, from 0.01 billion metric tonnes in 1750 to 36.44 billion metric tonnes in 2019. This has driven the average CO2 concentration up to 410 ppm, an increase of about 46%. As a result, the rate of climate change and global warming has rapidly increased owing to this extreme level of pollution. Hence, as a solution, why not create artificial trees that mimic the role and functions of a tree, but capture carbon far more efficiently than photosynthesis can?
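The percentage rise quoted above can be checked with a quick calculation (a sketch using the 280 ppm and 410 ppm figures cited in this article):

```python
# Pre-industrial vs. recent average atmospheric CO2 concentration (ppm),
# using the figures quoted in the article.
pre_industrial_ppm = 280
current_ppm = 410

increase_percent = (current_ppm - pre_industrial_ppm) / pre_industrial_ppm * 100
print(f"CO2 increase: {increase_percent:.1f}%")  # about 46%
```

Rounded to the nearest whole percent, this gives the 46% figure used throughout the article.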

Illustration by Jenny Kim

Reasons for the idea of artificial trees

Carbon dioxide is an odourless and colourless greenhouse gas that makes up about 0.04% of the gases in the atmosphere.
1. The Sun emits electromagnetic (EM) radiation (including gamma rays and ultraviolet waves) in the form of waves travelling through space. Most EM waves enter the Earth’s atmosphere, whilst some are reflected.
2. These sun rays can then take two paths:
   a. Some are absorbed by the Earth’s surface, warming it.
   b. Some are re-emitted at longer wavelengths (lower frequencies) as infrared radiation. Most of these re-emitted waves would normally escape back into space; however, excessive volumes of greenhouse gases absorb the infrared photons.


When infrared photons strike a carbon dioxide molecule, the covalent bonds between the carbon and oxygen atoms vibrate in one of three ways (modes): symmetrical stretching, asymmetrical stretching and bending. This is because the natural frequencies at which the bonds in CO2 vibrate correspond to specific frequencies of infrared radiation. When an infrared-active photon reaches CO2, its energy is transferred into kinetic energy, causing asymmetrical stretching or bending of the molecule. This produces a change in the dipole moment of CO2: the molecule becomes temporarily polarised as its linear shape is distorted (Figure 1). The bent, V-like shape adopted during the bending mode, and the asymmetry of asymmetrical stretching, mean that the bond dipole vectors no longer cancel out. The absorbed energy results in a higher temperature, since the temperature of a gas is a measure of the average kinetic energy per particle. By contrast, diatomic molecules such as nitrogen have only one vibrational mode (stretching and compressing their bond) and a symmetrical distribution of charge. This prevents them from creating a change in dipole moment, so they cannot absorb infrared radiation.

Additionally, collisions between carbon dioxide molecules cause a transfer of energy from the energised molecules to other molecules. The faster the motion of the molecules, the greater the thermal energy and the higher the Earth’s surface temperature. The increasing volume of carbon dioxide in the atmosphere has caused serious concern for the environment, as the rise in global temperature melts glaciers and raises sea levels. This poses a danger not only to countries located on low-lying coastlines, such as Hong Kong, but also to organisms that thrive in shallow water. It can cause catastrophic destruction, including imbalances in ecosystems, as habitats damaged by flooding or drought force animals to migrate.

There have been various attempts to reduce carbon emissions and their footprint:
1. Scrubber devices fitted to chimneys in pilot projects around the world, so that the greenhouse gases produced during fossil fuel burning can be removed from exhaust emissions;
2. The Climate Change Act 2008;
3. Specific brands attempting to reduce their carbon footprint (e.g. Levi’s is using recycled plastic to make jeans, reducing waste and ocean plastic as well as cutting the volume of water used to manufacture a pair of jeans by up to 96%, saving more than 1 billion litres of water in product manufacturing).

As a result of the excess emission of carbon dioxide globally, the rate of photosynthesis of plants can no longer keep up with the rate of carbon production by humans. This has acted as a catalyst for the development of artificial trees.

Figure 1: A demonstration of the movement of a carbon dioxide molecule when it absorbs infrared radiation
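The physics described above can be summarised by the standard infrared selection rule from spectroscopy (a general result, not specific to this article): a vibrational mode absorbs infrared radiation only if the vibration changes the molecule’s dipole moment,

```latex
\text{IR active} \iff \left( \frac{\partial \mu}{\partial Q} \right)_0 \neq 0
```

where \(\mu\) is the dipole moment and \(Q\) the vibrational coordinate. CO2’s bending and asymmetrical stretching modes satisfy this condition; its symmetrical stretch, and the single stretching mode of N2, do not, which is why nitrogen does not absorb infrared radiation.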



The idea of artificial mechanical trees

Professor Klaus Lackner, director of Arizona State University’s Center for Negative Carbon Emissions, announced the commercialisation of his carbon-capture technology. The idea behind his synthetic trees came from his early observations of the build-up of carbon dioxide in the atmosphere. He believed that the technology needed to deal with this problem could exist, but that it had never been built because nobody had enough incentive to create it. Dr Lackner wanted to treat the root cause of climate change using direct air capture. According to an article by Arizona State University, Lackner’s technology is thousands of times more efficient at removing CO2 from the air, per artificial tree, than a regular tree. The device captures carbon dioxide from the atmosphere as wind blows through it, emulating a tree’s ability to photosynthesise. The captured carbon dioxide can then be sequestered underground, used as an industrial gas, or sold in the form of synthetic fuels. Two things separate this system from other carbon-capture technology:
1. it is a passive process, and
2. it has lower costs, making it a scalable and commercially viable solution.



Illustration by Reika Oh


How does it work?

The mechanical tree prototype is made of 12 columns and can remove a minimum of 1 metric ton of carbon dioxide per day. When the tree extends its ‘branches’, the white sorbent-filled disks in the ‘leaves’, made of an ion-exchange resin (IER) held together by polyvinyl chloride (the binding agent), absorb the carbon dioxide. In this case, the ion-exchange resin carries sodium carbonate. It can do this because it is an anion-exchange resin (itself positively charged, so it binds negatively charged ions): in its dry state it has a strong affinity for carbon dioxide, which carries partial negative charges on its oxygen atoms, binding it as a bicarbonate ion.
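The sorbent chemistry described here (and in reference [13]) can be written as a reversible reaction: sodium carbonate on the resin combines with carbon dioxide and water to form sodium bicarbonate, and heating in the regeneration step drives the reaction back, releasing the CO2:

```latex
\mathrm{Na_2CO_3 + CO_2 + H_2O \;\rightleftharpoons\; 2\,NaHCO_3}
```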

Step 1: Capture the carbon
1. Tiles on the ‘tree’ extend upwards from a compressed position to 10 metres high and become saturated with CO2 from the ambient air carried by the wind.
2. The tiles are then lowered mechanically into a chamber at the base.

Figure 2: A visual representation of what the Carbon-Capture technology does when coming in contact with carbon dioxide (source: Cronkite News by Sara Weber, Brooke Stobbe and Ty Scholes)

Step 2: Carbonator
1. The collected gas contains a mixture of gases, for example nitrogen dioxide and sulfur dioxide, which make it impure. This prevents the system from directly capturing and sequestering pure carbon dioxide.
2. The white sorbent and water from the atmosphere are therefore required in order to isolate the carbon dioxide later. This forms sodium bicarbonate (a white solid), which is collected on a disk and moved towards the thermal regeneration section, while other gases such as sulfur dioxide are sucked out using a vacuum.


BIOLOGY

Step 3: Thermal regeneration
1. The disks with sodium bicarbonate are lowered into a heated, wet container where the carbon dioxide is released.
2. The carbon dioxide is then heated to evaporate off the H2O, yielding a 95% pure sample.
3. It is then condensed and flows down into a black tank, where it is cascaded through tanks and pumps at pressures of up to 150 psi to prepare the pure carbon dioxide for other purposes.
4. Around 1% of the total gas input is carbon dioxide, of which only about 90% can currently be extracted.

Figure 3: A visual representation of what happens in the two steps

Step 4: The cycle repeats as the mechanical tree extends upwards again.
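Putting the numbers from the thermal regeneration step together gives a rough feel for the throughput involved. This is a back-of-envelope sketch, assuming (as the article states) that about 1% of the gas drawn in is CO2 and that about 90% of that can be extracted:

```python
# Back-of-envelope throughput estimate for one mechanical tree,
# using the figures quoted in the article (assumptions, not measurements).
co2_fraction = 0.01           # ~1% of the input gas is CO2
extraction_efficiency = 0.90  # ~90% of that CO2 can currently be extracted
target_per_day_t = 1.0        # prototype target: at least 1 metric ton of CO2 per day

# Tonnes of CO2 recovered per tonne of input gas processed
co2_per_tonne_gas = co2_fraction * extraction_efficiency

# Tonnes of input gas the device must process daily to hit the target
gas_needed_per_day_t = target_per_day_t / co2_per_tonne_gas

print(f"CO2 recovered per tonne of input gas: {co2_per_tonne_gas:.3f} t")
print(f"Input gas needed per day for 1 t of CO2: {gas_needed_per_day_t:.0f} t")
```

On these assumptions, each tree must process on the order of a hundred tonnes of gas per day, which is why a passive, wind-driven design matters for keeping costs low.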


Problems that arise with artificial trees
1. The Social Cost: There is not enough space to store the captured carbon dioxide securely in saline aquifers or oil wells. Solution: geologists have come up with an alternative. The rock peridotite (which can absorb large volumes of carbon dioxide) can seal the absorbed gas as a stable magnesium carbonate mineral, but despite its potential, exploiting it could damage other natural resources. Lackner believed the gas should instead be converted into fuel for transport vehicles. Carbon dioxide can be converted, together with water, into carbon monoxide and hydrogen (syngas), which can readily be turned into hydrocarbon fuels such as methanol or diesel. The process requires an energy input, but this could be provided by renewable sources such as wind energy.
2. The Moral Cost: With this technology available, people may become less conscientious about curbing their emissions. Although more carbon is being offset by the fake trees, people may produce more carbon dioxide, creating a vicious cycle. Furthermore, there is also the fear of interfering with natural systems. Geoengineering could change natural processes such as weather patterns and rainfall; these effects cannot be reversed or offset, making over-reliance on fake trees dangerous.
3. The Carbon Cost: Each synthetic tree requires energy to operate, so it generates some carbon itself if plugged into the power grid. Lackner calculated that, for every 1000 kg of carbon dioxide a synthetic tree collects, it emits 200 kg, so only 800 kg count as true collection. [3]
4. The Economic Cost: Not all countries have the financial resources or capability to shift to more sustainable energy sources, let alone invest in an expensive project. For example, because of its geographical position, the UK is able to rely more on wind-generated energy: the North Sea gives it a natural advantage in offshore wind power, and its economic resources have allowed a series of subsidies for new wind farms. Countries such as India, however, face other issues, such as social unrest and high rates of poverty, that need immediate attention, so reducing carbon emissions may not be considered a priority.
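The carbon cost above implies a simple net-efficiency figure. A quick sketch, using the 1000 kg collected / 200 kg emitted numbers cited from [3]:

```python
# Net carbon capture per cycle, using Lackner's figures as cited in the article.
collected_kg = 1000  # CO2 captured by the synthetic tree
emitted_kg = 200     # CO2 emitted to power it from the grid

net_kg = collected_kg - emitted_kg
net_efficiency = net_kg / collected_kg
print(f"Net capture: {net_kg} kg ({net_efficiency:.0%})")  # 800 kg (80%)

# To truly offset one tonne of CO2, the tree must gross-collect more than a tonne:
gross_needed_kg = 1000 / net_efficiency
print(f"Gross collection needed per net tonne: {gross_needed_kg:.0f} kg")
```

In other words, a grid-powered tree runs at about 80% net efficiency, and must capture roughly 1250 kg for every tonne it genuinely offsets; powering it from renewables would close that gap.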



Application of fake trees

As previously covered, artificial trees are said to be thousands of times more efficient than natural trees at removing CO2 from the atmosphere. Dr Lackner’s team, based in Arizona and known as Carbon Collect Limited, is working on providing unlimited, low-cost, clean solar energy for direct air capture. They plan to begin deploying small-scale Mechanical Trees in preparation for the mass production of large-scale direct carbon capture devices. Additionally, to combat the urban heat island effect, many scientists have devised solutions building on Professor Lackner’s ideas. Designs such as the TREEPOD and CLECanopy were devised to enhance and maximise the benefits of trees. Specifically, designers Mario Caceres and Cristian Canonico designed the TREEPOD for the SHIFTboston Urban Intervention contest. It is able to remove carbon dioxide and release oxygen using a carbon dioxide removal process from Lackner’s technology called ‘humidity swing’. It is powered by solar panels and by an interactive seesaw that children can play on, which converts kinetic energy into electrical energy. This energy can then be used to power the air filtration and night lights for increased safety.

They plan to build the TREEPOD entirely out of recyclable plastic from drink bottles, which has the added advantage of reducing plastic pollution in the sea. The design draws on the biomimicry of both the Dragon Blood tree, imitating its umbrella shape to optimise shading surface, wind flow and solar panel surface area, and the human lung, whose branching structure is filled with the CO2-absorbing resin. The large surface area of the branches allows for higher efficiency of ‘gas exchange’. [10]



Figure 4: A picture showing the cross-section of the TREEPOD design (Source: Inhabitat, image courtesy of Mario Caceres and Cristian Canonico)

Conclusion

For the future of the world, money should not be the factor limiting these innovative ideas and projects, as fake trees can help reduce the effects of carbon emissions and restore (to an extent) some balance to the world. However, we should not become overly reliant on fake trees, as natural trees bring other benefits, such as providing shade and food for wildlife. Creating fake trees alongside the continued planting of real trees could be a more sustainable way of slowing down the climate crisis.



Bibliography
[1] Tiseo, Ian. “Historical Carbon Dioxide Emissions from Global Fossil Fuel Combustion and Industrial Processes from 1758 to 2020.” Statista, 5 Jan. 2021, www.statista.com/statistics/264699/worldwide-co2-emissions.
[2] “Powerful ‘Mechanical Trees’ Can Remove CO2 from Air to Combat Global Warming at Scale.” Arizona State University News, news.asu.edu/20190429-solutions-lackner-carbon-capture-technology-moves-commercialization. Accessed 4 Feb. 2021.
[3] Zyga, Lisa. “Synthetic Tree Captures Carbon 1,000 Times Faster Than Real Trees.” Phys.org, 9 July 2009, phys.org/news/2009-07-synthetic-tree-captures-carbon-faster.html.
[4] Cantieri, Janice. “Artificial Trees Could Offset Carbon Dioxide Emissions.” Climate Change, climatechange.medill.northwestern.edu/2016/11/29/artificial-trees-might-be-needed-to-offset-carbondioxide-emissions. Accessed 4 Feb. 2021.
[5] Siegel, R. P. “Building the Ultimate Carbon Capture Tree.” ASME, 17 May 2019, www.asme.org/topics-resources/content/building-ultimate-carbon-capture-tree.
[6] Vince, Gaia. “Scientists Are Looking at Ways to Lower the Global Temperature by Removing Greenhouse Gases from the Air. Could Super-Absorbent Fake Leaves Be the Answer?” BBC Future, 4 Oct. 2012, www.bbc.com/future/article/20121004-fake-trees-to-clean-the-skies.
[7] Fransen, Bas. “Can Fake Trees Be a Part of the Climate Change Solution?” EcoMatcher, 3 May 2020, www.ecomatcher.com/can-fake-trees-be-a-part-of-the-climate-change-solution/?v=69e1aafeccc5.
[8] “What Properties of Carbon Dioxide Make It a Greenhouse Gas?” Chemistry Stack Exchange, chemistry.stackexchange.com/questions/81242/what-properties-of-carbon-dioxide-make-it-a-greenhouse-gas. Accessed 4 Feb. 2021.
[9] “What Are the Properties of a Greenhouse Gas?” ACS Chemistry for Life, www.acs.org/content/acs/en/climatescience/greenhousegases/properties.html. Accessed 4 Feb. 2021.
[10] Zimmer, Lori. “TREEPODS: Carbon-Scrubbing Artificial Trees for Boston City Streets.” Inhabitat, 15 Feb. 2011, inhabitat.com/treepods-carbon-scrubbing-artificial-trees-for-boston-city-streets.
[11] UCAR. “Carbon Dioxide Absorbs and Re-Emits Infrared Radiation.” UCAR Center for Science Education, scied.ucar.edu/carbon-dioxide-absorbs-and-re-emits-infrared-radiation.
[12] Carlson, Carrie, and Alex Ebbenis. “A Look at Activated Carbon Thermal Regeneration.” FEECO International Inc., 7 Apr. 2021, feeco.com/a-look-at-activated-carbon-thermal-regeneration/.
[13] Green, David A., et al. “Carbon Dioxide Capture from Flue Gas Using Dry Regenerable Sorbents.” OSTI, 20 Oct. 2021, www.osti.gov/servlets/purl/924032.
[14] Shi, Xiaoyang, et al. “Kinetic Analysis of an Anion Exchange Absorbent for CO2 Capture from Ambient Air.” PLOS ONE, 22 June 2017, journals.plos.org/plosone/article?id=10.1371/journal.pone.0179828.
[15] Lackner, Klaus. “Passive Direct Air Capture Technology: Homepage.” Carbon Collect, 2 Aug. 2021, mechanicaltrees.com/.


Against Method: A Quasi-Book Review
Warren Zhu

There are entertaining books that lack substance. There are substantial books that put you to sleep. [1] There are entertaining books that have substance, but do nothing more than confirm your prejudices. Then there are books that are entertaining, packed with facts, and leave you wordless after you have finished, thinking: “Golly! My world has changed. I see everything differently.” I would like to convince you that Against Method is such a book.


What is Feyerabend on About?

How do we do science? How does science change and progress? What is the best way to do science?


Illustration by Estelle Chan

Paul Feyerabend, intellectual renegade, “irrationalist”, “anarchist”, has a lot to say about these questions. He is important because he opposes many seemingly “common-sensical” dogmas (common-sensical because they are so much a part of the intellectual milieu, the ideological matrix of our times, that we, like the fish who knows not water, see them not), and makes a compelling case for their opposite, an “Anarchist Theory of Knowledge.” I am not in a position to judge whether he is correct, but it is always healthy to listen to a critic of the mainstream to escape our parochial understanding.

What is Feyerabend Against?

It is clear from the title: Feyerabend did not like Method. Method, for him, is one fixed way of doing things, one set of rules that is universally valid, infallible, and has to be universally followed: one way to approach scientific progress that is held above everything else. Phrases such as “follow the Science” or “the scientific method” exemplify the attitudes that Feyerabend rails against. Instead, he advocates a pluralist, anarchist conception of science that is (at least according to him) closer to how science actually progressed in history, more humane, and better able to promote scientific progress as a whole. [2] More specifically, Feyerabend uses the test case of the empiricist conception of “the scientific method”, construed as a rigorous gathering of facts and the construction of a theory on top of the facts, and its more sophisticated version, the deductive-nomological hypothesis-falsification method of Karl Popper (both conceptions are still popular today), to make his point. The conclusion he wants to move towards, however, is not that “everything goes!” (that you can just ‘vibe’ [3] around and do whatever you want when practising science, and no methodology is better than any other), but that one has to adapt the methodology to the circumstance, and sometimes, as will be shown in the case of Galileo, irrationality may be a better way to proceed than rationality. [4]



The Case of Galileo

It is Feyerabend’s extraordinary claim that the standard narrative all school kids are told about the saga of Galileo (how he heroically, and perhaps quixotically, stood for truth and reason, risking his life for them, against the church’s dogmatism and irrationality) is in flagrant contradiction with the historical evidence. Instead, Galileo was a) an imaginative madman, specialising in speculation, who overrode reason for elegance, and b) a brilliant propagandist who deliberately employed technically problematic (but convincing) metaphors, adding ad hoc hypotheses as he encountered unexplainable observations and misleadingly presenting a novel, anti-common-sense theory as the natural interpretation. [5] The church, Feyerabend announces (almost triumphantly), was closer to reason, as conceived both in Galileo’s time and in ours. Below is a list of the historical observations that Feyerabend musters to support his claim:

1. There were no reasons to believe in Galileo’s telescopic observations. The telescope yielded images that were obscure, indistinct, and unconvincing. It was also extremely inaccurate: Galileo’s drawings of the moon do not match modern-day observations. Further, there were difficulties regarding the brightness of the stars, and the changes in their size as they rotated around the sun, that Galileo did not resolve. It was plain bravado and irrationality for Galileo to reject ordinary sense perception in favour of such rudimentary equipment. [6]

2. Galileo introduced ad hoc hypotheses (almost a dirty word in the philosophy of science, because it is, supposedly, unscientific to add anything ad hoc) to explain certain phenomena, and those hypotheses became a part of the system (circular inertia) during the course of the falling-stone experiment.

3. It is true that Copernicanism and Galileo’s theory lined up. But both had tremendous problems, and Galileo was not justified (by the standards of rationality) in deciding that their mutual confirmation signalled the triumph of his theory over the standard model. One case in point: theoretically, Copernicus’ model is not as good as Ptolemy’s in terms of calculations. Further, though it is true that Copernicus’ theory was, on some fronts, more explanatory, most took it as a model for prediction (which is the rational thing to do), whereas Galileo took it as the physical truth. Similarly, Copernicus rejected the sophisticated Aristotelian account of mathematics, in which maths is just an approximation, and took mathematical calculations as directly presenting (rather than re-presenting) physical reality. Copernicus made two reality assumptions: first, that, formally, everything should move in nice circles; second, that inner connections lead to reality. Neither is justified by reason.

4. The Church, in reality, did accept new theories, and adapted the Bible to them (e.g. the roundness of the earth) when there was proof for the theory: a traditional standard of rationality which was not met by Copernicus’ or Galileo’s theories.

Ergo, Galileo triumphed because reason did not reign, both in his rejection of other people’s theories and in the acceptance of his own. The church, in fact, acted more rationally than Galileo, by both today’s standards and the ancient ones.



Takeaways from Studying Galileo, and Conclusion

We can now derive certain results from the account of Galileo. First, the problems with standard scientific methods and theories of progress:
• Naïve empiricism (theory must follow data to the point): Aristotle was an arch empiricist, and Ptolemy used carefully collected data. But this did not prevent them from being wrong.
• Sophisticated empiricism (data should be used to revise theory): Ptolemy’s system was empirically closer to the data than Copernicus’s. This did not prevent, in retrospect, the Copernican system from being better.
• Falsificationism (we should try to create more and more falsifiable theories and test our theories through falsification): There is no decisive falsification. In the Copernican revolution, “falsifications [and]… new observations played a role. But both were embedded in a complex pattern of events which contained tendencies, attitudes, and considerations of an entirely different nature.” Further, from one theory to the next, scientists focus on new problems and forget about older ones that become obsolete. Falsification leaves one fixated on a set of problems that may be of no utility. [7]
• Conventionalism (we should try to have a theory that is the most elegant and least cumbersome; applied to the current scenario, the idea that the Ptolemaic system got too complicated and had to be rejected): Copernicus’ system had just as many circles as Ptolemy’s. There were no straightforward simplifications.
• Crisis theory (science progresses when the accumulation of new evidence sends a field into a crisis, leading it to question its own theory): There was no crisis in the Copernican revolution. People criticised not the basic theory, but the data. The illusion of crisis was created retroactively, after people began to accept the Copernican system.

What, then, is to be done? Below are Feyerabend’s recommendations, which will serve as a good conclusion. Science has to be a bit crazy, all over the place. One should let, so to speak, a hundred flowers bloom, for a plural, rhizomatic spanning out into possibilities is the only way to true advances beyond consolidation and incremental progress. Galileo, with his irrationality, is a case in point. There should be a willingness to entertain all hypotheses, even ones bordering on the metaphysical, for theories in their infancy cannot yet have much empirical content. This is not to say that scientists have an obligation to learn everything, but that they should not be restricted to following a single path, and should move to where their interests lead them, for there is no one way to do science. All methods could be suitable; one must only try, and not restrict others from trying. Attempts to reinforce a certain conception of science merely make it sterile. This is not to say that one shouldn’t follow any of the methods listed above (naïve/sophisticated empiricism, falsificationism, conventionalism, etc.), but that all methods work only some of the time.



CHEMISTRY

But then, one may ask, what is the standard of success? There is, Feyerabend will reply, no common standard; the question is absurd. Scientists seem to set stringent methodologies and standards for themselves, but, when one looks into the empirical evidence, they never follow them once they actually start doing science. The standard can only be developed internally, during the course of scientific investigation, as one plays and dances in the boulevard of discovery. Science is, fundamentally, receptive play. Scientists, therefore, should not ignore general education. Instead, it would be for the best (so Feyerabend goes) if they pursued some other study outside of science, if for nothing else, just to be able to look at science from the outside and perceive its presuppositions without being enmeshed in it. [8] Herein lies Feyerabend's hopeful message: Only by developing alternative traditions can one escape the tyranny of any single one. Only when one abandons Method (with a capital M) can science flourish. Only then can there be a science that is more human, and more humane.


Bibliography [1] I fondly remember waking up at 4 a.m. with the lights on, my glasses on my face, and Thinking, Fast and Slow in my hand; I had only wanted to read briefly after dinner. [2] His humanity is, in some sense, shown by the authors he cites in the book. It is a rarity to see Marx, Lenin, and Hegel quoted more than any other philosophical rock stars in a book on the Philosophy of Science (Marx's and Hegel's philosophies are now seen as basically pseudo-scientific, like Freud's). Feyerabend gives his reasons in the book: he is investigating the logic of scientific revolutions, which has a great affinity to political revolutions, hence his finding insight in the above triumvirate. [3] I owe this elegant expression to Bess Chau, from when we were discussing Wittgenstein's rule-following skepticism. [4] One can skip this long footnote if one is not interested in Feyerabend's more technical critiques of the mainstream method (though they are, I believe, incredibly interesting). First, he attacks the consistency condition: that a new hypothesis must be consistent with the accepted theories before we spend time testing it. The problem, first, is that every measurement has a certain noise (margin of error), and therefore there can be two inconsistent hypotheses that both fit all the facts. Second, and more interesting, is that the "facts are constituted by old ideologies". What we see as indisputable is always indisputable under the accepted theory. (Hence, in the appendix, Feyerabend attacks the distinction between observation terms and theoretical terms, observation terms being the bare facts, stripped of all theoretical baggage. The problem with this distinction is that what we take as bare facts are conditioned by theory; or, to use a fancier formulation, all theories are metaphysical.)
Then, Feyerabend attacks the idea that science must include induction, induction being the collection of facts followed by the finding of a theory that fits them, and he promotes counter-induction as a remedy. (Mathematics, in contrast, is seen as completely deductive in the traditional conception, though Lakatos, the Popperian who stimulated Feyerabend to write Against Method, argued in Proofs and Refutations that maths relies more on induction, trial and error, intuitive leaps, and falsification than on deduction.) To explain his argument we need, in addition to the point introduced above (that there are no bare facts, because theory conditions what we classify as facts), two more concepts: natural interpretations and observational language. A natural interpretation is an interpretation of a phenomenon that has been raised to the status of a fact, through the intertwining of explanation and phenomenon by natural conditioning (participation in a certain theoretical discourse), to the point where it is almost impossible to separate the two. Observational language is the language we use to describe

what we observe, which, in itself, changes what is observed (a tentative conclusion that is likely, but difficult to prove, though many, including Kuhn, agree with Feyerabend on this point). The distortion of phenomena by natural interpretations and observational language (as well as the problem that a theory determines the criteria by which one selects facts) provides concrete reasons for the impossibility of reaching facts conceived as pure observation terms, which means that a new theory cannot be rejected simply because it does not induce (does not fit with all the facts). Instead, the facts themselves should sometimes be questioned. The above discussion also provides a reason for Feyerabend's pluralism: only by immersing ourselves in another system of thought can we break out of our own assumptions (see our natural interpretations as interpretations, and see how our observational language distorts what we perceive). Moreover, counter-induction is needed not only because the facts may be wrong, but also because a baby theory may have points of weakness, not having been given enough time to resolve its own contradictions and shortcomings (which is also a reason to reject the requirement that a new theory be consistent with accepted ones). Leeway has to be given to a baby theory for it to grow and prove its worth. Sometimes we have to take the bullet (of a decrease in empirical content and a receding into more metaphysical territory) to enable a new theory to flourish, just as we give infants more leeway when they make mistakes. Further, no theory fits all the evidence (and evidence, once the notion of observation terms is abandoned, is all we have). Most scientists selectively ignore evidence that contradicts their theory, claiming that the evidence, rather than the theory, is wrong.
For a new theory to grow, we also need to give it the chance to dismiss inconvenient evidence and explain why the evidence against it should be ignored; hence, again, counter-induction. [5] See fn. 4. [6] Possibly because Galileo was not well-versed in optics, and so did not share the reservations that Kepler and other experts had about the efficacy of the equipment. [7] An interesting discussion comparing the Homeric Greeks with modernity in Appendix 1 of the book demonstrates this perfectly. For Homer, in contrast with the modern assumption of autonomy, a person is seen as a vessel, a medium through which gods express their powers (rage, love, jealousy, etc.). There was no word for "body" as a unit; Homer's vocabulary is limited to individual body parts, with no general word even for "limbs" or "torso". One can imagine how the questions the Homeric Greeks thought worth asking differ from the questions we ask ourselves. [8] Feyerabend himself is an exemplar of this: an opera singer and physics prodigy turned philosopher.





Saving Our Planet: Net-Zero Carbon Emission Bernice Ho

Imagine not being able to go out most of the time because of oppressive heat waves. Would you be annoyed, depressed, or perhaps frustrated? That is what we will experience if we do not change our behaviour now. Recognising this urgency, over fifty countries, including the world's largest emitters, China and the United States, have announced net-zero emissions targets. On top of that, hundreds more regions, cities and businesses have set targets of their own.

We hope for a better world in the future, so that the next generation can lead healthy and sustainable lives. This article explains the goal of balancing the greenhouse gases put into the atmosphere against those taken out, as well as the science behind it.



What is net zero carbon emissions? 'Net zero emissions' refers to achieving an overall balance between greenhouse gas (GHG) emissions produced and those taken out of the atmosphere. Think of it like a set of scales: producing GHG emissions tips the scales, and we want to get those scales back into balance, with no new GHG being added to the atmosphere in any given year. Eventually, we will probably need to tip them the other way to repair the damage done in the past as well.

In contrast to a gross-zero target, which aims to reduce emissions from all sources uniformly to zero, a net-zero emissions target is more realistic because it allows for some residual emissions. These are emissions produced by 'hard-to-treat' sectors where abatement is prohibitively expensive. Such residual emissions are permitted as long as they are offset by gross negative emissions, achieved by removing GHG from the atmosphere using natural or engineered sinks.

Here’s the upside of getting to net zero: we can still produce some GHG as long as they are offset by processes that reduce GHG already in the atmosphere (this is called sequestration). For example, these could be things like drawdown technologies such as direct air capture and storage.
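For readers who like to tinker, the "set of scales" idea can be sketched in a few lines of Python. Every number and category below is invented purely for illustration; real national inventories are far more detailed.

```python
# Toy illustration of the net-zero "scales": hypothetical annual figures
# in Mt CO2e; none of these numbers are real data.
emissions = {"energy": 300.0, "transport": 120.0, "agriculture": 80.0}
removals = {"forests": 150.0, "direct_air_capture": 350.0}

# Net emissions = everything put into the atmosphere minus everything taken out.
net = sum(emissions.values()) - sum(removals.values())
status = "net zero or net negative" if net <= 0 else "net positive"
print(net, status)
```

With these made-up figures the sources and sinks exactly balance, so the sketch reports net zero; raising any source without a matching sink tips the scales back to net positive.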

Illustration by Bethany Kerr



Why is net zero carbon emissions important? Reaching net zero carbon emissions matters because it is the best way to mitigate the negative impacts of climate change and global warming. Climate change is not a tap we can turn off once we stop using fossil fuels: GHG such as carbon dioxide stay in the atmosphere and continue to heat the earth year after year, and this continued presence is why reducing GHG emissions is so important. The end goal is to balance the scales again and restore the global climate to pre-climate-change levels. In the Paris Agreement, governments agreed to keep global warming 'well below' 2 degrees Celsius and to 'make efforts' to keep it below 1.5ºC. The Intergovernmental Panel on Climate Change (IPCC) released a report in October 2018 on the 1.5ºC target; it concluded that global carbon emissions should reach net zero around mid-century to give a reasonable chance of limiting warming to 1.5ºC.

Is the net zero carbon emissions goal achievable? Yes, the world can reach net zero carbon emissions by 2050, but we need to make significant changes. In particular, our energy systems will need to be totally transformed, with substantial declines in the use of coal, oil and gas. By 2050, almost 90% of the electricity produced globally will need to come from renewable sources, such as solar and wind. Increased spending on electric vehicles, clean energy infrastructure, transformed industrial processes, and more efficient buildings will also be crucial (more details are discussed in the "What measures need to be taken?" section). By adopting these measures, we should be able to achieve net zero carbon emissions by 2050. Experts say this could reduce the chances of extreme heat waves, droughts, and floods by the end of the century, with knock-on benefits for food supplies, livelihoods and biodiversity across the world.



What is the timeline to reach net zero carbon emissions? Under the Paris Agreement, countries agreed to limit warming to well below 2 degrees Celsius (3.6 degrees F), ideally to 1.5 degrees Celsius (2.7 degrees F). Under today's 1.1 degrees Celsius of warming, some places already face devastating heat waves, melting glaciers, and intense typhoons, which signals the urgency of minimising further temperature increases. The latest science suggests that reaching the Paris Agreement's temperature goals will require reaching net-zero emissions on the following timeline:

Scenario 1 (limiting warming to 1.5 degrees Celsius): carbon dioxide emissions need to reach net zero by 2044-2052, and total GHG emissions by 2063-2068.
Scenario 2 (limiting warming to 2 degrees Celsius): carbon dioxide emissions need to reach net zero by 2070-2085, and total GHG emissions by the end of the century or beyond.

Figure 1: Timeline to net-zero emission goal https://www.wri.org/insights/net-zero-ghg-emissionsquestions-answered

The Special Report on Global Warming of 1.5 °C, from the Intergovernmental Panel on Climate Change (IPCC), found that if the world reaches net-zero emissions by 2040, the chance of limiting warming to 1.5 degrees Celsius is considerably higher. The sooner emissions peak, and the lower they are at that point, the more realistic achieving the net zero target becomes. This would also mean less reliance on carbon removal in the second half of the century.



What measures need to be taken? Illustration by Ethan Lan



1. Making the 2020s the decade of massive clean energy expansion

All the technologies needed to achieve the deep cuts in global emissions by 2030 already exist, and the policies that can drive their deployment have already been proven. As the world continues to deal with the impacts of the Covid-19 pandemic, it is extremely important that the resulting wave of investment and spending to support economic recovery is aligned with the net zero pathway. Policies should be strengthened to speed up the deployment of clean and efficient energy technologies. Standards are vital to drive consumer spending and industry investment towards the most efficient technologies. Targets and competitive auctions can enable wind and solar to accelerate the electricity sector transition. Fossil fuel subsidy phase-outs, carbon pricing, and other market reforms can ensure appropriate price signals. Policies should limit the use of certain fuels and technologies, such as unabated coal-fired power stations, gas boilers, and conventional internal combustion engine vehicles. Governments must lead the planning and incentivising of the massive infrastructure investment required, including smart transmission and distribution grids.

Figure 2: Shows what our capacity addition of solar photovoltaics and wind should look like if we stay in the net zero pathway. https://www.iea.org/reports/net-zero-by-2050

Figure 3: Shows what the number of electric car sales in 2030 should look like if we follow the net zero pathway https://www.iea.org/reports/net-zero-by-2050




2. Using more hydrogen and hydrogen-based fuels

Hydrogen and hydrogen-based fuels can fill the gaps where electricity cannot easily replace fossil fuels and where limited sustainable bioenergy supplies cannot cope. This includes using hydrogen-based fuels for ships and planes, and using hydrogen in heavy industries such as steel and chemicals.

Figure 4: shows the increasing amount of hydrogen fuels in different areas (e.g. buildings, industries, transportation and electricity) by 2050 https://www.iea.org/reports/net-zero-by-2050

3. Business strategy integration

Like any important business initiative, an organization's decarbonization efforts should be fully integrated into its existing business strategy. Getting started will require climate intelligence on topics such as:
• Benchmarking competitors' actions and goals
• Climate reporting and disclosures
• Customers' and other stakeholders' expectations
• An annual GHG inventory
Cumulatively, these pieces of information can help companies clearly articulate their business case for action, which needs to be supported by emissions data and shared in language accessible to most stakeholders.
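As a toy illustration of what an annual GHG inventory roll-up might look like in code, here is a short Python sketch. The scope names follow the common Scope 1 (direct), Scope 2 (purchased energy) and Scope 3 (value chain) convention, and every source name and figure is invented.

```python
# Hypothetical annual GHG inventory in tonnes of CO2e, grouped by scope.
# All categories and numbers are made up for illustration only.
inventory_t_co2e = {
    "scope_1": {"fleet_fuel": 1200.0, "on_site_gas": 800.0},
    "scope_2": {"purchased_electricity": 3500.0},
    "scope_3": {"purchased_goods": 9000.0, "business_travel": 500.0},
}

# Roll up each scope, then the whole organisation.
totals = {scope: sum(sources.values()) for scope, sources in inventory_t_co2e.items()}
grand_total = sum(totals.values())
print(totals, grand_total)
```

Even a simple roll-up like this makes clear why Scope 3 often dominates a company's footprint, and why a consistent inventory is the starting point for any credible net-zero plan.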


Another starting point is conducting a climate risk assessment and disclosing it to investors. The most common climate risk reporting framework is the TCFD framework, which refers to the recommendations set forth by the Task Force on Climate-related Financial Disclosures on how and what to report as part of climate risk.


Underpinning all of these efforts is a need for robust climate understanding and stakeholder education. A 'climate boot camp' for stakeholders may include topics such as definitions of common climate commitments, high-watermark activities of leaders, a crash course in GHG accounting, and identification of the organization's climate risks and opportunities. These are all just examples of what we could do to help improve the environment and stay on the pathway to net zero emissions.

Figure 5: Steps to achieving Net Zero Carbon goal https://cundallconversations.com/2020/03/26/step-by-step-how-to-achieve-a-net-zero-building/

Conclusion Climate change is slowly destroying our habitats and our homes, and we need to take action now to prevent irreversible damage. The net zero carbon emissions goal is central to building a sustainable, green world, and many countries have started schemes that move in the right direction. Thanks to the efforts of governments and scientists, the idea of reducing greenhouse gas emissions by burning fewer fossil fuels, driving less, and so on has been widely promoted. We must unite to save our planet; hopefully, by 2050, we will have achieved the net zero emissions goal.



Bibliography
[1] What Does "Net-Zero Emissions" Mean? 8 Common Questions, Answered. https://www.wri.org/insights/net-zero-ghgemissions-questions-answered
[2] What Does Net Zero Emissions Mean? https://www.climatecouncil.org.au/resources/what-does-net-zero-emissions-mean/
[3] What does net zero mean? https://www.greenbiz.com/article/what-does-netzero-mean
[4] Net zero: why is it necessary? https://eciu.net/analysis/briefings/net-zero/netzero-why
[5] Are net zero emissions by 2050 possible? Yes, says IEA. https://www.weforum.org/agenda/2021/05/netzero-emissions-2050-iea/
[6] Special Report: Global Warming of 1.5 ºC. https://www.ipcc.ch/sr15/
[7] Net Zero by 2050. https://www.iea.org/reports/net-zero-by-2050
[8] Journey to zero: 4 key action areas to achieve net-zero emissions. https://www.greenbiz.com/article/journey-zero-4key-action-areas-achieve-net-zero-emissions



Biology



Epigenetics: Beyond the Genome Hanson Wen

Illustration by Mary Zhang



Introduction Epigenetics is the study of changes in traits that do not involve alteration of the DNA sequence, typically achieved by changing how strongly certain genes are expressed. A variety of factors can cause these changes. Currently, researchers are studying epigenetic changes to the DNA backbone, RNA transcription, histone modifications and prions. These changes can be inherited and affect various physiological and cellular traits. Epigenetics also plays a role in normal fetal development and cell specialisation. For example, in 1996, a world-famous cloning experiment produced Dolly, a Finn Dorset sheep [1]. In this experiment, the nucleus of a mammary gland cell from an adult Finn Dorset ewe was transferred into an enucleated, unfertilised egg cell from a Scottish Blackface sheep. An electric current was passed through the cell to fuse it with the egg, and the resulting embryo was then transferred into a Scottish Blackface surrogate to allow embryonic development; eventually, Dolly the sheep was born. This raised concerns among many people that there would be cloned humans in the near future. However, this is unlikely to happen soon, because the cloning of Dolly faced many problems, one of which was caused by epigenetics. [2] When cells divide at the early embryonic stage, there are vital genes that have to be turned on for the fetus to grow properly, but as the organism develops, the specialised cells have these genes turned off by epigenetic tuning. As a result, if the nucleus of a specialised cell is transplanted into another egg cell, growth can fail because the genes for growth are no longer functional.



Mechanism Fundamentals

Chromosome Structure Histones are proteins that hold DNA together. They are found in chromosomes, and DNA winds around them like beads on a string to form nucleosomes. Histones protect DNA from damage and tangling, and the DNA wrapped around them becomes more compact. Histones carry a positive charge and DNA a negative charge, so the two are held together by electrostatic attraction between the opposite charges. [3]

Figure 1: Diagram of histone proteins and DNA in the form of a nucleosome, from evolutionnews.org.

Gene Expression Gene expression is the process of translating genetic information into functional products, and it includes transcription and translation. Transcription is the process by which DNA in the nucleus is copied into RNA, which can then be translated into protein. RNA polymerase binds to the promoter region, unwinding and separating the double helix. The polymerase then synthesises an RNA chain called messenger RNA (mRNA), which is transported out of the nucleus through nuclear pores. Translation then converts the mRNA into protein. The mRNA binds to a ribosome, either free in the cytoplasm or attached to the rough endoplasmic reticulum. Once translation begins, tRNA molecules carry amino acids to the ribosome to match the mRNA's codons (each 3 nucleotides long) and form a polypeptide (a long amino acid chain), which then folds into a functional protein in the cell.
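The DNA-to-mRNA-to-protein flow described above can be sketched in a few lines of Python. The codon table below is a tiny, deliberately incomplete subset of the real 64-codon genetic code, and the DNA sequence is invented for illustration.

```python
# Minimal sketch of transcription and translation. CODON_TABLE is a tiny
# subset of the real genetic code; the input sequence is made up.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna_coding_strand: str) -> str:
    # Transcription: the mRNA matches the coding strand, with U replacing T.
    return dna_coding_strand.replace("T", "U")

def translate(mrna: str) -> list[str]:
    # Translation: read codons (3 nucleotides each) until a stop codon.
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

mrna = transcribe("ATGTTTGGCTAA")   # -> "AUGUUUGGCUAA"
print(translate(mrna))              # -> ['Met', 'Phe', 'Gly']
```

The sketch leaves out everything a real cell does (introns, the template strand, tRNA charging, and so on), but it captures the two-step logic: copy the gene into mRNA, then read it three letters at a time.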



Epigenetics There are various ways to regulate the amount of gene expression; this regulation often occurs at the transcriptional and translational levels. The mechanisms by which our cells regulate the expression of genes, and thereby control their characteristics, include methylation, acetylation, and RNAi.

Methylation Methylation is the addition of a methyl group to either a histone protein or a cytosine on the DNA. Currently, there are two places where the methyl group can be added to influence gene expression: the cytosine nucleotide and the histone tail.

The methyl group is attached to its target position by a large family of enzymes called methyltransferases. Methylation can either repress or promote a gene, and all methylation of DNA or histones is reversible: the methyl group can be removed by other enzymes.

Figure 2: How cytosine is methylated, From https://www. researchgate.net/figure/Methylation-of-cytosine-in-carbon-5_ fig1_259879905

DNA Methylation When a methyl group is attached to cytosine, it can repress the gene. The exact mechanism of this blocking remains unknown, but the methylated DNA may inhibit the attachment of transcription proteins. [4][5] In the promoter region (the short section just before the part of the gene that is transcribed), there are large numbers of cytosine and guanine nucleotides. These are called CpG islands, as they are isolated regions containing large amounts of cytosine and guanine. A study showed that about 70% of promoter regions in the human genome contain CpG islands. [6]
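A CpG-rich stretch like the ones described above can be flagged computationally. The sketch below uses the classic heuristics (GC content above 50% and an observed-to-expected CpG ratio above 0.6); the thresholds are the standard published criteria, the length check is omitted for brevity, and the test sequences are invented.

```python
# Rough CpG-island heuristic for a DNA window; real definitions also
# require a minimum window length (e.g. 200 bp), which we skip here.
def looks_like_cpg_island(seq: str) -> bool:
    seq = seq.upper()
    n = len(seq)
    gc_fraction = (seq.count("G") + seq.count("C")) / n
    observed_cpg = seq.count("CG")                       # CG dinucleotides
    expected_cpg = seq.count("C") * seq.count("G") / n   # chance expectation
    obs_over_exp = observed_cpg / expected_cpg if expected_cpg else 0.0
    # Classic thresholds: >50% GC and observed/expected CpG > 0.6
    return gc_fraction > 0.5 and obs_over_exp > 0.6

print(looks_like_cpg_island("CGCGCGATCGCGGCGC"))  # CpG-rich toy sequence
print(looks_like_cpg_island("ATATATATATATATAT"))  # AT-rich toy sequence
```

The observed-over-expected ratio is the key trick: it asks whether CG pairs occur more often than the individual C and G counts would predict by chance, which is exactly what makes an "island" stand out.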


Histone Methylation Histones can also be methylated, when a methyltransferase attaches a methyl group to the histone tail. The methylation of a histone can either repress or activate a gene; the exact effect (activating or repressing) depends on the location of the methylated histone and how it is methylated. For example, trimethylation of histone H3 at lysine 4 (H3K4me3) is an active mark for transcription and is upregulated in the hippocampus one hour after contextual fear conditioning in rats. However, dimethylation of histone H3 at lysine 9 (H3K9me2), a signal for transcriptional silencing, is increased after exposure to either the fear conditioning or a novel environment alone. [7]

Acetylation Acetylation is a reversible reaction that adds an acetyl group to the tail of a histone protein; one set of enzymes carries out acetylation, while others perform the reverse reaction, deacetylation. Acetylation neutralises the histone's positive charge, making it easier for transcription to occur. This matters because DNA is negatively charged: while the histones carry their positive charge, transcription is hindered, as the DNA is packed too tightly around them.

Figure 3: The structure of the histone and acetylation. From https://theory.labster.com/histone-acetylation/.

Histone modification enzymes correlate with the methylation enzyme of DNA. In an interestingly coordinated process, proteins that bind to methylated DNA also form complexes with the proteins involved in the deacetylation of histones. Therefore, when DNA is methylated, nearby histones are deacetylated, resulting in compounded inhibitory effects on transcription. Likewise, demethylated DNA (which allows transcription) does not attract deacetylating enzymes to the histones, allowing them to remain acetylated and more mobile, thus promoting transcription.



Because females have two X chromosomes (one inherited from each parent), their cells randomly inactivate one of them. This process is called X chromosome inactivation, and it uses epigenetic mechanisms such as deacetylation and DNA methylation to prevent transcription of the inactivated X chromosome. This can be seen in calico cats, whose two X chromosomes carry genes that determine fur colour. During embryonic development, each cell randomly inactivates one of its X chromosomes; as the fetus grows and the cells multiply, this results in large patches of different fur colours.

Figure 4: Calico cats have patterned fur, a result of epigenetic X chromosome inactivation. From https://www.thesprucepets.com/calico-cats-profile-554694

RNAi RNAi is a mechanism that uses RNA molecules to regulate gene expression at the transcriptional and translational levels. At the post-transcriptional level, RNAi involves two main molecules: microRNA (miRNA) and small interfering RNA (siRNA). [8]


Illustration by Callum Sanders


siRNA


siRNA is a short piece of RNA, typically 20-27 nucleotides long, although only single-stranded siRNA has a silencing effect. siRNA is processed from exogenous long double-stranded RNA (long dsRNA), which may come from infection by a virus with an RNA genome or from laboratory manipulation, and is cut into shorter strands by a protein called Dicer. The double-stranded fragments left by this cut are the siRNA. The siRNA then binds to a protein complex called RISC (RNA-Induced Silencing Complex), a combination of proteins that can perform functions such as cleaving a nucleotide strand. Because the siRNA is still double-stranded at this point, it must be unwound into single strands. The active strand, the one that remains bound to the RISC, is fully functional, while the other strand, called the passenger strand, is degraded by degradation enzymes. The RISC-siRNA complex is now fully functional. The purpose of the RISC is to cleave the strand of nucleic acid to which the bound siRNA is complementary. The RISC bound to the siRNA floats around in the cell's cytoplasm to find a perfectly complementary mRNA. When it finds one, the RISC cuts the mRNA into two parts, and the broken edges are recognised by nucleic-acid-degrading enzymes. The complex can be used repeatedly, because the siRNA bound to the RISC is not itself cut.
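The target search described above boils down to complementary base pairing: the guide strand pairs with a stretch of mRNA, which is the same as saying the mRNA contains the reverse complement of the guide. Here is a minimal Python sketch of that matching; the sequences are invented, and real RISC targeting (thermodynamics, seed regions, mismatches) is far more subtle.

```python
# Toy model of siRNA-guided target finding via reverse complementarity.
# Sequences are invented; real RISC targeting is much more complex.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    # The guide pairs antiparallel to its target, hence reverse + complement.
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def find_cleavage_site(guide_sirna: str, mrna: str) -> int:
    # RISC cleaves where the guide pairs perfectly; -1 means no target found.
    return mrna.find(reverse_complement(guide_sirna))

mrna = "AAGGCUAUGCCGUAUCAA"
guide = reverse_complement("AUGCCGUAU")  # guide against an internal stretch
print(find_cleavage_site(guide, mrna))   # index where the guide pairs
```

This is essentially how siRNA design tools start: pick a stretch of the target mRNA, take its reverse complement as the guide, then filter candidates against the rest of the transcriptome to avoid off-target matches.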

miRNA miRNA has a similar structure to siRNA but is shorter, and it is single-stranded. miRNA originates in the nucleus: as RNA polymerase transcribes certain genes, the RNA synthesised folds back on itself and forms a primary microRNA, the predecessor of the precursor miRNA. The primary microRNA has a stem-loop structure:

Figure 5: miRNA scaffold, stem loop RNA, and dsRNA structure. Created in BioRender by the writer.



After transcription, the primary miRNA is processed by the DGCR8/DROSHA protein complex to form a precursor microRNA: the complex cleaves the miRNA scaffold (Figure 5), turning the primary miRNA into a precursor miRNA with a stem-loop RNA structure. The precursor microRNA is exported from the nucleus into the cytoplasm, where it is bound and cleaved by a Dicer protein. The Dicer protein cleaves the turning tip of the stem-loop RNA to form a short dsRNA (double-stranded RNA) of about 22 nucleotides. This short dsRNA then goes through the same process as the formation of siRNA: it binds to RISC, which then binds to a complementary mRNA to stop translation by the ribosome. There are two ways miRNA can inhibit translation. It can either cleave the mRNA (like siRNA), allowing degradation of the mRNA, or it can remain bound to the mRNA, so that when a ribosome

attempts to translate it, its path is simply blocked. Because miRNA is shorter, a given miRNA can inhibit a wider variety of mRNA, since more mRNA strands contain a complementary stretch. In 1990, a paper was published on a genus of flowers, Petunia. [9] Scientists attempted to introduce a gene to produce a purple pigment, which was supposed to colour the flower purple. Surprisingly, however, the gene turned the flower white. This is because the cells were using the newly introduced genes to produce small RNAs complementary to the original pigment-producing mRNA. This inhibited production of the original pigment-producing proteins, so no pigment was made, turning the flower white.



Applications Gene Validation RNAi can be used for research in the life sciences. [10] We can use its inhibitory effects to validate gene functions and find out what certain expressed proteins do in the cell. For example, if we wanted to investigate the effect of the gene BRCA1, we would design a dsRNA based on the gene's sequence and introduce it into the cell. Dicer would cleave the RNA, and RISC would then inhibit any messenger RNA complementary to the siRNA produced (siRNA rather than miRNA, because miRNA can only be produced endogenously). This would inhibit the expression of BRCA1 at the translational level, and we could study the purpose of the gene by observing the changes to the cell. If, for example, the cell continued to function normally, we would know that this gene might not be essential to the cell.

Medicine Methylation Drugs Epigenetic traits can affect your health, because the genes of a fetus can be turned on and off during pregnancy. For example, the Dutch famine of 1944-1945, caused by the German blockade of food, affected 4.5 million people. Scientists discovered that people whose mothers were pregnant during this famine were more likely to develop conditions such as type 2 diabetes, heart disease and schizophrenia. These people had different levels of methylation at certain genes compared with their siblings (whose mothers were not pregnant during the famine). These differences in methylation could help explain why these people had an increased likelihood of certain diseases later in life: the methylation could be a mechanism to protect a child's growth after birth during a famine, as these diseases are linked with preserving energy and fat. However, some methylation can be bad for us. For example, the BRCA1 gene is vital in suppressing cancer, and any mutation or methylation that suppresses this gene may increase the risk of breast and many other types of cancer. There are drugs, such as azacitidine and decitabine, that can lower methylation levels. These are chemotherapy drugs used to treat cancer, which work by binding to DNA methyltransferases and inhibiting methylation, lowering the overall methylation level of the DNA. The demethylation might cause side effects, such as hair loss (a typical side effect of chemotherapy and radiotherapy). [11]



BIOLOGY

RNAi Drugs

Some viruses (retroviruses) infect an organism using reverse transcription, which integrates viral DNA into the host cell's genome. The host cell then uses that piece of viral genetic material to produce more viruses. We can exploit retroviruses in the same way to incorporate genes into a host organism that inhibit the expression of other, specific genes. For example, Alnylam Pharmaceuticals has been designing therapies that shut down the expression of two genes that promote liver cancer. This approach is comparatively safer than chemotherapy and could be used to develop treatments for rarer genetic disorders. [12]

Conclusion

Epigenetics is still a young branch of genetics that awaits exploration. It holds great potential for future biotechnology, from speeding up the discovery of gene functions through RNAi validation to new disease treatments. Further research and human clinical trials are still necessary, but I believe that in the future epigenetic methods will be widely used in treatment.


Bibliography

[1] Niemann, Heiner. "Epigenetic Reprogramming in Embryonic and Foetal Development upon Somatic Cell Nuclear Transfer Cloning." Reproduction, vol. 135, no. 2, 1 Feb. 2008, https://rep.bioscientifica.com/view/journals/rep/135/2/151.xml.
[2] Staropoli, Nicholas. "Epigenetics Around the Web: Dolly the Sheep and Aging. Epigenetics Is Not Genetics. Obstacles to Gene Editing." Genetic Literacy Project, 27 Feb. 2017, https://geneticliteracyproject.org/2017/02/27/epigenetics-around-web-dolly-sheep-aging-epigenetics-not-genetics-obstacles-gene-editing/.
[3] "Histone H2A Variants H2AX and H2AZ." ScienceDirect, https://www.sciencedirect.com/science/article/abs/pii/S0959437X02002824. Accessed 1 Nov. 2021.
[4] "DNA Methylation and Its Basic Function." Neuropsychopharmacology, Springer Nature, https://www.nature.com/articles/npp2012112. Accessed 1 Nov. 2021.
[5] Phillips, Theresa. "The Role of Methylation in Gene Expression." Scitable, Nature Education, https://www.nature.com/scitable/topicpage/the-role-of-methylation-in-gene-expression-1070/. Accessed 1 Nov. 2021.
[6] Saxonov, S., P. Berg and D. L. Brutlag. "A Genome-Wide Analysis of CpG Dinucleotides in the Human Genome Distinguishes Two Distinct Classes of Promoters." Proc. Natl. Acad. Sci. U.S.A., vol. 103, no. 5, 2006, pp. 1412-7. doi:10.1073/pnas.0510310103.
[7] Gupta, Swati, et al. "Histone Methylation Regulates Memory Formation." The Journal of Neuroscience, vol. 30, no. 10, 10 Mar. 2010, pp. 3589-3599. doi:10.1523/JNEUROSCI.3732-09.2010; Greer, Eric L., and Yang Shi. "Histone Methylation: A Dynamic Mark in Health, Disease and Inheritance." Nature Reviews Genetics, vol. 13, no. 5, 2012, pp. 343-57.
[8] "RNA Interference: The Molecular Immune System." Journal of Molecular Histology, https://link.springer.com/article/10.1007%2Fs10735-004-2192-8. Accessed 1 Nov. 2021.
[9] "A New Petunia Flower Colour Generated by Transformation of a Mutant with a Maize Gene." Nature, https://www.nature.com/articles/330677a0. Accessed 1 Nov. 2021.
[10] "RNA Interference: Concept to Reality in Crop Improvement." Planta, https://link.springer.com/article/10.1007%2Fs00425-013-2019-5. Accessed 1 Nov. 2021.
[11] "DNA Methylation as a Therapeutic Target in Cancer." Clinical Cancer Research, https://clincancerres.aacrjournals.org/content/13/6/1634.short. Accessed 1 Nov. 2021.
[12] Bumcrot, David, et al. "RNAi Therapeutics: A Potential New Class of Pharmaceutical Drugs." Nature Chemical Biology, no. 12, Dec. 2006, pp. 711-19. doi:10.1038/nchembio839.



Illustration by Mya Ip

mRNA Vaccine: A Possible Cure for Cancer?
Kevin Wu



Introduction

More than 600 million mRNA COVID-19 vaccine doses had been administered by 8 October 2021. This was the first time mRNA vaccines were used on such a large scale, a major milestone for mRNA research, and it opened up prospects for mRNA vaccines in other fields of medicine, such as the treatment of AIDS and cancer. But have you ever wondered what exactly mRNA is, and how mRNA vaccines differ from traditional vaccines?

What is mRNA?

Messenger ribonucleic acid, commonly known as mRNA, is a single-stranded RNA molecule produced during transcription and used in translation for protein synthesis. During transcription, the enzyme helicase unwinds the two strands of DNA by breaking the weak hydrogen bonds between complementary base pairs. The exposed gene allows RNA polymerase to move along the DNA template strand, reading the bases one by one to build an mRNA strand that is a complementary copy of the template. The transcript then undergoes splicing, in which introns are snipped out by spliceosomes, leaving only the exons and producing a smaller, more mobile mature mRNA. Unlike DNA, this mRNA is free to leave the nucleus through a pore in the nuclear envelope, which controls the passage of molecules between the nucleus and the cytoplasm. In the cytoplasm, the mRNA attaches to a ribosome for translation: each tRNA binds a specific amino acid according to its anticodon triplet and attaches to the complementary codon on the mRNA. As successive tRNAs line up along the mRNA strand, their amino acids are joined by peptide bonds to form a protein.
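The flow described above, from template DNA to mRNA to protein, can be illustrated with a toy Python sketch. The base-pairing rules are real, but the codon table here is a deliberately tiny subset of the genetic code and the sequences are invented for the example.

```python
# A toy sketch of the central dogma steps described above: transcribe a DNA
# template strand into mRNA, then translate codons into amino acids.

DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

# Partial codon table (one-letter amino acid codes); '*' marks a stop codon.
CODONS = {"AUG": "M", "GCU": "A", "UCA": "S", "UAA": "*"}

def transcribe(template_strand: str) -> str:
    """RNA polymerase reads the template strand and builds complementary mRNA."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand)

def translate(mrna: str) -> str:
    """A ribosome reads codons (base triplets) until it reaches a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODONS[mrna[i:i + 3]]
        if amino == "*":          # stop codon: release the peptide chain
            break
        protein.append(amino)
    return "".join(protein)

mrna = transcribe("TACCGAAGTATT")
print(mrna)             # AUGGCUUCAUAA
print(translate(mrna))  # MAS
```

Note that this skips the splicing step entirely: a real pre-mRNA would have its introns removed before translation.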


The mRNA Vaccine

Unlike traditional vaccines, which use dead or inactivated pathogens to trigger an immune response and the production of antibodies, mRNA vaccines use the genetic information of a virus to code for a viral protein. These genetic instructions direct the production of small parts of the virus's protein through protein synthesis in the muscle cells at the injection site. An mRNA vaccine that mimics part of the SARS-CoV-2 genome gives our immune system a preview of what the virus looks like, triggering a primary immune response so that, upon secondary exposure, the immune system can produce antibodies to neutralise and attack the virus more quickly.



Illustration by Lutina Kwok



Problems faced

mRNA technology faced three main challenges: stability, reactivity, and delivery into the cell. For a nucleic acid to be translated and express the pathogen's antigen, it must first enter host cells. However, mRNA was found to be too fragile to survive in the body long enough for spike proteins to form and trigger an immune response, and when early researchers introduced mRNA into cells, the immune system recognised it as foreign and destroyed it. These problems were solved by Drs. Karikó and Weissman. Replacing the nucleoside uridine with pseudouridine, which can form an additional hydrogen bond, made the mRNA more stable and far less likely to set off the body's immune sensors. Packaging the mRNA in a lipid cover (a liposome) protected it from degradation by ubiquitous RNases and helped it avoid the attention of white blood cells.

Problems remain, however. Because mRNA is unstable, the vaccine requires a cold chain for storage, so low-income countries that cannot store doses at such low temperatures are unable to provide the vaccine to their citizens. And because mRNA vaccines are still fairly new, scientists remain unsure of their long-term safety and efficacy.



Why mRNA Vaccines?

mRNA vaccines have numerous advantages over traditional vaccines. Infectious diseases like influenza present several strains with different mutations every year, so scientists must predict which strain to target about six months in advance, because designing and manufacturing traditional vaccines is very time-consuming. As a result, the efficacy of the influenza vaccine is limited to around 50-70%. mRNA vaccines, by contrast, can be manufactured quickly and in large quantities to target a range of virus subtypes. They are also very flexible: if the antigens of the viral proteins encoded by the vaccine's mRNA need to change, the change can be made faster. This leaves more time for scientists to detect new strains and mutations of influenza, improving efficacy. Another advantage of mRNA vaccines is their potential to treat and cure diseases we currently cannot deal with. Because HIV mutates rapidly and eludes the immune system, scientists have so far failed to produce a vaccine against it. But that may no longer be the case: according to IAVI, an mRNA-based HIV vaccine candidate induced the intended immune response in 97% of participants in early clinical trials. The same goes for Zika: the FDA recently granted Moderna Fast Track designation for its investigational mRNA Zika vaccine, a sign that diseases once untreatable may cease to be so.

Cancer Treatments

According to the CDC, there were 599,601 cancer deaths in the US alone in 2019. Traditional cancer treatments mainly involve surgery, chemotherapy and radiotherapy. Yet despite this choice of treatments, surgery is restricted to certain types and stages of cancer, while chemotherapy and radiotherapy can often only shrink a cancer and slow its growth rather than cure it completely. Furthermore, these treatments can cause detrimental side effects such as intense aches, nausea, hair loss and fatigue, diminishing the quality of life of cancer patients. Despite many advances in technology and new types of treatment, many still see cancer as a disease that equates to death.


A Possible Cure for Cancer?



Since every cancer mutation differs significantly from every other, each cancer cell carries a unique protein (antigen) on its surface. This means traditional vaccines, which take a long time to design and manufacture, will not be equally effective for every patient, and it is precisely here that mRNA vaccines have the advantage: they are much easier and quicker to manufacture. Scientists therefore believe mRNA therapies can be personalised and programmed specifically for a patient's own cancer cells, stimulating a primary immune response and the production of antibodies. Doctors and scientists hope this will prepare the immune system to recognise specific cancer cells and kill them. Many biotechnology companies, including Moderna and BioNTech, have begun research on mRNA vaccines for cancer. Moderna is currently developing mRNA therapies that train the immune system to recognise cancerous proteins such as KRAS, which is involved in more than 20% of cancers. Phase I results have already been positive, showing that mRNA treatments combined with anticancer therapies were considerably more effective against head and neck cancer than the standard treatment. However, mRNA vaccines may not be as effective for all types of cancer: Moderna's Phase I trial in colorectal cancer produced negative results, disappointing news for many, and a sign that mRNA vaccines might not be the panacea we hoped for.

Conclusion

In conclusion, we may not yet be able to treat every disease with mRNA technology, but we should not be discouraged. mRNA technology is still a recent innovation with great potential for improvement and refinement. Therefore, I am certain that mRNA technology will progressively become safer and more effective, and will prove to be one of the most significant innovations in the field of medicine.



Bibliography


Media, J. "mRNA Technology and Its Significant Potential to Improve Human Health." St. Joseph Media, 2021.

"Disease Medicine's New Era: The Use of mRNA Vaccines to Tackle Covid-19 Could Mark the Start of a New Era of Medicine." Geographical, vol. 93, no. 6, 2021.

Casadevall, Arturo. "The mRNA Vaccine Revolution Is the Dividend from Decades of Basic Science Research." American Society for Clinical Investigation, 2021.

Ridley, Matt. "Hot Shots: Why mRNA Vaccines Would Revolutionise Medicine." The Spectator, 2020.

"Moderna Receives FDA Fast Track Designation for Zika Vaccine mRNA-1893." Entertainment Close-up, 29 Aug. 2019.

"Scientists Tout mRNA Vaccine Technology to Cure Cancer and HIV." Nigeria Communications Week, 30 Jun. 2021.

Bailey, R. "Beyond Covid: More Uses for New mRNA Technology." Reason, vol. 53, no. 1, 2021.



Physics and Technology


PHYSICS AND TECHNOLOGY

Biomimicry: The Predestination of Innovation
Emily Tse

It is undeniable that the human race has thrived since the dawn of the 21st century, with crucial inventions such as 3D printing, augmented reality and capsule endoscopy. These incredible technological advancements impress us with their intricate, judicious design, leading many to envision a bright future for humanity: a world of metropolitan cities where people travel through magnetic portals, where AI-enabled humanoid robots make up the workforce of the tertiary sector, and where all energy comes solely from renewable sources. Although these prognostications may well become the reality of our distant future, they do not eradicate the dire global problems, such as climate change, that stem from humankind's destructive degradation of the environment. If solutions are not found soon, there may not even be a future to look forward to. Yet the key to that future can be found, astonishingly, on the beach at the Blue Lagoon, in the Amazon Rainforest and in the Sahara Desert. Mother Nature is the answer. There is a false notion that every ingenious creation must be artificial or man-made, when in reality such ingenuity is intrinsic to the surface of our planet, which many dismiss as primitive and forgotten. This is where biomimicry comes in.

What is biomimicry?

Biomimicry (or biomimetics) is the technology-oriented practice of examining nature, its designs, systems and processes, and emulating or taking inspiration from them to solve human problems in a regenerative way. The term was coined by Otto Schmitt, an American academic and inventor, and later popularised by Janine Benyus in her book Biomimicry: Innovation Inspired by Nature (1997). The word biomimetics derives from the Greek 'bio', meaning life, and 'mimesis', meaning to imitate. [1]

Illustration by Estelle Chan



The three key components of biomimicry


There are three important elements to consider whenever we translate nature's methods into innovations. Biomimicry materialises when all three components are applied and intertwined.

1. Emulate

Emulation represents the "mimicry" in "biomimicry": nature's forms, processes, principles, patterns and ecosystems are reproduced in more sustainable designs through scientific research. This is the physical act of biomimicry (innovation and design) and is often what people mean by "doing biomimicry".

2. Ethos

The Greek word ethos means "character" and was first used in this sense by Aristotle, who referred to the balance between passion and caution in a person's character. In the context of biomimicry, it is the "philosophy of understanding how life works and creating designs that continuously support and create conditions conducive to life" [1]. Ethos forms the quintessence of our ethics and intentions when practising biomimicry; it symbolises our respect, responsibility and gratitude towards the species that live among us and towards planet Earth. This is where biomimicry intersects with sustainable design, as the ethos element builds on the emulation of nature that facilitates innovation.

3. (Re)Connect

This element addresses a prevalent shortcoming of society: the habit of disparaging the beauty of nature. It is common to perceive humans and nature as two "separate" entities, but (re)connecting teaches us that people and the environment are deeply interconnected. Only when our mindset changes, letting biomimicry and biophilia converge through observing and experiencing nature, can we build the stronger ethos needed to emulate biological strategies and rectify international problems.

Nature as a mentor

"Biomimicry is about valuing nature for what we can learn, not what we can extract, harvest, or domesticate. In the process, we learn about ourselves, our purpose, and our connection to each other and our home." (Source: The Biomimicry Institute: What is biomimicry? [1])

As the Biomimicry Institute highlights, biomimicry allows us finally to find our place in the giant ecosystem that is Earth. As we become nature's apprentices, we can cultivate the next generation of innovators who will guide our society to its glory. The aim of biomimicry is to produce products, processes and policies that resolve our greatest challenges in a sustainable, empathetic way, one that serves all living organisms on this planet. Through practising biomimicry, we are gifted the advantages of nature's wisdom.

One advantage biomimicry offers is relief, both from the stress we experience and from the stress we impose on our planet as we continue to expand our civilisation, stress that drives the prominent global issue of climate change. Current media coverage already shows its impact: climate change is not an imminent catastrophe waiting to happen but a present circumstance that many are experiencing now. The prolonged Australian bushfires that marked the start of 2020 resulted from ongoing drought, heatwaves, high winds and low relative humidity, all exacerbated by climate change, while residents of Texas faced unprecedented weather with the arrival of Winter Storm Uri. The irreversibility of the climate crisis and its devastating effects on ecosystems across the world are causing people to lose hope. Biomimicry, however, gives us hope: our Earth is home to the solutions for our greatest challenges, accessible to humans and substantiated by the countless species still alive today. If we allow nature to become our mentor, we can experience the healing effects of connection to Mother Nature while uncovering much-needed relief for these problems.

Biomimicry also promotes altruism, encouraging us to design on behalf of humanity and the natural world. Circularity, sustainability and regeneration are characteristic of biomimetic designs, whose creations can help restore vegetation, water, air and soil rather than degrading them at our expense. Many "great" man-made inventions use brute force to extract and exploit the materials nature provides, mining fossil fuels, metals and rock for commercial purposes while releasing substantial amounts of toxic chemicals in the process. As the world population continues to grow rapidly, our need for land, resources and water rises dangerously. An estimated 55 billion tonnes of fossil fuels, minerals, metals and biomass are extracted from the ground annually, and 80% of the world's forests have been lost, with the remainder disappearing at an alarming 375 km2 per day and destroying the homes of more than 1,000 animal species. It is therefore imperative that we follow nature's method of creating conditions conducive to life: using structure to change function and transitioning to passive forms of energy.

Furthermore, the urgent problems our planet faces must be dealt with quickly, because global warming, climate change and water scarcity will continue to progress whether or not humanity is prepared to manage them. The quaternary sector of research and development in high-income countries such as the United States, Japan and Germany is pioneering solutions to meet the UN's 17 Sustainable Development Goals: affordable and clean energy, sustainable cities and communities, responsible consumption and production, climate action and more. Yet despite high global spending, equivalent to 13.2 trillion HKD, with 80% of it accounted for by just 10 countries, R&D cycles are slow owing to the complexity and time-consuming nature of the process. Instead, we should observe and learn from the biological blueprints that have succeeded for millennia, catalysing trailblazing innovations. There is no need to devise or reinvent pre-existing strategies when we can simply adopt them from nature.


Illustration by Bethany Kerr

On top of all this, the most important benefit is that biomimetics offers us a new perspective on the natural world. In the last decade, urbanisation has accelerated while rural populations have declined: rural-urban migration meant that 55% of the world's population lived in urban areas in 2018, a share expected to rise to 68% by 2050. This gradual shift of residence from countryside to city suggests that more and more people are losing their connection with Mother Nature as they embark on a metropolitan lifestyle, forgetting their "real" home. Biomimicry has given innovators hope of achieving sustainable products that are both efficient and effective. It not only teaches us to appreciate and reconnect with the natural world by experiencing different viewpoints in life, but also demonstrates the vast wisdom nature holds, promoting the conservation of ecosystems and their inhabitants.



Applications of biomimicry

1. Engineering

One of the most monumental Japanese inventions is the 500 Series Shinkansen bullet train, one of the world's fastest trains with a top speed of 320 km/h. The Shinkansen has never had a fatal crash or derailment in its 55-year history, making it among the safest, most convenient transport in Japan, and the 500 Series is famous for its streamlined nose and other structural adaptations. Before this success, however, the team of engineers at the West Japan Railway Company (JR West) faced a problem: noise. [2] Because of the dense residential settlement near the track, in 1975 the Japan Environment Agency set very strict noise standards for the Shinkansen Line: maxima of 70 dB(A) and 75 dB(A) for two zones. Although the engineers had managed to design a train approaching 350 km/h (later named the WIN350), the positive correlation between speed and noise became the obstacle. There were three main sources of noise [4]:

• At higher speeds, ground vibration was generated through the train and its supporting structures into the ground (noise was generated at twice the velocity of the train).
• The train body and the pantographs connecting the train to the overhead catenary wires: at velocities above 200 km/h, aerodynamic noise scaled with the 6th to 8th power of the train's velocity.
• When the train passed through a tunnel at high speed, a sonic boom was created at the exit, a prominent issue because half of the 554 km Sanyo Shinkansen Line runs through tunnels.

The pantograph noise arose because air passing over the linkage and structural components of the device, such as its struts, formed Karman vortices (a Karman vortex street). This aerodynamic turbulence caused most of the noise. The phenomenon occurs at all scales, from a train's aerial to patterns found on a beach, because it arises whenever a body disturbs the flow of a fluid: the flow separates into alternate, opposite eddies that swirl along the line of interference and swing back and forth as the forces alternately dominate one another (as shown in Figure 1).
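The 6th-to-8th-power scaling quoted above explains why the pantograph dominated the noise problem at speed. A quick back-of-envelope sketch in Python, using the standard decibel definition (ten times the base-10 logarithm of a power ratio); the speeds chosen are illustrative, not JR West's actual test figures.

```python
# Quick arithmetic on the scaling quoted above: if aerodynamic noise power
# grows with the 6th-8th power of velocity, a modest speed increase is costly.
import math

def noise_increase_db(v_old_kmh: float, v_new_kmh: float, exponent: float) -> float:
    """Extra noise in dB when speed rises, assuming power ~ v**exponent."""
    power_ratio = (v_new_kmh / v_old_kmh) ** exponent
    return 10 * math.log10(power_ratio)

# Raising speed from 220 km/h to 300 km/h:
for n in (6, 8):
    print(f"v^{n}: +{noise_increase_db(220, 300, n):.1f} dB")
    # v^6: +8.1 dB ;  v^8: +10.8 dB
```

A roughly 36% speed increase thus adds around 8-11 dB of aerodynamic noise, which is why reshaping the pantograph, rather than merely slowing down, was the only way to meet the 70 dB(A) limit.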

Figure 1 - A visual diagram of the Karman vortex in the tunnel that created a high level of noise (Source: zq02 summer 2021 edition by Tom McKeag, image courtesy of Nakatsu, Eiji)



At first, the JR West team approached the problem by decreasing the number of pantographs from eight to two, creating wind-shielding covers and designing a new pantograph shape to reduce obstruction of the airflow and thus the formation of large vortices. However, the renewed pantograph did not resolve the issue. The new model was significantly heavier, which not only added weight to the train, increased energy demands and loads on the track, but also aggravated the ground vibration through the train body, along with the extra pressure wave generated on entering a tunnel. It was then that Mr Nakatsu, a keen birdwatcher, noticed something crucial about bird flight during a Wild Bird Society lecture by Mr Seichi Yakima on the use of bird physiology and anatomy in aircraft design. Owls are nocturnal predators, and despite flying at up to 64 km/h they possess several noise-dampening features in their feathers that give them 'silent flight'. This is enabled by flutings, or fimbriae: comb-like serrations along the leading edge of the primary wing feathers, which break the incoming air over the wing into micro-turbulence and reduce the level of sound emitted. Mr Nakatsu and his team analysed the wings of an owl specimen and emulated their curvature and serrations in the "wing-like" component of a new prototype (shown in blue in Figure 2). In 1994, in place of the previous double-scissors structure, they produced two streamlined elements: the pantograph slider and a vertical aluminium pillar. [4]

Figure 2 - A design comparison between the current pantograph in 1975 and the newly invented pantograph emulating the wing of an owl (Source: zq02 summer 2021 edition by Tom McKeag, photo courtesy of Nakatsu, Eiji)

Next, they had to tackle the sonic boom problem. Each time the train entered a tunnel at high velocity, low-frequency (under 20 Hz) atmospheric pressure waves were generated and propagated along the tunnel, like the compression ahead of a plunger pushed into a syringe: essentially, the train was forcing air out of the end of the tunnel. [8] Although the pressure discharged at the tunnel exit was only about 0.001 of atmospheric pressure, owing to the reflection of compression waves, the resulting pressure variation, transmitted as low-frequency waves, produced a loud sonic boom and aerodynamic vibrations. This is known as a tunnel micro-pressure wave (as seen in Figure 3). Residents within a 400-metre radius of a tunnel exit were affected, which pushed the engineers to find a solution. The tunnel problem was more complex than the pantograph issue because it depended on both the geometry of the tunnel and the speed of the train, with the micro-pressure wave growing steeply (roughly with the cube of velocity) and in proportion to the ratio of the train's cross-sectional area to the tunnel's. Since it would have been extremely difficult and expensive to rebuild the 64 m2 tunnels, the approach was to reduce the cross-sectional area of the train and redesign its nose to prevent pressure waves from building up.
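Taking the approximate relationships described here, micro-pressure growing steeply with speed and in proportion to the area ratio, the trade-off the engineers faced can be sketched numerically. All figures below are illustrative assumptions, not JR West's actual design data, and the cube-law form is the commonly cited approximation rather than an exact model.

```python
# Rough sketch of the trade-off described above, under the stated (approximate)
# relationship: tunnel micro-pressure wave ~ (v ** 3) * (A_train / A_tunnel).
# Numbers are invented for illustration.

def micro_pressure(v_kmh: float, train_area_m2: float, tunnel_area_m2: float) -> float:
    """Relative micro-pressure wave strength (arbitrary units)."""
    return (v_kmh ** 3) * (train_area_m2 / tunnel_area_m2)

baseline = micro_pressure(300, 12.0, 64.0)  # original cross-section (assumed)
slimmer = micro_pressure(300, 10.8, 64.0)   # 10% smaller cross-section

# A 10% smaller cross-section cuts the wave by 10% at the same speed:
print(round(slimmer / baseline, 2))  # 0.9
```

The same function shows why nose shape mattered so much: at a fixed area ratio, even a small speed increase raises the wave with the cube of velocity, so the slimmer, longer kingfisher-style nose had to smooth the pressure build-up rather than simply shrink the train.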


Once again, Mr Nakatsu was able to look to nature for inspiration after a junior engineer pointed out that the test train seemed to "shrink" when entering the tunnel, suggesting a sudden change in air resistance from the open air to the closed tunnel.

Figure 3 - A visual diagram of the tunnel micro-pressure wave generation mechanism which created the sonic boom (Source: zq02 summer 2021 edition by Tom McKeag, image courtesy of Nakatsu Eiji)

Mr Nakatsu then realised that this was the same situation a kingfisher faces when moving quickly from air (a low-resistance medium) into water (a high-resistance medium, around 800 times denser) without creating a splash, so as to keep its prey in sight and avoid startling the fish. The kingfisher achieves this by pushing water ahead of its bill in a way that lets the water flow smoothly past the beak. Since the train faced a similar challenge, moving from low-drag open air to high-drag air in the tunnel, the team decided to recreate the forefront of the Shinkansen based on the kingfisher's beak. The bill's shape is almost ideal for the purpose: streamlined, steadily increasing in diameter from tip to head in a rotational parabolic form, the same shape formed in the diamond interstices of four perfect circles intersecting one another. After obtaining a natural specimen of a kingfisher and analysing its dimensions and materials, the team made numerous bullets shaped like different train-nose models and fired them through a pipe scaled to a model tunnel; fittingly, the bullet in the form of the kingfisher's beak generated the lowest pressure waves. Finally, on 22 March 1997, JR West's 500 Series Shinkansen entered commercial service, travelling 10% faster while remaining below the 70 dB noise limit. The reduced air resistance also brought secondary benefits: 15% lower energy consumption and lower costs.

Figure 4 - The difference in drag between two shapes: shape A is similar to the original bullet train forefront, while shape B resembles the beak of a kingfisher (Source: Biomimicry with 3D printing of shapes and surfaces by Max Murphy)


2. Healthcare

Although hygiene and sanitation are strongly emphasised in hospitals, healthcare-associated infections (HAIs), infections acquired while receiving care for other conditions, are a rising problem. Examples include bloodstream infections (BSI), pneumonia, urinary tract infections (UTI) and surgical site infections (SSI), which can lead to sepsis. According to the World Health Organisation (WHO), of every 100 hospitalised patients at any given time, 7 in developed and 10 in developing countries will acquire at least one healthcare-associated infection, driving significant mortality and substantial financial losses for healthcare systems. Even in high-income countries, approximately 30% of patients in intensive care units (ICUs) are affected by at least one HAI, despite the extreme measures taken to keep these areas sterile for critically ill patients. This remained a discernible medical vexation until a solution was discovered almost by accident, leading to the founding of Sharklet Technologies in 2007.

In 2002, Dr Anthony Brennan, the founder of Sharklet, was a materials science and engineering professor at the University of Florida. His path to the new material began with a visit to the US naval base at Pearl Harbor as part of navy-sponsored research. The US Office of Naval Research had asked Dr Brennan to develop a new strategy to discourage the growth of barnacles on ship hulls, to end the use of toxic antifouling paints and to cut costs related to dry dock and drag. He noticed that Galapagos sharks, despite moving slowly through the water, were not covered in algae, barnacles or other fouling. Using scanning electron microscopy, Dr Brennan observed and examined the dermal denticles of sharkskin, confirming his theory that they are arranged in a specific diamond pattern with microscopic riblets that minimises drag. The first Sharklet material was made after Dr Brennan's evaluation showed that a surface mimicking the sharkskin's ability to inhibit the growth of microorganisms was indeed antibacterial. It has since been rolled out commercially as a covering for hospital surfaces such as countertops, bathroom sinks and computer monitors, reducing the growth of microbes by an astonishing 80% and providing a notable advantage against drug-resistant bacteria such as MRSA and C. diff.

Sharklet's surface is composed of millions of microscopic features organised in a distinct, uniform diamond pattern (as shown in Figure 5), whose width-to-height ratio Dr Brennan measured: the features are 3 microns tall and 2 microns wide. This matched his mathematical model for a roughness that would discourage microorganisms from attaching, colonising and forming biofilms, without the use of any toxic additives, chemicals, antibiotics or antimicrobials. Depending on the application, Sharklet may have positive patterns protruding from the surface, inverse patterns recessed into it, or altered pattern dimensions. By a principle of nature, organisms seek the path of least energy resistance: pathogens take root singly or in small groups with the intent of establishing large colonies. Sharklet takes advantage of this. Because forming biofilms on its surface would demand too much energy, microorganisms move on in search of another place to expand their population, and since they cannot signal one another, they eventually die, as they are inherently programmed to (a process referred to as programmed cell death, or PCD).


All healthcare facilities try to defend against dangerous infections through personal hygiene (e.g. regular hand-washing), antibiotics and disinfecting chemicals. These common precautions are not always effective or beneficial: overuse of antibiotics breeds new generations of resistant microorganisms. Moreover, the chemicals used not only carry high economic costs but are also environmentally unfriendly to manufacture and can harm humans, causing skin irritation and respiratory problems. Rather than spending tens of millions of dollars on drug research to combat HAIs, the answer lies in a micropattern created by evolution: by emulating this structural design in the practical Sharklet material, millions of lives can be saved.

Figure 5 - Side-by-side images of the microscopic sharkskin denticles and the micropattern adapted from them by Sharklet Technologies. (Source: Vox video "The world is poorly designed. But copying nature helps")


Evaluation

In conclusion, biomimicry is highly effective in enabling humans to create innovative technologies, and to do so in an inherently sustainable way. Simply by introducing a biologist or biomimetics expert into a team of engineers, industrial and domestic life can be made easier through the imitation of nature's forms, processes and ecosystems. Because the idea behind biomimicry is to use the biological blueprint provided by Mother Nature without harming any organisms or causing environmental degradation, Janine Benyus frequently emphasised its eco-friendly nature. Unfortunately, this is not always the case. Some practitioners of biomimicry in industry prioritise the production of novel technologies and maximum profit over sustainable development. For example, recent biomimetic research projects have included undetectable surveillance cameras based on insects' compound eyes, fixed-wing, wind-propelled naval drones based on the flight of the albatross, and industrial nanites that emulate DNA. Projects such as these rarely show commitment to eco-friendly initiatives or pronounced sustainability credentials, and they may even add to our ecological footprint and exacerbate the risk of environmental degradation. Given that large corporations in the defence industry are investing in such projects, it can be inferred that biomimicry is being exploited as a profitable way to take advantage of natural strategies, capable of causing environmental harm instead of promoting eco-friendliness.

However, biomimicry remains an intellectual movement in design and innovation, offering humanity a way to transform. Inventions by companies such as Calera, which manufactures building blocks using CO2, or Aquaporin, which created a filtering unit that desalinates seawater using less electricity, have proven that biomimicry has the potential to resolve many of our current global crises, including water shortage and global warming, in a sustainable manner. This can be encouraged by spreading ecocentric traditions and policies and by introducing 'ecomimicry': the emulation of nature for eco-friendly design. Ecomimicry serves as a categorisation system that lets us distinguish socially or environmentally insensitive ways of replicating nature from practices that aim to be environmentally responsible and socially just, such as encouraging decentralisation and localism, democratic decision-making over technological change, and distributing power rather than concentrating it.



Conclusion

As Janine Benyus, the co-founder of the Biomimicry Institute, said: "When we look at what is truly sustainable, the only real model that has worked over long periods of time is the natural world." The Earth has existed for 4.543 billion years, and life - now more than 30 million species - began around 3.8 billion years ago. In stark contrast, humans occupy only a tiny part of the history of planetary life: a mere 0.00789%. We are the latecomers, strangers in our own home, and yet we are trying to reshape this ancient community of organisms in attempts to refine our society and surpass our past, wreaking havoc and degradation on the natural world in the process. Instead of being invaders of the Earth, we should respect it by making products, systems and cities that blend into the natural world, modelling their forms, processes and ecosystems on it, for animals, plants and microbes are the consummate engineers of our world. Instead of learning only from the designs of other humans, we should become apprentices of Mother Nature. Sustainable solutions to our globally intractable problems are intrinsic to our planet; the innovations of our future have already been shaped by millennia of evolution. All we have to do is go outside, observe, analyse and learn - to become the pioneers of our generation.


Bibliography

[1] Biomimicry Institute. "What is biomimicry?" biomimicry.org (n.d.). https://biomimicry.org/what-is-biomimicry/
[2] Benyus, J. TED talk "Biomimicry in action." YouTube (2009, August 7). https://www.youtube.com/watch?v=k_GFq12w5WU&t=813s
[3] Haubursin, C., Mars, R., Kohlstedt, K. "The world is poorly designed. But copying nature helps." Vox, YouTube (2017, November 9). https://www.youtube.com/watch?v=iMtXqTmfta0&t=120s
[4] McKeag, T. "Auspicious Forms: Designing the Sanyo Shinkansen 500-Series Bullet Train." Medium (2020, June 2). https://medium.com/@cmcdonaldc/auspicious-forms-designing-the-sanyoshinkansen-500-series-bullet-train-d91d08945bb6
[5] Sharklet. "The Technology of Sharklet." sharklet.com (n.d.). https://www.sharklet.com/our-technology/technology-overview/
[6] Sharklet. "Inspired by Nature - The Discovery of Sharklet." sharklet.com (n.d.). https://www.sharklet.com/our-technology/sharklet-discovery/
[7] WHO. "Healthcare-associated infections FACT SHEET." who.int (n.d.). https://www.who.int/gpsc/country_work/gpsc_ccisc_fact_sheet_en.pdf
[8] Howley, A. "High-Speed Train inspired by Kingfisher." AskNature (n.d.). https://asknature.org/idea/shinkansen-train/
[9] Marshall, A., Lozeva, S. "Questioning the theory and practice of biomimicry" (2009, August). https://www.researchgate.net/publication/235990489_Questioning_the_theory_and_practice_of_biomimicry



Brain Computer Interface: The Past, Present and Future

Daniel Kan


1. Introduction

A Brain Computer Interface (BCI) is a direct communication pathway between the brain and an external device, allowing the user to control that device purely with their thoughts. In this article, I will explore the history of BCIs, how they work, their possible applications, their challenges and their future prospects.

2. The past

2.1 The discovery of EEG and the invention of BCI

Electroencephalography (EEG), a non-invasive method used to detect electrical brain signals, was first demonstrated in July 1924 when Hans Berger recorded human brain waves by attaching electrodes to a patient's scalp. He published his first paper on EEG in 1929, studying brain wave changes during mental activity and sleep[1]. The term Brain Computer Interface was first used by Professor Jacques Vidal in 1973, where he described a "man-computer dialogue"[2].

2.2 Early uses of BCI

In 1998 Philip Kennedy implanted the first BCI into a human being, which allowed his patient (who was completely paralysed) to control a cursor on a computer screen by thinking. The electrode contained glass cones coated with neurotrophic proteins, which allowed it to bind with the extracellular matrix of the cerebral cortex[3]. This allowed it to detect signals without wires passing through the skin[4].

In 2004 Matt Nagle became the first human to control an artificial hand using Cyberkinetics' BrainGate. The BrainGate implant consists of 96 microelectrodes implanted into Nagle's motor cortex, which detect the electrical impulses associated with the planning of movements. The signals are then transmitted to computers to be translated. This allowed him to control the prosthetic hand and a cursor on a computer screen.[5]



3. The present

3.1 How do BCIs work?

The system of a Brain Computer Interface is split into four stages: signal acquisition, signal preprocessing, feature extraction, and classification.
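As a rough illustration of how the four stages chain together, here is a minimal sketch; every function body below is a toy placeholder invented for this example, not part of any real BCI system.

```python
# Minimal sketch of the four-stage BCI pipeline described above.
# All function bodies are toy placeholders, not a real implementation.

def acquire_signal():
    # Stage 1: signal acquisition (here, a fake 8-sample EEG channel).
    return [0.1, 0.4, -0.2, 0.9, 0.3, -0.5, 0.2, 0.0]

def preprocess(signal):
    # Stage 2: preprocessing, e.g. removing a constant offset (DC drift).
    mean = sum(signal) / len(signal)
    return [s - mean for s in signal]

def extract_features(signal):
    # Stage 3: feature extraction, e.g. mean power of the segment.
    return [sum(s * s for s in signal) / len(signal)]

def classify(features):
    # Stage 4: classification with a toy threshold rule.
    return "intent_A" if features[0] > 0.1 else "intent_B"

command = classify(extract_features(preprocess(acquire_signal())))
print(command)
```

The real systems described below differ enormously in each stage, but they all follow this same acquire-preprocess-extract-classify flow.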

Figure 1. Flow chart of the stages of a BCI

During signal acquisition, signals in the brain are recorded. There are three ways to do this: non-invasively, semi-invasively and invasively.

Non-invasive BCIs: These are placed outside the skull (usually on the scalp). They have the weakest signal strength because of the limited electrical conductivity of the skull, and the electrodes cannot be placed directly over the desired part of the brain. However, they are considered very safe compared with the other types.

EEG

MEG

One of the most popular non-invasive techniques for signal acquisition is electroencephalography (EEG), in which electrodes are placed on the surface of the scalp. These detect the electrical activity generated by the firing of neurons in the brain, digitise it and send it to an amplifier. EEGs tend to have good temporal resolution but poor spatial resolution. Different modalities of EEG systems detect different types of signals, including Slow Cortical Potentials, Sensorimotor Rhythms, and P300 potentials.

Other techniques include magnetoencephalography (MEG), which detects the magnetic fields produced by the brain. This has a higher spatial resolution than EEG.

Figure 2. A person wearing an EEG headset



fMRI

NIRS

Functional magnetic resonance imaging (fMRI) is another method, which detects changes in blood oxygen levels in the brain. When neuronal activity increases, the demand for oxygen rises, which increases blood flow to that area. The difference in magnetic properties between oxygenated and deoxygenated haemoglobin is then detected. This is known as blood oxygenation level dependent (BOLD) imaging. It has a high spatial resolution but a lower temporal resolution.[7]

Near-infrared spectroscopy (NIRS) also detects changes in blood oxygen levels in the brain. It works by emitting light with wavelengths of 600-1000 nm into the brain tissue, which is then partly absorbed and partly scattered by chromophores in the blood (mainly haemoglobin). A detector receives the reflected light, and from this the concentration of chromophores in the tissue can be calculated using the Beer-Lambert law[12].
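To make the Beer-Lambert step concrete, here is a small sketch of the calculation NIRS relies on. Every number below (intensities, absorptivity, separation, pathlength factor) is invented purely for illustration, not a physiological value.

```python
import math

# The (modified) Beer-Lambert law used in NIRS relates measured light
# attenuation A = log10(I_in / I_out) to chromophore concentration c:
#     A = epsilon * c * d * DPF   =>   c = A / (epsilon * d * DPF)
# All values below are made-up illustrative numbers.

I_in, I_out = 1.0, 0.25          # emitted vs detected light intensity
epsilon = 2.0                    # molar absorptivity (L / (mmol * cm))
d = 3.0                          # source-detector separation (cm)
dpf = 6.0                        # differential pathlength factor (dimensionless)

A = math.log10(I_in / I_out)     # measured attenuation
c = A / (epsilon * d * dpf)      # inferred chromophore concentration (mmol/L)
print(round(A, 3), round(c, 5))
```

In practice NIRS instruments measure attenuation at two or more wavelengths and solve for oxygenated and deoxygenated haemoglobin simultaneously, but the per-wavelength relation is the one shown.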


Semi-invasive BCIs: These are implanted inside the skull but outside the brain, rather than in the grey matter. Their signal strength is generally weaker than invasive BCIs but stronger than non-invasive BCIs. They are less dangerous than invasive BCIs because there is less risk of scar-tissue build-up[8].

ECoG

Electrocorticography (ECoG) is a semi-invasive technique which measures electrical signals in the cerebral cortex using electrodes placed directly on the surface of the brain, either inside the dura mater (subdural) or outside it (epidural). It is similar to EEG, but has a higher spatial resolution because the electrical signals do not need to pass through the scalp. It also has a higher SNR (signal-to-noise ratio).

Figure 3. Where the ECoG electrodes are placed

Invasive BCIs: These are implanted directly into the human brain by surgery. They produce the highest quality signals because they rest in the grey matter. However, they are also considered the most dangerous because they require surgery and are prone to scar-tissue build-up[8].

Intracortical electrodes are placed inside the grey matter of the brain. This requires implanting arrays of microelectrodes to directly measure single-unit activity (SUA) or multi-unit activity (MUA)[9]. Microelectrodes can be classified into three types: microwires, usually made of metals; micromachined electrodes, made of silicon; and flexible electrodes, made of flexible materials (mostly polymers)[9].



3.1.2. Signal preprocessing

After brain signals are acquired, they need to be preprocessed for feature extraction. This includes removing noise/artifacts such as eye blinking. To do this, different algorithms/filters are used to process the data. Linear filtering removes artifacts located in signals whose frequencies do not overlap with those of brain signals. Spatial filtering creates a new set of derived channels which enhance the separability of the data; examples include independent component analysis (ICA) and common spatial patterns (CSP).[10]
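ICA and CSP are too involved to sketch here, but a simpler spatial filter widely used with EEG, the common average reference (CAR), shows the idea: re-express each channel relative to the others. The three-channel data below is invented for illustration.

```python
# Common average reference (CAR): subtract the instantaneous mean across
# all channels from every channel, suppressing activity common to all
# electrodes (e.g. broad artifacts). Toy 3-channel, 4-sample data.

channels = [
    [1.0, 2.0, 3.0, 4.0],   # channel 1
    [1.0, 0.0, 1.0, 0.0],   # channel 2
    [4.0, 4.0, 2.0, 2.0],   # channel 3
]

n_samples = len(channels[0])
filtered = [row[:] for row in channels]
for t in range(n_samples):
    avg = sum(ch[t] for ch in channels) / len(channels)
    for ch in filtered:
        ch[t] -= avg

print(filtered)
```

After filtering, the channels sum to zero at every time point: whatever was common to all electrodes has been removed, leaving only the differences between sites.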

3.1.3. Feature extraction

After the brain signals have been optimised, features of the signals are extracted using a variety of methods. These features are later used in classification to determine what the signals mean.

Block processing

Prior to feature extraction, the signal samples are organised into consecutive, overlapping sample blocks. A feature vector is then created within each sample block (although for some methods this is not necessary every block), and this is fed to the translation algorithm. The length and overlap of the blocks should be based on the temporal dynamics of the signal, the feature extraction method, the nature of the application and the user feedback: computing more vectors than necessary wastes computing power and time.

Correlation and template matching

This uses a predefined template and computes the correlation between the template and the signal. Segments that closely resemble the template give high values, and segments that do not give low values.


Frequency/spectral features: band power, FFT, AR

Band power

First, a band-pass filter is applied to isolate the frequency of interest. The result is then squared (or its absolute value taken) to produce an all-positive signal, which is finally smoothed using integration or a low-pass filter.
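A band-power feature can also be read directly off a frequency transform. The sketch below uses a naive DFT (O(n^2), fine for a short toy segment) and a synthetic pure 10 Hz tone standing in for alpha-band EEG; the sampling rate and band edges are illustrative choices, not fixed standards.

```python
import cmath
import math

def dft(x):
    # Naive discrete Fourier transform, O(n^2); fine for short toy segments.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def band_power(x, fs, lo, hi):
    # Sum of squared DFT magnitudes whose bin frequency lies in [lo, hi] Hz.
    n = len(x)
    spectrum = dft(x)
    return sum(abs(spectrum[k]) ** 2
               for k in range(n // 2 + 1)
               if lo <= k * fs / n <= hi)

fs = 64                                                # toy sampling rate (Hz)
t = [i / fs for i in range(64)]                        # one second of samples
signal = [math.sin(2 * math.pi * 10 * s) for s in t]   # pure 10 Hz tone

alpha = band_power(signal, fs, 8, 12)    # power in the 8-12 Hz band
beta = band_power(signal, fs, 13, 30)    # power in the 13-30 Hz band
print(alpha > beta)
```

The 10 Hz tone lands entirely in the 8-12 Hz band, so the alpha band power dominates; a real pipeline would use an FFT and windowed, overlapping blocks as described above.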

Fast Fourier transform (FFT)

The FFT is a less complex and less time-consuming implementation of the discrete Fourier transform (DFT). It converts a time-domain signal into its frequency-domain representation: the signal is expressed as the sum of many amplitude-scaled and time-shifted sinusoids at specific frequencies[11], which can be used as features.

Autoregressive modelling (AR)

AR is a feature extraction technique which acts as an alternative to Fourier-based methods. It models EEG signals as the output of a linear time-invariant filter whose input is white noise.[12] This is suitable for EEG, since EEG is essentially a mixture of firing synapses and neurons measured at different locations, which is similar to filtering white noise with an IIR filter[11]. The aim of AR is to obtain the filter coefficients, which are then used as features of the signal. The spectral resolution of AR can be better than Fourier-based methods because it is not limited by short time segments[13].
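One standard way to obtain those filter coefficients is to solve the Yule-Walker equations with the Levinson-Durbin recursion. The sketch below recovers the coefficient of a known, noise-free AR(1) process; real EEG would of course be noisy and need a higher model order.

```python
def autocorr(x, lag):
    # Biased autocorrelation estimate at a given lag.
    n = len(x)
    return sum(x[t] * x[t - lag] for t in range(lag, n)) / n

def ar_coefficients(x, order):
    # Levinson-Durbin recursion solving the Yule-Walker equations;
    # the returned AR coefficients can serve as a BCI feature vector.
    r = [autocorr(x, k) for k in range(order + 1)]
    a = [0.0] * order
    err = r[0]
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        err *= (1 - k * k)
    return a

# Toy signal from a known AR(1) process: x[t] = 0.9 * x[t-1], seeded at 1.0.
x = [1.0]
for _ in range(199):
    x.append(0.9 * x[-1])

coeffs = ar_coefficients(x, 1)
print(coeffs)  # should be close to [0.9]
```

The recovered coefficient matches the 0.9 used to generate the signal, which is exactly the property that makes AR coefficients informative features: they summarise the signal's spectral shape in a handful of numbers.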

Wavelet analysis

In wavelet analysis, a characteristic time-limited pulse shape called a mother wavelet is used to construct a template for each band-pass filter in a filter bank. The output of a template filter has a comparatively large magnitude when the section of the signal fed in is similar to the filter. This filtering is repeated multiple times using different stretched and compressed versions of the same mother wavelet, producing a unique time-frequency representation of the signal for each scaled mother wavelet[11]. Many different mother wavelets can be selected for BCI depending on which features need to be extracted.


3.1.4. Classification

Classification is the process by which the BCI recognises the user's intentions. There are many algorithms for this, such as:


Neural networks (NN)

A neural network is a computer system modelled on the human brain, used to classify data and make predictions. Neural networks are made of multiple neurons organised into layers: the input layer, the hidden layers and the output layer. Each neuron in a layer is connected to every neuron in the layers before and after it, and each connection has a weight assigned to it. Each neuron also has a bias and an activation function assigned to it:

Y = X * W + B

where X is the inputs from the previous layer, W is the weights, and B is the bias. Y is fed into the activation function, whose output is passed to the next layer. This process is repeated until the output layer is reached and a prediction is made; this is known as forward propagation. The error of the output is then used to move backwards through the network and calculate the error of each neuron, and the weights and biases are tweaked to optimise the network and reduce the output error; this is known as back propagation. The process of forward and back propagation is repeated to train the neural network[17], allowing it to "learn" from examples.

Support Vector Machine (SVM)

SVM is a classifier that constructs a hyperplane (or hyperplanes) to separate the features into different classes. A hyperplane is a decision boundary which separates a set of values into two or more classes of data points. SVM maps the data into a high-dimensional space and then selects the hyperplane that maximises the margins, i.e. the distance between the nearest data points and the hyperplane[14]. Data points are then classified according to which side of the hyperplane(s) they fall on.

Linear discriminant analysis (LDA)

LDA has a low computational requirement, which makes it widely used in BCI applications[16]. LDA also constructs hyperplanes to separate features into classes, but instead maximises the distance between the means of the two classes while minimising the variance within each class[15]. There are also improved versions of LDA, such as Fisher LDA and Bayesian LDA.
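The neural-network forward pass (Y = X * W + B fed through an activation function) can be sketched in a few lines. The layer sizes, weights and biases below are arbitrary illustrative numbers, not trained values.

```python
import math

def sigmoid(y):
    # A common activation function squashing outputs into (0, 1).
    return 1.0 / (1.0 + math.exp(-y))

def layer_forward(x, weights, biases):
    # One dense layer: for each neuron, Y = X . W + B, then the activation.
    out = []
    for w, b in zip(weights, biases):
        y = sum(xi * wi for xi, wi in zip(x, w)) + b
        out.append(sigmoid(y))
    return out

# Toy network: 2 inputs -> hidden layer of 2 neurons -> 1 output neuron.
# All weights and biases are arbitrary, untrained illustrative numbers.
x = [0.5, -1.0]
hidden = layer_forward(x, weights=[[0.4, 0.3], [-0.2, 0.8]], biases=[0.1, 0.0])
output = layer_forward(hidden, weights=[[1.0, -1.0]], biases=[0.2])
print(output)
```

Training would then compare `output` against a target label, propagate the error backwards, and nudge each weight and bias, repeating until the predictions are good enough.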

Figure 4. Structure of a neural network (input layer, two hidden layers, output layer)



Convolutional neural networks (CNN)

Convolutional neural networks are a type of neural network that contain one or more convolutional layers. Each convolutional layer consists of multiple kernels: filters that iterate over the data to extract features. Since a CNN performs feature extraction itself, it requires minimal preprocessing and feature extraction beforehand, meaning some can be fed raw brain signals[25]. This is one of the main advantages of CNNs. CNNs are also commonly used for computer vision/image classification.

3.2 Current applications/uses of BCI

Motor restoration

Brain computer interfaces can help patients with mobility issues, or people who are "locked in", to restore lost functions. For example, patients who have lost sensory and motor function in their limbs can use functional electrical stimulation (FES) to make their muscles move again; a BCI can generate the control signal for FES, causing artificial muscle contractions[14]. BCIs can also let patients who cannot walk control a wheelchair, or allow patients to control a prosthetic limb.

Communication

BCI spellers can be used to communicate by typing out letters. An example is the P300 matrix speller by Farwell and Donchin[18], the first BCI speller, which had a maximum accuracy of 95% and a speed of approximately one character selected every 26 s. This is slow compared to conventional typing, but very useful for people who can only communicate this way.[19] Patients can also use a BCI to control a cursor on a computer screen.

Environmental control

BCIs can help paralysed patients by allowing them to control their surroundings, such as the lights, DVD player, TV, etc.[20]

Figure 5. A paralysed woman who was able to control a robotic arm to grasp a bottle of coffee



Illustration by Mary Zhang

Virtual reality and gaming

Whilst the other sections have focused on improving the quality of life of disabled patients, BCIs can also be used by healthy people for entertainment. Simple games such as Pong, Pac-Man, pinball and racing games have already been developed on BCI systems[20].

Biometrics

Biometrics is the process of identifying one individual among others by biological means[20]. Examples include facial recognition, fingerprint scanning and voice recognition. EEG brain waves can be used to identify humans due to their stability[20]; this is much more secure, as brainwaves are much harder to mimic.


4. The future

4.1 Challenges

The training process

Training the user is a time-consuming process: in the preliminary phase the user must engage with the system and learn to control their brain feedback signals, and in the calibration phase the subject's signals are used to train the classifier.[22]

Psychological and neurological

Some psychological factors (e.g. attention, memory load, fatigue) and characteristics of the user (age, gender, lifestyle) can affect brain dynamics, and hence BCI performance. Physiological features such as resting-state heart rate variability can also affect performance. Furthermore, human brains are complex and diverse across subjects, which demands a more robust, generalised BCI system.[21] Around 15-30% of people are not able to produce brain signals robust enough to operate a BCI[21], whether due to the user's inability to produce the signals or to other technological limitations.

Technological challenges

EEG-based signal acquisition hardware should ideally be small, fully portable, convenient, comfortable, easy to set up, maintenance-free and able to perform well in all environments. Implantable electrode-based BCIs need to be safe, remain intact and reliable for long periods, record stable signals consistently, be rechargeable, and be comfortable and convenient.


It is still unclear which solution is the most successful, and further development is required. Invasive BCIs also carry substantial implantation and maintenance costs, and whilst non-invasive BCIs are relatively cheaper, they too require ongoing technical support.[23]

Information transfer rate (ITR)

The ITR is an evaluation metric for BCI systems. A higher ITR is needed for everyday communication devices such as gaming, VR and communication aids. Compared with traditional inputs such as keyboards and mice, BCI inputs are much slower.[24]
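One widely used ITR measure is the Wolpaw bit rate: with N possible targets and classification accuracy P, the bits conveyed per selection are B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)). As a worked example, the P300 speller figures quoted earlier (a 36-character matrix, about 95% accuracy, one selection every 26 s) give well under a bit per second:

```python
import math

def wolpaw_bits_per_selection(n, p):
    # Wolpaw ITR: bits conveyed per selection, for n targets at accuracy p.
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# Farwell-Donchin P300 speller figures quoted in the Communication section:
# 36-character matrix, ~95% accuracy, one selection every ~26 s.
bits = wolpaw_bits_per_selection(36, 0.95)
itr = bits / 26  # bits per second
print(round(bits, 2), round(itr, 3))
```

At under 0.2 bits per second this is orders of magnitude below ordinary typing, which is exactly the gap the text describes.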

4.2 Opportunities and future applications

Whilst current BCI applications mainly focus on helping paralysed people, in the future BCIs could have applications in our day-to-day lives. BCIs could make everyday processes more efficient by removing the need for a physical keyboard or mouse, and might eventually even be faster than these methods, allowing faster and easier communication. We could control a gaming character on a computer screen or in VR/AR using our thoughts. BCIs in the arts (artistic BCI) could be used for musical production, drawing and painting. In the future we could also use BCIs to control robots, drones or vehicles remotely, allowing us to send a robot to work in areas unsuitable for humans, such as coal mines or space[21], and control it with our brains from afar.

5. Ethical concerns with BCIs

5.1 Privacy and security[26]

BCIs directly detect brain signals, which raises concerns about violations of user privacy. If BCIs can extract information from the brain, a BCI might be able to read our minds and extract our thoughts and memories. This could expose information without the user's consent, such as the truthfulness of what they say or their attitudes towards other people. Whilst conducting research experiments, researchers may also accidentally discover sensitive information about the BCI user. Furthermore, hackers could gain control of BCI devices, either to extract information from the user's thoughts or to exploit the BCI to cause malfunctions which harm the user. Security is therefore an important concern in BCI ethics.


5.2 Justice and fairness

Once BCIs are widely available to the general public, problems relating to justice and fairness may arise. BCIs could enhance a healthy user's capabilities beyond the normal range[26]. People who cannot access BCIs, or who opt out for other reasons, could be put at a significant disadvantage, leading to social stratification and unfairness. Moreover, many invasive BCIs can only be implanted once per individual, which forces us to ask whether someone should be implanted with a device that may become outdated or outperformed in the future.

5.3 Responsibility

Another ethical/legal concern is whether the BCI user should be fully responsible for the output of the BCI. Since BCIs capture intent directly from brain signals, subconscious events or passing thoughts can produce output. Machine-learning classification algorithms may also misinterpret the user's intent. How can we know whether the output of a BCI reflects a conscious action or a mistake by the BCI? This raises the question of whether the user should be held responsible for that output.

6. Conclusion

In conclusion, there are at present many useful technologies for signal acquisition, signal preprocessing, feature extraction and classification that make BCIs practical. Whilst past and present BCIs have mainly focused on improving the quality of life of disabled patients, I believe that in the future BCIs could be widely used by healthy people to improve their day-to-day lives. However, before this is possible, many technological challenges need to be overcome and many ethical concerns need to be resolved. Overall, I think that BCI will become an important technology that is part of our daily lives.



7. Bibliography

[1] Ince, Rümeysa, et al. "The Inventor of Electroencephalography (EEG): Hans Berger (1873-1941)." Child's Nervous System, Springer Berlin Heidelberg, 5 Mar. 2020, link.springer.com/article/10.1007/s00381-020-04564-z.
[2] Vidal, J.J. "Toward Direct Brain-Computer Communication." Annual Review of Biophysics and Bioengineering, vol. 2, no. 1, 1973, pp. 157-180, doi:10.1146/annurev.bb.02.060173.001105.
[3] Neural Signals, Inc., Philip Kennedy, Implanted Brain Microelectrode, ALS Treatment, Emerge Medical, www.neurotechreports.com/pages/neuralsignalsprofile.html.
[4] Hoy, Kate. "Tech History: Human Thoughts Control Computer." IDG Connect, 1 Oct. 2017, www.idgconnect.com/article/3581270/tech-history-human-thoughts-control-computer.html.
[5] Duncan, David Ewing. "Implanting Hope." MIT Technology Review, 2 Apr. 2020, www.technologyreview.com/2005/03/01/231487/implanting-hope/.
[6] Anupama, H.S., et al. "BRAIN COMPUTER INTERFACE AND ITS TYPES - A STUDY", May 2012, citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.665.6280&rep=rep1&type=pdf.
[7] "How FMRI Works." OpenLearn, The Open University, 30 Aug. 2019, www.open.edu/openlearn/body-mind/health/health-sciences/how-fmri-works.
[8] "Near Infrared Spectroscopy." Near Infrared Spectroscopy - an Overview | ScienceDirect Topics, www.sciencedirect.com/topics/neuroscience/near-infrared-spectroscopy.
[9] Salahuddin, Usman, and Pu-Xian Gao. "Signal Generation, Acquisition, and Processing in Brain Machine Interfaces: A Unified Review." Frontiers in Neuroscience, vol. 15, 2021, doi:10.3389/fnins.2021.728178.
[10] Hammon, Paul S., and Virginia R. de Sa. "Preprocessing and Meta-Classification for Brain-Computer Interfaces." IEEE Transactions on Biomedical Engineering, vol. 54, no. 3, 2007, pp. 518-525, doi:10.1109/tbme.2006.888833.
[11] Wolpaw, Jonathan R., et al. "Brain-Computer Interfaces: Definitions and Principles." Brain-Computer Interfaces, Handbook of Clinical Neurology, 2020, pp. 15-23, doi:10.1016/b978-0-444-63934-9.00002-0.
[12] K, Manjula, and M.B. Anandaraju. "A Comparative Study on Feature Extraction and Classification of Mind Waves for Brain Computer Interface (BCI)." International Journal of Engineering & Technology, vol. 7, no. 1.9, 2018, p. 132, doi:10.14419/ijet.v7i1.9.9749.
[13] Krusienski, Dean J., et al. "An Evaluation of Autoregressive Spectral Estimation Model Order for Brain-Computer Interface Applications." 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, 2006, doi:10.1109/iembs.2006.259822.
[14] Nicolas-Alonso, Luis Fernando, and Jaime Gomez-Gil. "Brain computer interfaces, a review." Sensors (Basel, Switzerland), vol. 12, no. 2, 2012, pp. 1211-1279, doi:10.3390/s120201211.

83

[ 1 5 ] Va i b h a w, e t a l . “ B r a i n – C o m p u t e r I n t e r f a c e s a n d T h e i r A p p l i c a t i o n s .” A n I n d u s t r i a l I oT A p p r o a c h f o r P h a r m a c e u t i c a l I n d u s t r y G r o w t h , 2 0 2 0, p p . 3 1 – 5 4 . , d o i : 1 0.1 0 1 6 / b 9 7 8 - 0 - 1 2 - 8 2 1 3 2 6 - 1 . 0 0 0 0 2 - 4 . [16] Aggar wal, Swati, and Nupur Chugh. “Signal P r o c e s s i n g Te c h n i q u e s f o r M o t o r I m a g e r y B r a i n C o m p u t e r I n t e r f a c e : A R e v i e w.” A r r a y, v o l . 1 - 2 , 2 0 1 9 , p . 1 0 0 0 0 3 . , d o i : 1 0.1 0 1 6 / j . a r r a y. 2 0 1 9 .1 0 0 0 0 3 . [ 1 7 ] Y i u , To n y. “ U n d e r s t a n d i n g N e u r a l N e t w o r k s .” M e d i u m , To w a r d s D a t a S c i e n c e , 2 9 S e p t . 2 0 2 1 , towardsdatascience.com/understanding-neuraln e t w o r k s - 1 9 0 2 0 b 7 5 8 2 3 0. [ 1 8 ] F a r w e l l , L . a . , a n d E . D o n c h i n . “ Ta l k i n g o f f t h e To p o f Yo u r H e a d : t o w a r d a M e n t a l P r o s t h e s i s U t i l i z i n g E v e n t R e l a t e d B r a i n Po t e n t i a l s .” E l e c t r o e n c e p h a l o g r a p h y a n d C l i n i c a l N e u r o p h y s i o l o g y, v o l . 7 0, n o . 6 , 1 9 8 8 , p p . 5 1 0 – 5 2 3 . , d o i : 1 0.1 0 1 6 / 0 0 1 3 - 4 6 9 4 ( 8 8 ) 9 0 1 4 9 - 6 . [ 1 9 ] R e z e i k a , Ay a , e t a l . “ B r a i n – C o m p u t e r I n t e r f a c e S p e l l e r s : A R e v i e w.” B r a i n S c i e n c e s , v o l . 8 , n o . 4 , 2 0 1 8 , p . 5 7. , d o i : 1 0. 3 3 9 0 / b r a i n s c i 8 0 4 0 0 5 7. [ 2 0 ] R a s h i d , M a m u n u r, e t a l . “ C u r r e n t S t a t u s , C h a l l e n g e s , a n d Po s s i b l e S o l u t i o n s o f E E G - B a s e d B r a i n - C o m p u t e r I n t e r f a c e : A C o m p r e h e n s i v e R e v i e w.” F r o n t i e r s i n N e u r o r o b o t i c s , v o l . 1 4 , 2 0 2 0, d o i : 1 0. 3 3 8 9 / f n b o t . 2 0 2 0. 0 0 0 2 5 . [21] Saha, Simanto, et al. 
“Progress in Brain Computer I n t e r f a c e : C h a l l e n g e s a n d O p p o r t u n i t i e s .” F r o n t i e r s i n S y s t e m s N e u r o s c i e n c e , v o l . 1 5 , 2 0 2 1 , d o i : 1 0. 3 3 8 9 / fnsys. 2021.578875. [ 2 2 ] A b d u l k a d e r, S a r a h N . , e t a l . “ B r a i n C o m p u t e r I n t e r f a c i n g : A p p l i c a t i o n s a n d C h a l l e n g e s .” E g y p t i a n I n f o r m a t i c s J o u r n a l , v o l . 1 6 , n o . 2 , 2 0 1 5 , p p . 2 1 3 – 2 3 0. , d o i : 1 0.1 0 1 6 / j . e i j . 2 0 1 5 . 0 6 . 0 0 2 . [23] Shih, Jerr y J et al. “Brain-computer inter faces in m e d i c i n e .” M a y o C l i n i c p r o c e e d i n g s v o l . 8 7, 3 ( 2 0 1 2 ) : 2 6 8 -7 9 . d o i : 1 0.1 0 1 6 / j . m a y o c p . 2 0 1 1 .1 2 . 0 0 8 [ 2 4 ] L u n , X i a n g m i n , e t a l . “A S i m p l i f i e d C N N Classification Method for MI-EEG via the Electrode Pa i r s S i g n a l s .” F r o n t i e r s i n H u m a n N e u r o s c i e n c e , v o l . 1 4 , 2 0 2 0, d o i : 1 0. 3 3 8 9 / f n h u m . 2 0 2 0. 0 0 3 3 8 . [25] Cattan, Grégoire. “ The Use of Brain–Computer Inter faces in Games Is Not Ready for the General P u b l i c .” F r o n t i e r s i n C o m p u t e r S c i e n c e , v o l . 3 , 2 0 2 1 , d o i : 1 0. 3 3 8 9 / f c o m p . 2 0 2 1 . 6 2 8 7 7 3 . [26] Burwell, Sasha, et al. “Ethical Aspects of Brain C o m p u t e r I n t e r f a c e s : a S c o p i n g R e v i e w.” B M C M e d i c a l E t h i c s , v o l . 1 8 , n o . 1 , 2 0 1 7, d o i : 1 0.1 1 8 6 / s 1 2 9 1 0 - 0 1 70 2 2 0 - y. Figure 2 from https ://cacm.acm.org/ magazines/2011/5/107 704-brain-computer-inter facesfor-communication-and-control/fulltex t Figure 3 from Electrocor ticography

h t t p s : / / e n .w i k i p e d i a . o r g / w i k i /

Figure 4 from https ://towardsdatascience.com/ applied-deep-learning-part-1-artificial-neuralnetworks-d7834f67a4f6 F5 from https ://news.brown.edu/ar ticles/2012/05/ braingate2


84

PHYSICS AND TECHNOLOGY

The Beauty of Chaos
Kevin Liew

As Albert Einstein once proclaimed: “As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality”. At its core, Chaos Theory is an intricate science revolving around nonlinear processes that are fundamentally impossible to predict or control, ranging from the weather and our brain states to stock markets and earthquakes. From the beating of a heart to the drift of planets across the starry skies, chaos is ever-present in our world. But how can such a vast, unpredictable, and uncontrollable concept like chaos be expressed in a numerical format, and how can we adapt to its ever-growing prominence within our society?


From a Butterfly to a Hurricane

In 1961, an MIT meteorology professor named Edward Lorenz came across a startling revelation. Lorenz had developed a mathematical model that let him simulate weather patterns a few minutes in advance from numerical values representing the current weather. One day, he repeated an earlier simulation, except instead of entering the exact value of one of the variables, he rounded it from .506127 to .506. Through this small difference, Lorenz inadvertently discovered the mathematical incarnation of chaos. Although both runs produced near-identical results at first, they slowly diverged from each other, yielding radically different outcomes that grew in size and scale until they became incalculable. This phenomenon, in which a simple, deterministic equation produces various outcomes given small changes in the input value, was labelled "deterministic chaos", more famously known as the "butterfly effect". In short, seemingly insignificant disturbances in the atmosphere can build up over time towards an unanticipated, drastic outcome.

Chaotic systems like the weather are highly sensitive to initial conditions: their outputs differ drastically depending on tiny changes to their inputs, and the many nonlinear parameters within them make the result hard to predict. One everyday example of deterministic chaos is the pinball machine, where even the smallest difference in the ball's starting position and speed can send it bouncing off different bumpers. Nothing is guaranteed, and no two games will ever be identical.
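Lorenz's rounding accident can be reproduced in a few lines with a stand-in chaotic system. The map x → 4x(1 − x) is not Lorenz's weather model, only a minimal illustrative substitute, but feeding it his exact and rounded starting values shows the same behaviour: near-identical at first, then wildly divergent.

```python
def iterate(x, steps):
    """Iterate the chaotic map x -> 4x(1 - x), returning the whole trajectory."""
    traj = [x]
    for _ in range(steps):
        x = 4 * x * (1 - x)
        traj.append(x)
    return traj

# the exact value Lorenz used, and the rounded one he retyped
exact = iterate(0.506127, 40)
rounded = iterate(0.506, 40)

# gap between the two runs at each step: tiny early on, order-one later
gaps = [abs(a - b) for a, b in zip(exact, rounded)]
```

The early gaps stay around the size of the rounding error, while within a few dozen iterations the two "forecasts" bear no resemblance to each other.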


Not only did Lorenz unintentionally make a major breakthrough in one of the most important mathematical concepts shaping the world as we know it, but he also uncovered one of its key features. When attempting to graph his data over several axes, he made a peculiar observation: two nearby points, under repeated iteration, would drift increasingly far apart with each new step, yet points away from the region of the curve would eventually converge towards it. This seemingly contradictory structure is known as a "strange attractor", and Lorenz's unique dynamics were named after him: the "Lorenz attractor". Other strange attractors were discovered later, including the Hénon attractor in 1976, and all share the self-similar structures noted by the French-Polish mathematician Benoit Mandelbrot. This will be explored in more depth later on, but for now, we shall define what we mean by chaos in the mathematical sense.

In an analogy for his findings, Lorenz stated that the flapping of a butterfly's wings in the Amazon could in turn cause a hurricane or tornado in China. Without the butterfly flapping its wings at that precise point in time and space, that hurricane would not exist in the future.



A Map to Fractals

One of the most common examples of chaos theory is the logistic map, a discrete iterative mathematical function popularised by the mathematical biologist Robert May, which maps the population at one point in time to its value at the next. The model can be written as x_(n+1) = r · x_n · (1 − x_n), with r representing the rate of growth, x_n the population in a given year and x_(n+1) the population in the following year. The population x is expressed as a fraction between 0 and 1, with 0 representing extinction and 1 the maximum possible population.
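The long-run behaviour of the map can be sketched in a few lines; the starting population and growth rates below are arbitrary illustrative choices.

```python
def logistic(r, x0, steps):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# for 1 < r < 3 the population settles to the single equilibrium 1 - 1/r
settled = logistic(2.5, 0.2, 200)[-1]   # approaches 1 - 1/2.5 = 0.6

# just above r = 3 the first bifurcation appears: a cycle between two values
cycle = logistic(3.2, 0.2, 203)
```

Running the second case shows the trajectory alternating between two fixed values, the two branches seen in a bifurcation diagram just past r = 3.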


It can be determined that a stage of equilibrium will be reached in any population after many fluctuations of varying degrees. If we visualise this behaviour as a bifurcation diagram, we notice that while there is only one line for smaller values of r, for larger values it breaks up into several lines and becomes completely chaotic. For values of r between 0 and 1, the population cascades towards extinction. For values between 1 and 3, the population converges towards a single value. For values of r greater than 3, the graph bifurcates (splits into two branches) as the population now fluctuates between two possible values. As r grows further, the bifurcations multiply and the diagram becomes ever more chaotic and unpredictable. However, there are brief windows of "order" at the onset of chaos, where the points briefly become predictable again before the doubling resumes and chaos returns in a perpetual cycle. The mathematician Mitchell Feigenbaum concluded that this scaling property was crucial to unlocking the mysteries of such perplexing systems, and that it could also be applied to other nonlinear systems in the real world.

The chaotic part of the graph can be labelled as a fractal, an infinitely complex pattern that is self-similar throughout different scales. Fractals follow the trends of chaotic behaviour, enabling us to express a vast range of dynamical systems as physical manifestations of chaos. It is materialised in our world through various forms, such as the identically-shaped leaves on trees and ferns, the branching tributaries in river deltas, and the shapes of mountain ranges.



The Chaos Game

In 1988, the British mathematician Michael Barnsley developed an algorithm for creating fractals that he coined "The Chaos Game". The concept is based on an iterated function system, which uses a finite set of contraction mappings within a metric space to generate unique attractors that form the layout of a fractal. The sense of chaos comes from the striking contrast between the simple rules of the procedure and the infinitely complex fractals they produce.


The set of instructions is as follows:
1. Fix n points in the plane, joined up to form a shape
2. Select any random point within the shape
3. Choose one of the original points at random, and mark the midpoint between it and the current point
4. Repeat the process, using each new midpoint as the current point, to continually generate new points

Although it is practically impossible to foresee the eventual structure, when we plot the repeated process hundreds or thousands of times, we notice a trend in the areas where points are highly concentrated (ignoring the first few points). For example, when the three starting points form an equilateral triangle, the structure produced is none other than the Sierpiński triangle. At first glance, we can see that this self-similar fractal is composed of an infinite number of smaller triangles, giving it a virtually unlimited perimeter that grows by a factor of 3/2 with each new layer of triangles, whilst its area geometrically shrinks by a factor of 3/4. This property has been used to design specialised fractal antennas with a self-similar composition, maximising the length of material that can absorb and transmit electromagnetic radiation within a small total surface area or volume. These compact, light antennas are inherently multiband and capable of strong performance at several different frequencies simultaneously, helping the transition of cellular networks from 4G to 5G by lowering system costs and reducing the need for additional antennas to be constructed.

Another famous fractal produced through the chaos game is named after the game's creator: the Barnsley fern. Using linear algebra, the rules of the chaos game can be expressed as several recursive affine transformations, which preserve the collinearity of points and their distance ratios under a geometric transformation. This generates a much richer collection of fractals, extending the range from purely geometric fractals to those found in nature. For the Barnsley fern, only four affine transformations, a combined total of 28 numbers, are required to program it onto a computer screen within seconds. It is even possible to create mutant fern variants by changing one of the numbers. The same idea has been put to work digitally in fractal compression, where massive data sets are reduced in essence to small sets of coefficients that computer algorithms can later enlarge and reconstruct (much as ZIP files are unpacked on demand).
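The steps above can be played directly in code. This minimal sketch (starting point and random seed chosen arbitrarily) reproduces the Sierpiński triangle from three vertices:

```python
import random

def chaos_game(vertices, n, seed=0):
    """Play the chaos game: repeatedly jump halfway towards a random vertex."""
    rng = random.Random(seed)
    x, y = 0.25, 0.25                     # any starting point inside the shape
    points = []
    for _ in range(n):
        vx, vy = rng.choice(vertices)     # pick one of the fixed points
        x, y = (x + vx) / 2, (y + vy) / 2 # jump to the midpoint
        points.append((x, y))
    return points

# three vertices of an equilateral triangle yield the Sierpinski triangle
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
pts = chaos_game(tri, 10_000)
```

After a short burn-in the points land only on the attractor: every generated point sits within one of the three half-scale corner triangles, never in the hollow centre.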



The Thumbprint of God

Referred to by some as "The Thumbprint of God", the Mandelbrot set (named after the mathematician Benoit Mandelbrot) is among the most stunning projections of mathematics in its purest form. Constructed on the two-dimensional complex number plane, the Mandelbrot set follows the iteration Z_n = (Z_(n−1))² + C, starting from Z_0 = 0, with C a complex number; the first step therefore gives Z_1 = (Z_0)² + C = C. If the results grow without bound for a given value of C, then that point C (on the complex plane) is not part of the Mandelbrot set. Conversely, if the outputs stay bounded, following a repeating pattern rather than increasing with each step, point C lies within the set. For example, at C = −1, the results cycle between 0 and −1; therefore −1 is a point within the Mandelbrot set. At C = −2, Z_1 equals −2, Z_2 = (−2)² − 2 = 2, and every subsequent iteration stays at 2, so this point also lies in the set. However, at C = 1, the results increase without bound (0, 1, 2, 5, 26, …), so 1 is not part of the Mandelbrot set. Doing this for every point on the complex plane traces out the Mandelbrot set.
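The membership test described above can be sketched as follows. The iteration cap is an arbitrary practical cutoff: a point that has not escaped after finitely many steps is only presumed to lie in the set.

```python
def in_mandelbrot(c, max_iter=200):
    """Test whether complex c appears to lie in the Mandelbrot set by
    iterating z -> z**2 + c from z = 0 and watching for escape (|z| > 2)."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False   # orbit escaped: c is outside the set
    return True            # orbit stayed bounded: c is (presumed) inside

# the worked examples from the text:
# c = -1 cycles 0, -1, 0, -1, ...  -> inside
# c = -2 gives -2, 2, 2, 2, ...    -> inside
# c =  1 gives 1, 2, 5, 26, ...    -> outside
```

Sweeping this test over a grid of points and colouring outside points by how quickly they escape is exactly how the familiar pictures of the set are rendered.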

The Mandelbrot set contains an endless number of sublime repeating patterns (which may resemble the Mandelbrot set itself, but are never exact replicas) that can be explored by zooming into different regions. Theoretically, any pattern can be found within the set as long as you magnify the right area. The mathematician Roger Penrose cited the Mandelbrot set as evidence for mathematical realism, essentially arguing that the set is so complex that it could not possibly have been invented, only discovered. There is a spiritual beauty to the Mandelbrot set. It reflects the abundance of self-similar fractals ubiquitous in the natural world, to the point where some theorise that the universe itself is a self-generating fractal and that any existing object can be mathematically generated. The set is famous for proving that a simple set of instructions is capable of producing infinitely complicated, and at certain points chaotic, results. This feedback loop exhibits "profound connections between fractal geometry and Chaos Theory", in Mandelbrot's words.



Illustration by Annette Chan




Our World in Chaos



As cyberspace progressively evolved from a localised, monolithic structure to a globalised wireless format built on increasingly complex technologies, system failures have become harder to notice in advance. To improve the resilience of modern computer systems against such failures, engineers have relied upon an empirical method known as chaos engineering. Based on the concepts of Chaos Theory, this practice studies the ability of computer systems to adapt and respond to random, unplanned issues that could propagate into catastrophic shutdowns. Instead of dreading the inevitable chaos, we simulate it ourselves within a controlled environment, building the resilience and durability an application needs to withstand turbulent conditions. In doing so, we gain insight into potential outages, locate faults within the system and make improvements.

This form of resilience testing was pioneered by none other than the content streaming giant Netflix, whose engineering team created a sandbox for chaos testing after transitioning to an Amazon Web Services (AWS) infrastructure. The migration to the public cloud brought challenges: service nodes would terminate at random, hindering the customer experience with slower, lower-quality streams. In response, the team created 'Chaos Monkey', a tool that induces host failure by randomly disabling nodes in the production network storing the platform's whole inventory of films and TV shows. This later grew into a wide suite of failure-inducing tools, collectively known as the Simian Army. Each member covers a different failure type: Security Monkey inspects the system for potential vulnerabilities, Latency Monkey replicates service unavailability, and Chaos Kong recreates an entire regional outage. Netflix built further upon these foundations in October 2014 with the introduction of Failure Injection Testing (FIT), which protects customers from the impact of chaotic experiments by supplying metadata that specifies the limits of a given test and by controlling how much failure testing is allowed to occur.
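The idea can be illustrated with a toy experiment in the spirit of Chaos Monkey (the node names, request path and replica pool here are invented for illustration; this is not Netflix's actual tooling): kill random replicas, then check that a retrying client is still served.

```python
import random

def handle(request, replicas, alive):
    """A client that retries each replica until one that is alive responds."""
    for node in replicas:
        if node in alive:
            return f"{request} served by {node}"
    raise RuntimeError("total outage: every replica is down")

def chaos_monkey(alive, kills, seed=42):
    """Randomly terminate `kills` distinct replicas from the live set."""
    rng = random.Random(seed)
    for victim in rng.sample(sorted(alive), kills):
        alive.discard(victim)

replicas = [f"node-{i}" for i in range(5)]
alive = set(replicas)
chaos_monkey(alive, kills=2)                       # inject the failure
response = handle("GET /film/42", replicas, alive) # the system should survive
```

A real chaos experiment does the same thing at production scale: deliberately inject a failure, then verify the system's fallback behaviour actually holds.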



In this new era of system integration, chaos engineering helps build the defensive capabilities of systems, reduces the number of outages and refines the customer experience. From a business perspective, it also helps prevent revenue losses from unexpected server downtime, which according to a June 2020 study could cost between $1 million and $5 million per hour for around 40% of enterprise organisations. In light of this, a rising number of enterprises, including Uber, Facebook and Google, are recognising the value of this approach and implementing it in their software. Unlike other forms of failure testing, chaos engineering enables a system to explore uncharted territory and navigate its way around a diverse scope of complex real-world issues.

Chaos Theory has several other real-world applications. It underlies methods for compressing digital data into smaller sets that can later be enlarged and reconstructed using computer algorithms. Investors may employ chaotic analysis to anticipate fluctuations in the stock market and avoid sudden crashes. Computer artwork generated through chaos and fractals lets animators easily draw numerous distinct trees from a single simple formula with an infinite range of outputs. It can also help physiologists comprehend ventricular fibrillation, a byproduct of disorder in the heart's chaotic dynamics, or enable them to detect cancerous cells and bone fractures early on by noticing the fractal elements on their surfaces.

Conclusion

Overall, Chaos Theory brings scholars from many different fields together to study the impact of chaotic behaviour on our daily lives. On its face, the term "chaos" may insinuate randomness, unpredictability and danger. Upon closer inspection, however, we begin to notice the precision and grace with which it has constructed, and continues to construct, the fabric of all natural phenomena. It informs the way we view deterministic systems: a fundamental set of patterns actually governs their seemingly irregular states of disorder. Even with the most advanced technology, we can never fully guarantee an accurate forecast of what is yet to come; yet accepting traces of irregularity within the overall order can be used to our benefit. For example, incorporating chaotic behaviour within weather forecast models can lead to more reliable forecasts. It serves as a reminder that, ironically, through embracing chaos we find a world of order with endless possibilities.



Bibliography

Halpern, Paul. (2018, February 13). 'Chaos Theory, The Butterfly Effect, And The Computer Glitch That Started It All'. https://www.forbes.com/sites/startswithabang/2018/02/13/chaos-theory-the-butterfly-effect-and-the-computer-glitch-that-started-it-all/?sh=6414e2f69f6c

Dey, Arpan. (2020, August 1). 'Chaos Theory and Consciousness'. https://ysjournal.com/chaos-theory-and-consciousness/

Borwein, Jonathan. (2012, November 19). 'Explainer: What is Chaos Theory'. https://theconversation.com/explainer-what-is-chaos-theory-10620

Wolfe, Jonathan. (2009, March 25). 'What is Chaos Theory?'. http://fractalfoundation.org/resources/what-is-chaos-theory/

Banton, Caroline. (2021, March 17). 'What Is Chaos Theory?'. https://www.investopedia.com/ask/answers/08/chaos-theory.asp

Vartikar, Neel. (2019, September 16). 'Chaos Engineering: Why the World Needs More Resilient System'. https://www.cuelogic.com/blog/chaos-engineeri

Dearmer, Abe. (2020, December 31). 'Chaos Engineering: Theory, Principles & Benefits'. https://www.xplenty.com/blog/what-is-chaos-engineering/

ITIC. (2020, June 19). 'Forty Percent of Enterprises Say Hourly Downtime Costs Top $1 Million'. https://itic-corp.com/blog/2020/06/forty-percent-of-enterprises-say-hourly-downtime-costs-top-1-million/

Butow, Tammy. (Last updated: 2021, May 5). 'Chaos Engineering: the history, principles and practice'. https://www.gremlin.com/community/tutorials/chaos-engineering-the-history-principles-and-




Possible Alternative to Space Rockets
Cheney Sang



Introduction

Space elevators are exactly what they sound like: an elevator that transports you from the Earth straight into space. The idea depends on a cable attached to the surface of the Earth and extending into outer space, allowing "climbers" (electric vehicles that make their way up the cable) to carry people or cargo into space. People have been considering space elevators for over a century: the concept made its first appearance in the 1890s, with more technical work published in a Russian journal in the 1960s. The idea fascinated many, motivating research on the subject despite the limits of the technology and resources available at the time. In the modern day, however, this once fairytale idea seems within reach, as ever-improving technology and materials have made the construction of a space elevator increasingly feasible.



The Advantages Of Space Elevators

You might wonder why space elevators are worth investing in when we already have relatively well-developed rocket technology. Despite a high initial cost, estimated at 15–30 billion dollars, space elevators would be far more efficient than rockets, as they would let us leave the Earth using electricity and solar energy. A space elevator only has to be built once, with a low maintenance cost thereafter. Another benefit is that the reduced cost per trip may finally make space travel commercial, allowing ordinary people to travel into space. Although travelling into space by elevator would be much slower, over time this alternative would not only save money, it would also cause less harm to the environment. It has been estimated that space elevators could carry the same amount of cargo as rockets at just 0.01% of the cost, and as a climber descends there may also be ways to regain a significant amount of energy.

The introduction of space elevators may also be the first step towards space colonization. For frequent travel into space and to other planets, rockets are simply too expensive: even with further development, only a few would be able to afford rocket travel. In comparison, space elevators may be a more effective solution. Although they come with a high initial cost, frequent use allows each journey to be reasonably priced. This could allow more trips into space and enable humans to obtain materials from other planets or even inhabit them.

The Disadvantages Of Space Elevators

The space elevator is still a work in progress, making advancement in this area difficult. The materials needed, such as carbon nanotubes, are still in development and may need much longer to reach a stage where they can be used. To make construction possible, carbon nanotubes must be produced at scale, as a large quantity would be needed for the cable. Aside from their use in space elevators, carbon nanotubes are also set to be in high demand in other sectors: thanks to their strength and other properties, they can be used to increase conductivity and to act as structural reinforcement, and they may see future use in batteries, capacitors and many other fields. Although a space elevator project would help push the development of carbon nanotubes, the many competing uses may leave demand too high even after production increases. This could drive the price up and delay the construction of the space elevator; if the cost rises too far, there would be little reason to switch from already existing, relatively well-developed rockets.



How would it be built, and is it feasible in the future?




As the cable will need to extend around 100,000 km out from the surface of the Earth, an incredibly strong material will be required to withstand the enormous gravitational and centrifugal forces acting on it. This has always been one of the major challenges in building the space elevator. However, the materials required have become more accessible, making construction conceivable. As aforementioned, one of the leading candidates has been carbon nanotubes, which surpass the roughly 63 gigapascals of tensile strength needed to withstand the tension.

Another way of increasing the strength of a structure would be to enlarge its base; a good example is the Burj Khalifa, whose large base supports its height. However, the larger the base, the less economical the idea becomes: covering large areas of land that could otherwise serve other purposes would be incredibly expensive and impractical. Seeking stronger materials and having them produced in larger quantities therefore seems far more viable.

As estimated by the Obayashi Corporation, construction of the cable would take around 20 years and the entire system would cost around 100 billion USD. However, scientists such as the aeronautical engineer Brad Edwards believe the cost could be dramatically reduced in the future; in 2003 he published his final report predicting that within 15 years a space elevator could cost as little as $15 billion. Many other predictions vary, and while not all may prove accurate, they agree that costs will drop drastically over time, and that it is only a matter of time until space elevators become cheap enough to be considered cost-efficient. Improvements in technology will eventually make construction easier and materials more available, bringing the price down.



Illustration by Estelle Chan




Conclusion

The cable would have to be in a geosynchronous orbit so that it stays above the same spot on the Earth. Otherwise, the cable would have to stretch and compress as the distance changed, causing damage to the cable, which would be catastrophic. The idea is for construction to start in space, with the cable slowly built down towards the Earth. However, as the cable is built, it shifts the centre of gravity downwards towards the Earth, causing the elevator to leave geosynchronous orbit. To keep the centre of gravity stable, a counterweight would have to be extended in the opposite direction at the same rate the cable is constructed. This counterweight could later serve as a platform for many operations as well as an area for storage, making it multifunctional beyond its role as a counterweight.
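The geosynchronous requirement can be sanity-checked with a short calculation using standard textbook constants: equating gravity with the centripetal force needed for a one-sidereal-day orbit gives r³ = GMT²/4π², which places the orbit roughly 35,800 km above the surface.

```python
import math

# Standard textbook values
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
T = 86164.0          # one sidereal day (Earth's rotation period), s
R_EARTH = 6.371e6    # mean radius of the Earth, m

# Orbital radius where the orbital period equals one Earth rotation:
# G*M*m / r^2 = m * (2*pi/T)^2 * r  =>  r^3 = G*M*T^2 / (4*pi^2)
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000   # height of the anchor point above ground
```

This altitude marks where the cable's anchor station must sit; the proposed 100,000 km cable extends far beyond it precisely so the outer portion can act as the counterweight described above.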

So, will space elevators be able to replace rockets? Maybe, but it is not guaranteed. Although space elevators are believed to be far more efficient and effective, a great deal of work has already gone into inventing and developing rockets, so people might not welcome the change. However, as time passes and more technological advances are made, the initial cost of a space elevator will fall further and the idea will seem ever more feasible and attractive. Over any period of time there is always change; no one can guarantee what the future holds, but even if the space elevator is not the final answer, something will likely replace rockets, whether it be the space elevator or a different invention.



Bibliography


Hoffman, C. (2004, April 12). Space Elevator. Popular Science. https://www.popsci.com/scitech/article/2004-04/space-elevator/

Vanstone, L. (2015, November 6). It's not rocket science: we need a better way to get to space. The Conversation. https://theconversation.com/its-not-rocket-science-we-need-a-better-way-to-get-to-space-45751

Edwards, S. A., PhD. (2012, June 25). The space elevator. American Association for the Advancement of Science. https://www.aaas.org/space-elevator

Feltman, R. (2013, March 7). Why Don't We Have Space Elevators? Popular Mechanics. https://www.popularmechanics.com/space/a8814/why-dont-we-have-space-elevators-15185070/

Peterkin, Z. (2018, August 1). Space Elevator Technology and Graphene. AZoM.Com. https://www.azom.com/article.aspx?ArticleID=16371

Ishikawa, Y. & Obayashi Corporation. (2013, October). The Space Elevator Construction Concept. Sixth SPS Symposium. https://dev.sspss.jp/symposium/sym16/SL2.pdf

Swan, C. W. (2015). Introduction to Space Elevators and New Space. New Space, 3(4), 211–212. https://doi.org/10.1089/space.2015.0026

