WELCOME to the Fourth Edition of the Scientific Harrovian

With the departure of Mrs Lotje Smith in December, the fate of the Scientific Harrovian was left somewhat up in the air. This magazine is an invaluable opportunity for pupils to communicate their views on Science and the wider world, research and report on current developments, and present findings from their own projects. As a passionate Science teacher, I could not let it disappear. Many talented pupils have worked on the production of this fourth edition of the Scientific Harrovian, and I have been able to rely on them for the writing, editing and design. A remarkable mixture of collaboration and independence has enabled this project to be turned around in a very short period of time, during which internal assessments and external examinations have also taken place, and I cannot thank the pupils enough.

Although the pupils involved in this project have an obvious interest in Science, they also have various other skills and passions. Likewise, scientists do not exist in isolation in the real world. I recently met a geneticist who is now working for the US government as a graphic designer, producing infographics that enable the general public and those in government to better understand the science behind public-health policy. Another time I chatted to an evolutionary biologist studying marine animal adaptations in response to current environmental changes. As the head of a new lab, her first tasks were to design and set up the laboratory itself: fixtures, fittings and equipment, down to where exactly she wanted the electrical sockets. The ability to communicate and collaborate with specialists in different areas is essential. To be a successful teacher, excellent subject knowledge and understanding will only go so far… I believe that transferable skills, the ability to make links between fields, and a willingness to work in a wide range of roles will be even more valued in the future.

This concept of interconnectedness inspired the theme of ‘Science And…’; in the second section of the magazine pupils have looked into some of the most recent developments in the application of Science. The third section includes reports from Extended Project Qualifications, in which pupils explain their investigation into a topic that interests them. Although they may not have had the time to refine their scientific methods or draw definitive conclusions, the projects still required extensive time and effort for preliminary reading, active research, and consideration of the results. This reminds us that commitment and diligence, which may not lead to obvious rewards in the short term, are of utmost importance in scientific research.

I am proud to be the successor as Staff Editor of the Scientific Harrovian, and it has been a great privilege to work with pupils whose work continues to meet the highest of expectations. To conclude, Valerie Tsang (Year 12, Gellhorn) selected the quote for the back cover: “The work may be hard, and the discipline severe; but the interest never fails, and great is the privilege of achievement.” This is something well worth remembering in any of our endeavours.

Eva Cliffe
Teacher of Chemistry, Head of Scholars & Assistant House Mistress
CONTRIBUTORS

Valerie Tsang (Year 12, Gellhorn) - Editing & Graphic Design
Stephenie Chen (Year 11, Gellhorn) - Author & Editing
Alissa Lui (Year 12, Keller) - Author & Editing
Justin Chan (Year 13, Peel) - Author
Callum Sanders (Year 8, Parks) - Editing & Graphic Design
Ayuka Kitaura (Year 12, Gellhorn) - Author
Dora Gan (Year 8, Shackleton) - Author & Graphic Design
Rocco Jiang (Year 12, Churchill) - Author & Editing
Hebe Cheuk (Year 10, Wu) - Graphic Design
Eunice Lam (Year 12, Gellhorn) - Author & Editing
Katrina Tse (Year 12, Keller) - Author & Editing
Abraham Yeung (Year 8, Shackleton) - Editing
Glory Kuk (Year 13, Keller) - Author
Yin En Tan (Year 13, Sun) - Author
Amy Wood (Year 13, Keller) - Author
CONTENTS

SCIENCE AND...
Science and Art: ‘And’ not ‘Or’ (Eunice Lam)
Science and the Law: Can Genes be Blamed for Crime? (Stephenie Chen)
Science and Faith: Can we Apply Each to the Other? (Mrs Eva Cliffe)

APPLICATION OF SCIENCE
Reconstructing Fragrances from Extinct Plants (Alissa Lui)
Algorithms in Daily Life (Rocco Jiang)
Can Driverless Cars Be Integrated Into Society? (Ayuka Kitaura)
Superbugs and Antibiotics (Callum Sanders)
Ins and Outs of Robotics in the Medical Field (Katrina Tse)

EXPERIMENTAL
To What Extent Does a Person’s Genetics or Environment Influence their Likelihood of Having Perfect Pitch? (Amy Wood)
An Investigation Into Designing A Low Cost and Biodegradable Sanitary Napkin for Women in Rural China (Glory Kuk)
To What Extent Do Individual Musical Elements Contribute Towards Emotional Response? (Justin Chan)
Making a Successful Railgun (Yin En Tan)
ABOUT THE SCIENTIFIC HARROVIAN

The Scientific Harrovian is the Science Department magazine which allows scientific articles of high standard to be published annually. In addition, the Scientific Harrovian is a platform for more experienced pupils to guide authors and to develop skills to help them prepare for life in higher education and beyond.

Copyright notice
Copyright © 2019 by The Scientific Harrovian. All rights reserved. No part of this book or any portion thereof may be reproduced or used in any manner whatsoever without the express written permission of the publisher, except for the use of brief quotations in a book review.

Joining the Team
Is there something scientific that excites you that you’d like to share with others? Will you commit to mentoring budding Science writers? Do you have graphic design skills? Our team may have just the spot for you. Email the current Staff Editor to apply for a position or for article guidelines.
Science and Art: ‘And’ not ‘Or’
Eunice Lam (Year 12, Gellhorn)

INTRODUCTION
University applications are just around the corner and, like many, I find myself indecisive in the face of career choices. Over the years I have often been labelled either an ‘art’ person or a ‘science’ person, but as someone heavily intrigued by both fields, my multi-faceted interests are often eyebrow-raising, or at the very least surprising, to others. One of the most primitive, innate human desires is to understand and interpret the world around us. This fuels the underlying motivation of a scientist’s and an artist’s work, explaining the demand for both of these roles. It is a mistake in our culture to separate the two. Without the two fields coexisting, many innovations we rely on daily would not exist.

HOW DO THEY DIFFER?
Before delving into this subject any further, it is crucial to clarify what science and art mean. Science is concerned with the theories humans use to explain observable and repeatable phenomena in the natural and social world. A prolonged process of measurement and data collection provides evidence and allows conclusions to be drawn. Meanwhile, art is the physical representation of an idea, most commonly through visual and sonic presentation, which allows people to interpret and connect in their own unique way. Art is fundamentally subjective and often provokes visceral responses, whereas science is more grounded and objective.

APPLICATIONS OF ART IN SCIENCE
There are ample uses of art in science, but a particularly striking case I have come across recently involves the use of colour to assist in visualising the patterns of polynomial roots. Mathematician Sam Derbyshire wrote Java programs to make a high-resolution plot of all roots of all polynomials of degree less than or equal to 24 with coefficients of 1 or -1. The frequency with which each root occurs is shown using a spectrum of colours, from black through dark red and yellow to white, revealing these algebraic patterns:
Sam Derbyshire’s artistic take on polynomial roots
Close-up on a section
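For readers who want to experiment, the idea translates into a short program. The sketch below is only an illustration (Derbyshire's actual code was written in Java and went up to degree 24; here the degree is kept small so it runs in seconds):

    # Plot the complex roots of every polynomial of degree <= 12
    # whose coefficients are all +1 or -1 (8192 polynomials in total).
    import itertools
    import numpy as np
    import matplotlib.pyplot as plt

    DEGREE = 12

    xs, ys = [], []
    for coeffs in itertools.product([1.0, -1.0], repeat=DEGREE + 1):
        roots = np.roots(coeffs)   # numerically find all complex roots
        xs.extend(roots.real)
        ys.extend(roots.imag)

    # A 2D histogram approximates the colour-by-frequency effect of the original.
    plt.hist2d(xs, ys, bins=600, cmap="hot")
    plt.gca().set_aspect("equal")
    plt.title("Roots of +/-1 polynomials up to degree 12")
    plt.show()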
SCIENTIFIC EXPLANATIONS FOR ART
The scientific ideas underlying forms of art are often overlooked: the craft and theory of musical instruments rely heavily on physics, while traces of mathematics remain in architectural designs throughout history.

The Fundamentals of Music
The behaviour of a musical instrument can be explained by the phenomenon of standing waves. When a pipe is blown, the air molecules inside oscillate, and the reflection of waves at the ends of the pipe (an effect of acoustic impedance) produces a stationary pattern of maximum and minimum amplitude: a ‘standing wave’. The frequency of the sound produced is determined by the length of the pipe; for a pipe open at both ends, the fundamental frequency is f = v/2L, where v is the speed of sound and L is the pipe length, so halving the length doubles the pitch. Harmony in music can then be explained by the relationship between the repeating wave patterns of different musical notes.

Golden Ratio in Architecture
The Fibonacci sequence, named after the Italian mathematician Leonardo Bigollo (Fibonacci), is a sequence of numbers in which each term is the sum of the two that precede it, starting from 0 and 1. The ratios between consecutive Fibonacci numbers tend to a value known as the golden ratio (φ). This mathematically “perfect” ratio has been identified in measurements of many ancient architectural designs, from the Parthenon in Athens to the Pyramid of Giza in Egypt.
Fibonacci Sequence Ratios tend towards the Golden Ratio of 1.61803…
The Golden Ratio found in Pyramids
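The convergence is easy to verify yourself; a few lines of Python (an illustration, not from the article) print the ratios settling towards φ:

    # Ratios of consecutive Fibonacci numbers converge on the
    # golden ratio, phi = (1 + sqrt(5)) / 2 = 1.618033...
    a, b = 0, 1
    for n in range(2, 21):
        a, b = b, a + b
        print(f"F({n})/F({n-1}) = {b}/{a} = {b / a:.6f}")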
CONCLUSION
“To develop a complete mind: Study the science of art; study the art of science. Learn how to see. Realize that everything connects with everything else.”
Leonardo Da Vinci (1452 – 1519)

Artistic approaches to scientific laws allow people to interpret discoveries in different ways, whilst scientific fundamentals are prevalent in most artistic creations. The intertwining nature of the two makes any dichotomy between science and art unnatural. In recent years there has been a resurgence of the idea of such coherence, long after Da Vinci spoke the words above, and changes have been made in educational systems around the world. We are stepping away from the traditional curriculum of STEM, short for “Science, Technology, Engineering, Mathematics”, towards STEAM, with the A for “Art”. As for Hong Kong, Ocean Park is holding its first international STEAM conference and competition for both teachers and pupils this year. At last we are progressing into an era where “Science or Art” will soon be replaced by “Science and Art”.

BIBLIOGRAPHY
http://math.ucr.edu/home/baez/roots/
https://www.onlinemathlearning.com/fibonacci-numbers.html
https://www.goldennumber.net/great-pyramid-giza-complex-golden-ratio/
https://link.springer.com/article/10.1007/s00004-014-0193-9
Science and the Law: Can Genes be Blamed for Crime?
Stephenie Chen (Year 11, Gellhorn)
INTRODUCTION
Nature or nurture? This has been a fundamental moral question throughout history, and it remains a pressing one today. The presence of behavioural genetics in the courtroom embodies this question and has had serious implications for criminal law, further blurring the line between guilt and innocence. Scientists have long suspected a link between criminology and biological determinism, an idea that began with the study of phrenology and continues to be explored with new developments in genetic research. One gene that has sparked particular interest is MAOA, which encodes an enzyme called monoamine oxidase A. A defect in this gene has been found to result in increased violence in response to provocation. However, this gene has proven controversial within the scientific community, with geneticists arguing both for and against its use in courtrooms, and holding different beliefs about what its effect on court rulings should be. Some argue that the presence of the defective gene shows the criminal wasn’t acting entirely upon free will, and hence should not be punished as severely for behaviour influenced by the genes they were born with. Others believe that the presence of the gene is evidence that the criminal’s inherent biology cannot be changed, so the criminal is more likely to commit crimes again and deserves a longer sentence. There is no single correct answer to this question, but it is undeniable that this gene has had an impact on court rulings, with the sentences in some previous cases being affected by a positive test for a defective MAOA gene. This article will focus on the influence of genetics on law, and how it could potentially alter our understanding of free will.

HISTORY OF BLAMING INNATE HUMAN CHARACTERISTICS FOR CRIME

1.1 PHRENOLOGY
Phrenology is a now-discredited pseudoscience that was believed to explain behaviour and influenced many beliefs regarding criminality in the 19th and early 20th centuries. It was one of the first ideas to raise doubt over whether criminals truly have the choice to commit a crime, and if not, whether they should be punished for it. It used to be widely believed that phrenology had the capability to explain behaviour, much like the modern notion that genes have some role in determining a person’s behaviour.
Phrenology is based on the idea that brain development shapes skull development, and since human behaviour originates in the brain, skull shape should therefore reveal someone’s behaviour (Fig. 1). Furthermore, it was believed that different areas of the brain controlled different characteristics, including specific areas corresponding to the tendency to commit criminal acts such as murder and thievery. A projection or depression would signify a strength or weakness in a particular area, and these bumps were believed to reveal whether someone was destined to act a certain way [1].
Figure 1: An example of a phrenological chart depicting the link between the area of the cranium and the behaviour it affected. (Source: Wikimedia Commons)
1.2 RESEARCH BASED ON PHRENOLOGY Cesare Lombroso was a 19th century Italian legal scholar, criminal anthropologist and scientific racist (someone who thinks that racism can be justified using science) who believed that criminals were ‘degenerates’: people who had undergone evolutionary regression [2]. In 1870, he examined the cranium of a brigand (a gang member who operates in forests and mountains) and found a depression where he expected a projection (due to the cerebral falx), which led him to research the significance of cranial anatomy in criminality, writing that “...like a large plain beneath an infinite horizon, the problem of the nature of the delinquent was illuminated which reproduced in our time the characteristics of primitive man right down to the carnivores.” [2] He believed this observation supported his ‘degeneration’ theory, and was proof that it was in the inherent nature of the criminal to commit crimes. Furthermore, the skull had features typical of lemurs and some rodents, which he viewed as evidence to support his theory that the criminal had a biological link to these inferior animals.
Figure 2: Cesare Lombroso (Source: Wikimedia Commons)
He conducted research in prisons and mental asylums, mapping cranial shapes and sizes, and concluded that there was a clear pattern between cranial anatomy and criminality, as he believed all the skulls he studied showed signs of degeneration [3]. However, he faced criticism for his sample group: he only studied prisoners to draw conclusions about crime, and did not compare their skulls with those of non-criminals. He also believed that tattoos (Fig. 3) were a sign that a person had a degenerated brain, as he argued that only people with degenerated brains would subject themselves to the pain of a tattooist’s needle [4].
Lombroso’s theories had important consequences for the law, as they ‘proved’ that criminality was a form of psychiatric disorder, and that it was not free will that led to a criminal act, but innate biology that rendered the person unable to control his criminality. He argued that his theories showed that criminal punishment is for the protection of society, as those who committed criminal acts were born criminals, with no hope of reform.
Figure 3: One of the tattooed criminals Lombroso examined. (Source: live.staticflickr.com)
1.3 INFLUENCE OF PHRENOLOGY
On October 24, 1853, Patrick O’Connor and Henry Bradley were hanged for bush-ranging. After their deaths, their skulls were examined, and their shape convinced the scientists examining them that they were innately killers. From this examination, they decided that O’Connor was a “violent murderous man” and Bradley was, essentially, a “criminal idiot”. However, the scientists also concluded that O’Connor and Bradley should have been treated in a mental facility rather than hanged, as it wasn’t their fault they committed these crimes: their free will had no say in the matter [4]. The opposing beliefs of these scientists and Lombroso illustrate two viewpoints that are still commonly held regarding the implications of having natural violent tendencies, and highlight how disagreement in the past was not so different from the present.

1.4 GENES
The idea that there could be a genetic explanation for violence began in the late 1970s, when a woman sought help from a hospital in the Netherlands to explain abnormally violent tendencies in the men of her family, traceable as far back as 1870. One man had tried to rape his sister, another tried to run over his boss with a car, and one forced his sisters to undress at knifepoint. Geneticists searched painstakingly for the origins of this disorder, drawing clues from the fact that only men were affected, and that the affected men all seemed to have a relatively low IQ. This suggested that the disorder was due to an X-chromosome defect, to which men are more vulnerable as they have only one X-chromosome (Fig. 4), whereas women have two and therefore do not experience the effects as long as the second chromosome is normal; however, they can carry the gene for the defect and pass it on to their children. Furthermore, X-linked mental retardation, which could also cause aggressive behaviour, was already well known at that point [5].
Figure 4: X and Y Chromosome (Source: Jonathan Bailey, National Human Genome Research Institute)
By 1988, as the men in the family continued to be violent, the woman sought help again. With the advancements in science by this point, geneticists were able to use genetic linkage analysis to detect the location of the relevant genes. Genetic linkage analysis is based on the observation that genes which are physically close on a chromosome remain close during meiosis. Scientists could therefore find a stretch of DNA always inherited in affected people and not in the unaffected, using it as a genetic marker to locate the defective gene on the chromosome [6]. In this case, the geneticists found a marker on the short arm of the X-chromosome near a gene that codes for monoamine oxidase A (MAO-A), an enzyme that breaks down three neurotransmitters (Fig. 5). The team tested the urine of the violent men in the family and found excessive levels of all three neurotransmitters and low levels of breakdown products, thus concluding that the violence and aggression were a behavioural phenotype due to MAO-A deficiency. The effects of this MAO-A deficiency came to be known as Brunner syndrome, after one of the lead geneticists on the case.

SCIENCE BEHIND THE THEORY

2.1 WHAT IS MAOA?
Monoamine oxidase A (MAO-A) is the enzyme encoded by the MAOA gene, and it is a key regulator of normal brain function. “Monoamine oxidases are enzymes that are involved in the breakdown of neurotransmitters such as serotonin and dopamine and are therefore capable of influencing feelings, mood, and behaviour of individuals” [7]. These neurotransmitters belong to a class called monoamines, which need to be destroyed before they can be reused, and MAO-A breaks them down after usage.
Figure 5: Location of the MAOA gene on the short arm of the X-chromosome (Source: Wikimedia Commons)
MAO-A has been shown to play an important role in human behaviour in previous studies, including a study in which transgenic mice with a deletion of the MAOA gene showed behavioural changes such as trembling and increased aggression in adult males, and the previously mentioned study by Brunner, which showed that the mild mental retardation and impulsive aggression in the males of the Dutch family were associated with a mutation in the MAOA gene [8].
Different forms of the gene result in different levels of enzyme activity: the high-activity form (MAOA-H) produces more of the enzyme, whereas the low-activity form (MAOA-L) produces less [9]. These forms differ in a repeated stretch of DNA in the gene’s promoter region, of which there are 5 variants: 2R (with 2 repeats), 3R, 3.5R, 4R, and 5R. The most common, ‘normal’ variant of the gene is MAOA-4R, which has 4 repeats and results in high-activity breakdown of monoamines [10]. In one study, 3.5R and 4R were found to be more active (transcribed more efficiently) than 2R, 3R, and 5R. MAOA-3.5R and MAOA-4R are therefore categorised as MAOA-H, while 2R, 3R and 5R are classified as MAOA-L, with MAOA-2R found to result in the lowest level of activity [11].

2.2 THE “WARRIOR GENE”
MAOA-5R, MAOA-3R and MAOA-2R are the variants classified as MAOA-L, producing less of the MAO-A enzyme. With less enzyme available, the monoamines are not broken down as efficiently, so those neurotransmitters stay in the brain for longer, creating differences in behaviour. The MAOA-L variants have therefore been dubbed the ‘Warrior Gene’.
Figure 6: Effect of genes and childhood maltreatment on anti-social behaviour [14]
Researchers have found a correlation between elevated levels of one of these monoamines, dopamine, and impulsive aggression [12], which may be the link between MAOA-L and violent behaviour. MAOA-L is also linked to an underactive prefrontal cortex (the area that inhibits antisocial impulses), but was found to be associated with antisocial behaviour only in European Americans who were sexually or physically abused as children (Fig. 6). This is strong evidence highlighting the importance of ‘nurture’ in determining future characteristics, which cannot be attributed solely to ‘nature’ [13]. MAOA-3R was the first variant associated with antisocial characteristics, but MAOA-2R was later found to result in more extreme characteristics, earning the name “extreme warrior gene” and leading scientists to research further into whether there is a genetic basis for criminal activity [14].

2.3 DOUBT SURROUNDING THE LINK BETWEEN MAOA AND CRIMINALITY
There seems to be compelling evidence for MAOA being linked to aggression and therefore criminality, and for the 2R allele having a greater effect than the 3R. However, the 3R variant of the MAOA gene is actually found in approximately 56% of Maori males, 56% of African American males, 34% of European males, 61% of Taiwanese males, and 56% of Chinese males [14], suggesting that most people with the 3R variant are law-abiding citizens. Some people have taken advantage of these findings to assert that some races are more violent than others (which is scientific racism), an example of how scientific evidence can be warped to suit a constructed narrative.
Moreover, much of this evidence relies on uneven sample groups, as the mutated alleles are not distributed evenly across ethnic groups, with the 2R allele found in only 5.5% of African American men, 0.9% of Caucasian men, and 0.000675% of Asian men. Since the sample groups used to investigate the link between MAOA-2R and aggression are of very different sizes, the findings are not as reliable as they could be [14]. Furthermore, the aggression could be mostly due to environmental circumstances. MAOA researchers in China concluded that the genes played “an almost negligible role in aggressive behaviour compared to environmental factors such as poor social support, physical abuse, and instability at home”. A paper in 2003 suggested that the violence associated with the defective MAOA gene was worsened if the perpetrator had been sexually abused as a child, whereas abused children without the defective gene were less likely to become criminals; the presence of the MAOA-L gene can thus drastically change the outcome of the same experiences [15].

LEGAL IMPLICATIONS

3.1 WHAT DETERMINES THE SENTENCE FOR A CRIME?
Most crimes have a range of appropriate punishments stated in constitutions or statutes, and the judge, who ultimately determines the length of a sentence, can ‘aggravate’ or ‘mitigate’ punishment within that spectrum based on a variety of factors, including the jury verdict [16].

3.2 POSSIBLE LEGAL DEFENCES
There are a few possible legal defences based on the presence of a defective MAOA gene in a criminal [17]. Arguing full defence would absolve the criminal of the crime entirely, and with the MAOA argument, the defence of ‘insanity’ could possibly be used to achieve this. The argument for insanity is that the criminal did not know the nature of the crime was wrong, due to an error in their reasoning. However, this is a weak argument for MAOA: although it could be argued that the gene results in impulsive aggression, so the criminal wouldn’t have time to think fully about the act before committing it, it is unlikely that the criminal didn’t know that what they were doing was wrong, as there is no evidence to support this. It is more likely that defence lawyers will try for partial defence, which, if the criminal had killed someone, could change the conviction from murder to manslaughter. Although these sound similar, manslaughter is an “unlawful killing that doesn’t involve malice aforethought”, meaning the killer lacked the intent to seriously harm or kill. Manslaughter carries less moral blame than first- or second-degree murder, and hence results in a less severe punishment [18].
Diminished responsibility could potentially be argued: if the person were suffering from an “abnormality of mind” that “impaired his mental responsibility for his acts and omissions in doing or being a party to the killing” [19], he would not be convicted of murder. On this basis, people with lower MAOA activity could be argued not to have full mental responsibility for the murder due to the influence of their genes, and hence would receive a lesser sentence. Furthermore, provocation is a possible defence, as conviction requires that the person had the same level of self-control an ‘ordinary’ person of the same sex and age would have. Since MAOA-L gives people a natural tendency to respond aggressively to provocation, they wouldn’t have the same level of self-control an ‘ordinary’ person would have. However, it could be argued that since a significant percentage of the population do have MAOA-L, the self-control of all the people with this defective gene constitutes an ‘ordinary’ level of self-control, especially since most of them aren’t criminals.

PREVIOUS CASES
There have been previous cases where the defence argument has proven successful, resulting in this debate over whether or not the MAOA gene can be blamed for crime.

4.1 BRADLEY WALDROUP
Bradley Waldroup shot his wife’s friend (Leslie Bradshaw) eight times and sliced her head open with a sharp object [20], then attacked his wife (who survived) with a machete, cutting off her finger. Waldroup was charged with the felony murder of Bradshaw, which warrants the death penalty, and with the attempted first-degree murder of his wife [13]. When genetic testing found that Waldroup had the highest-risk version of the MAOA gene, his defence lawyer argued ‘diminished responsibility’ (see 3.2). The forensic psychiatrist responsible for Waldroup’s testing testified in court for the defence, citing the combination of the high-risk MAOA gene and child abuse as increasing one’s chance of being convicted of a violent offence by over 400% [20]. When combined with child abuse, the defective MAOA gene became a risk factor, making Waldroup more vulnerable to committing violent behaviour as an adult. Some of the jury were persuaded by the gene argument, with juror Debby Beaty stating “A diagnosis is a diagnosis, it’s there. A bad gene is a bad gene” and “Some people without this would react totally differently than he would”. After 11 hours, the jury convicted Waldroup of voluntary manslaughter and attempted second-degree murder, saving him from the death penalty and sentencing him to 32 years in prison instead, as they decided he was “unable to engage in the reflection and judgement necessary to premeditate the crimes” [15]. The main factor in this case was that the jury was swayed by the gene argument; another common theme is the presence of an abusive history, which may have influenced both Waldroup’s aggression and the jury’s decision. Either way, it is indisputable that the defective MAOA gene had an impact on this case.

4.2 ABDELMALEK BAYOUT
Abdelmalek Bayout, an Algerian living in Italy, admitted in 2007 to stabbing and killing Walter Felipe Novoa Perez, who, according to Bayout, had insulted him over his kohl eye makeup. The defence lawyer stated that his client may have been mentally ill, requesting three psychiatric reports, which confirmed that Bayout did have a mental illness.
This led the judge to agree that his mental illness was a mitigating factor, thus sentencing Bayout to 9 years and 2 months [21].
However, after the defence identified Bayout as having abnormalities in 5 genes linked to violent behaviour, including low levels of MAOA activity, they appealed to the court again with the argument that his genes would make him “particularly prone to be aggressive under stressful circumstances” [22]. The judges found the MAOA evidence especially compelling, reducing the sentence from 9 to 8 years. This was the first time behavioural genetics had affected the decision of a European court. Although Bayout was shown to suffer from a mental illness, it was his genes that led the judge to reduce his sentence, as “he was even more vulnerable because of that” [22]. This case illustrates how judges may rule in the defendant’s favour if it is shown he had less control over his actions, and that his free will was somewhat limited by his genes. Despite the sentence reduction, the judge’s use of “even more” shows that Bayout was already considered vulnerable due to his mental illness, and the effectiveness of the gene argument may simply be due to this pre-established vulnerability.

4.3 STEFANIA ALBERTANI
In 2009, Stefania Albertani pled guilty to killing her sister, burning her corpse, and attempting to kill her parents. Following the defence argument, her sentence was reduced from life to 20 years. Through genetic testing, psychiatric analysis and voxel-based morphometry (used to identify differences in brain anatomy) [23], the defence showed that she wasn’t in full control of her actions. She tested positive for low MAO-A activity and was shown to have a different brain composition in the anterior cingulate gyrus (Fig. 7, 8), which reduces inhibition, and in the insula areas, which is linked to aggressive behaviour [24]. This was the second time MAOA evidence had been used in an Italian court in two years, after Bayout’s case. A similarity between the three cases described here is that the defendants were all affected by some other factor, whether childhood abuse or a mental disorder, so the argument was never based purely on a defective MAOA gene.
Figure 7: Anterior Cingulate Gyrus [25]
Figure 8: Anterior Cingulate Gyrus (Geoff B Hall, Wikimedia Commons)
CONCLUSION
As it is their job, lawyers will use anything available to them to attempt to win a case and defend their client. Sometimes the gene argument is effective, and sometimes it isn’t; lawyers will attempt to use it anyway. However, the argument has its weaknesses: as previously stated, prosecutors could instead demand a tougher sentence, arguing that the criminal is inherently bad. The argument also has many critics, with some fearing that such cases could lead to genetic determinism being accepted in criminal law, reducing sentences or even absolving criminals of actions they have clearly committed. Furthermore, there is no direct link between having the gene and committing a crime: a high percentage of people with the defective gene are not criminals, so the criminal’s response in a given situation seems to be partly a matter of free will, even if the gene plays a part in shaping that response. Often there is another factor which, when coupled with low levels of MAO-A, makes the criminal more prone to losing control and responding with violence. This raises the question of which is more compelling: the gene argument, or the abuse/mental-disorder argument? Neither is the criminal’s own choice, and both affect regular, non-criminal citizens. A study cited previously found that people with the defective MAOA gene who were sexually abused as children are more likely to become criminals than those without the gene (see 2.3), showing that genes and environment may have a much larger impact when combined. In the near future, especially in this era of rapid scientific discovery, other genes may come into play, further increasing the need for an answer to whether genetic determinism is valid in a courtroom. This would also raise a plethora of moral questions. Was it the criminal’s own free will and actions that were responsible, or their genes? If the latter, should they be punished for their genes? How big an impact do genes have on free will? If genes were blamed solely for crime, the conclusion would be that humans do not have free will and are predetermined to act in a certain way, a notion that makes many people uncomfortable. However, as seen in previous cases, genes can play a role in reducing a sentence when the criminal did not have complete control over their actions, a belief that will be both challenged and developed in the years to come, as genetic knowledge grows and the line between guilty and not guilty becomes increasingly blurred.
BIBLIOGRAPHY
[1] “Biological Theories of Crime - Criminal Justice - iResearchNet.” https://criminal-justice.iresearchnet.com/criminology/theories/biological-theories-of-crime/5/
[2] Mazzarello, Paolo. “Cesare Lombroso: an anthropologist between evolution and degeneration”. NCBI, 2011. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3814446/
[3] Seigel, Micol. “Looking for psychopaths in all the wrong places: fMRI in court.” 10 Jun. 2014, http://theconversation.com/looking-for-psychopaths-in-all-the-wrong-places-fmri-in-court-27591
[4] Bradley, James. “Natural born killers: brain shape, behaviour and the history of phrenology.” 12 Jun. 2014, http://theconversation.com/natural-born-killers-brain-shape-behaviour-and-the-history-of-phrenology-27518
[5] Richardson, Sarah. “A Violence in the Blood”. Discover Magazine, Oct. 1993, http://discovermagazine.com/1993/oct/aviolenceinthebl293
[6] Pulst, Stefan M. “Genetic Linkage Analysis”. JAMA Neurology, Jun. 1999, https://jamanetwork.com/journals/jamaneurology/fullarticle/775035
[7] Hook, G. Raumati. ““Warrior genes” and the disease of being Māori”. MAI Review, 2009. http://review.mai.ac.nz/MR/article/download/222/222-1507-1-PB.pdf
[8] Sabol, S., Hu, S. & Hamer, D. “A functional polymorphism in the monoamine oxidase A gene promoter”. Hum Genet (1998) 103: 273-279. https://doi.org/10.1007/s004390050816
[9] Brown University. “‘Warrior Gene’ Predicts Aggressive Behaviour After Provocation.” ScienceDaily, 23 January 2009. https://www.sciencedaily.com/releases/2009/01/090121093343.htm
[10] Oubré, Alondra. “The Extreme Warrior gene: a reality check”. Scientia Salon, 31 Jul. 2014, https://scientiasalon.wordpress.com/2014/07/31/the-extreme-warrior-gene-a-reality-check/
[11] Guo, G., Ou, X. M., Roettger, M., & Shih, J. C. (2008). “The VNTR 2 repeat in MAOA and delinquent behavior in adolescence and young adulthood: associations and MAOA promoter activity”. European Journal of Human Genetics 16(5), 626–634. doi:10.1038/sj.ejhg.5201999 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2922855/
[12] Seo, D., Patrick, C. J., Kennealy, P. J. “Role of Serotonin and Dopamine System Interactions in the Neurobiology of Impulsive Aggression and its Comorbidity with other Clinical Disorders”. Aggression and Violent Behavior 13(5) (2008): 383-395. doi:10.1016/j.avb.2008.06.003 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2612120/
[13] Barber, Nigel. “Pity the poor murderer, his genes made him do it”. Psychology Today, 13 Jul. 2010, https://www.psychologytoday.com/us/blog/the-human-beast/201007/pity-the-poor-murderer-his-genes-made-him-do-it
[14] Hovet, Kristen. “Chasing the ‘warrior gene’ and why it looks like a dud so far”. Genetic Literacy Project, 20 Feb. 2018, https://geneticliteracyproject.org/2018/02/20/chasing-warrior-gene-looks-like-dud-far/
[15] “Why we can’t blame “warrior genes” for violent crime”. New Statesman, 16 Sep. 2016, https://www.newstatesman.com/2016/09/why-we-can-t-blame-warrior-genes-violent-crime
[16] “Factors Considered in Determining Sentences - Criminal Law”. FindLaw. https://criminal.findlaw.com/criminal-procedure/factors-considered-in-determining-sentences.html
[17] Baum, Matthew L. “The Monoamine Oxidase A (MAOA) Genetic Predisposition to Impulsive Violence: Is It Relevant to Criminal Trials?” Antonio Casella, 2011. DOI 10.1007/s12152-011-9108-6 http://www.antoniocasella.eu/nume/Baum_2011.pdf
[18] Berman, Sara J. “What Is Manslaughter? What Is Murder vs. Manslaughter?” Nolo. https://www.nolo.com/legal-encyclopedia/homicide-murder-manslaughter-32637-2.html
[19] Padfield, N. 2008. Criminal Law, 6th ed. New York: Oxford University Press.
[20] “Can Your Genes Make You Murder?” NPR, 1 Jul. 2010, https://www.npr.org/templates/story/story.php?storyId=128043329
[21] Feresin, Emiliano. “Lighter sentence for murderer with ‘bad genes’”. Nature, 30 Oct. 2009. doi:10.1038/news.2009.1050 https://www.nature.com/news/2009/091030/full/news.2009.1050.html
[22] Forzano, F., P. Borry, A. Cambon-Thomsen, S.V. Hodgson, A. Tibben, P. De Vries, C. Van El, and M. Cornel. 2010. “Italian appeal court: A genetic predisposition to commit murder”. European Journal of Human Genetics 18(5): 519–521.
[23] “Voxel-Based Morphometry - an overview”. ScienceDirect Topics. https://www.sciencedirect.com/topics/neuroscience/voxel-based-morphometry
[24] Feresin, Emiliano. “Italian court reduces murder sentence based on neuroimaging data”. Nature Blogs, 1 Sep. 2011, http://blogs.nature.com/news/2011/09/italian_court_reduces_murder_s.html
[25] Baars, Bernard J., and Nicole M. Gage. Fundamentals of Cognitive Neuroscience. Elsevier, 2013.
Science and Faith: Can we Apply Each to the Other?
Mrs Eva Cliffe
Based on an assembly presented on 18th March 2019

As a Science teacher, I often hear things like “Scientists can’t believe in God”, “Faith and Science aren’t compatible”, or “Science is about facts, but faith isn’t”. Of course, some scientists don’t have a faith, but there are also scientists who do. In fact, my view that the world is an absolutely amazing place, full of incredible things, fuels both my love of Science and my Faith. Please note: I am talking about faith, not religion. Obviously, this is a massive debate that we can’t even begin to cover in a short article, but I got to thinking: is it possible to see some similarities between science and faith?

DEFINITIONS
To address this, first of all, we need some working definitions. The Merriam-Webster dictionary defines Science as ‘the state of knowing’ or ‘knowledge or a system of knowledge covering general truths or the operation of general laws, especially as obtained and tested through the scientific method’. In contrast, definitions for faith include ‘complete trust’, ‘a firm belief in something for which there is no proof’ and ‘something that is believed especially with strong conviction’ [1].

In the recently performed School musical ‘The Wiz’, which is based on the famous childhood story of The Wizard of Oz, Dorothy helps demonstrate what faith is. Addaperle, the Good Witch of the North, tells her that if she wants to get home to Kansas, she needs to go to see the Wiz in the Emerald City. But then Addaperle disappears, and the Munchkins tell Dorothy to ‘ease on down’ the Yellow Brick Road. Dorothy then sings the song ‘As soon as I get Home’, in which she expresses her uncertainty about the world she’s in, but puts her faith in the Munchkins and the Yellow Brick Road. Dorothy’s faith is rewarded when she follows the Yellow Brick Road and makes it to the Emerald City, through the requisite trials and temptations and collection of friends along the way. Now I would like to consider whether some aspects of Science can be applied to faith, and whether faith is important in Science.
ESO/B. Tafreshi (twanight.org)
APPLYING THE SCIENTIFIC METHOD TO FAITH
The scientific method can be described in the following steps (a short code sketch of steps 2 and 3 follows the graph below):

1. Recognition and formulation of a problem, or question, which could be as simple as: I notice that balls bounce. Does the height from which I drop a ball affect the height of its bounce?
2. Collection of data through observation and experiment: I drop a ball from different heights, measure the height it bounces, and draw a graph.
3. Formulation and testing of hypotheses: from my results I can make theories and predictions, such as: the higher the ball is held, the higher the bounce. Then every time someone bounces a ball, the theory is being tested, and the result is evidence either for or against it.
@Greywind on plot.ly
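In code, steps 2 and 3 might look like the sketch below. The “measurements” here are invented purely for illustration; a real experiment would, of course, record its own drop and bounce heights:

    # Hypothetical measurements: drop height vs. bounce height, in metres.
    import numpy as np

    drop_heights = np.array([0.5, 1.0, 1.5, 2.0])
    bounce_heights = np.array([0.38, 0.74, 1.13, 1.49])

    # Hypothesis: bounce height is proportional to drop height.
    # Fit a straight line and read off the rebound ratio (the slope).
    slope, intercept = np.polyfit(drop_heights, bounce_heights, 1)
    print(f"rebound ratio ~ {slope:.2f}")

    # Prediction to test with the next experiment:
    print(f"predicted bounce from a 3 m drop: {slope * 3 + intercept:.2f} m")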
Now that we know what the Scientific Method is, can we apply this method to support faith?

1. Recognition and formulation of a problem: I observe that millions of people around the world believe in a God that they pray to for miracles. Does prayer affect whether a miracle happens?
2. Collection of data through observation and experiment: Aaron Shust is a Christian musician whose son Michael was born with Down Syndrome, which resulted in a wide range of health issues, including serious problems with his hearing. In a video of his story, Aaron explains that his wife took his son to a prayer meeting on a Friday just before an important set of tests: “We had multiple tests all throughout the past year and they always came back with the results of flatline hearing… no hearing showing up in the tests whatsoever in both ears. And she said that during the prayer up at the altar [Michael] began to cry, and she wasn’t quite sure why. Of course, we had our hopes, but maybe he was tired, any number of things, maybe he was hungry… The next morning my wife calls me… crying, saying that she called Michael by name as he was playing on the hotel room floor, and he turned his head, for the first time in 14 months acknowledging her calling his name, and we started to have this ray of hope that maybe he could hear… and we got clinical proof on Monday… the graph… that showed spikes and peaks, and valleys and mountains: a full range of perfect hearing.” [2]
3. Formulation and testing of hypotheses: after collecting more data, with similar results, I can make theories and predictions, such as: prayers result in miracles. Then every time someone prays to God for a miracle, the theory is being tested.
As a Scientist, I know that there are flaws in this analogy; I know that correlation does not imply causation, and that it could all be coincidence, confirmation bias, and so on. There is also the question, “What about all those times when people pray for miracles and nothing happens?”, and even, “If God is omnipresent, omnipotent and omniscient, is any of this valid anyway?!” Well, this is more than a lifetime’s worth of debate, but my main point is that you could consider this as evidence, gathered using the scientific method, for the existence of a God who listens to prayer and does miracles.

APPLYING FAITH TO SCIENCE
Now, let’s move on to how faith contributes to science. As your Chemistry teacher, I’m sure you have complete trust in me (you have faith in me) when I tell you that there are sub-atomic particles in an atom called electrons. But have you actually seen electrons? I haven’t, but I still believe that they exist! There is plenty of evidence for the existence of electrons, such as the vapour trails formed in a cloud chamber or the fluorescence produced in cathode-ray tubes (which is how old TV and computer monitors worked), and of course electricity!
Power lines
So, we believe in the existence of electrons due to their effects on the world, and believing they are there enables us to explain many phenomena. Isn’t this similar to believing in the existence of God due to His effects on the world? And if you believe He is there, doesn’t that enable you to explain many phenomena too? Another way in which Science needs faith is when scientific knowledge develops. Scientists need to have faith in their interpretation of data and theories, even if others disagree. A classic example is when Mendeleev formulated his Periodic Table in 1871. He was convinced that there must be more elements, not yet discovered, that would fit the patterns he had seen, and he left spaces for them. Other scientists dismissed this idea, but he was proven correct when gallium and germanium were discovered, fitting perfectly into two of the missing spaces. Today, we have this system of elements because of his faith in his work. Also, remember that theories evolve in Science. Each time we put our faith in a new theory, there is always the potential for that theory to be challenged by new evidence.

CONCLUSION
So, maybe you can apply some science to faith, and faith is important in Science. I am sure there are plenty of arguments you can think of around this topic, and I encourage you to discuss them with others in an open-minded and respectful way, bearing in mind that faith can be a very personal thing. On a final note, another perspective is that Science and Faith are completely different things and there is no conflict between them, simply because they answer different questions. Science is about HOW, and faith is about WHY. Even if we understood everything about how the universe works, why does it even bother to exist in the first place?

BIBLIOGRAPHY
[1] Merriam-Webster. https://www.merriam-webster.com/dictionary/science, https://www.merriam-webster.com/dictionary/faith
[2] YouTube, “Aaron Shust shares Michael’s miracle story”. https://www.youtube.com/watch?v=mVRy-t3S0ts
Reconstructing Fragrances from Extinct Plants: Science Fiction Turned Reality?
Alissa Lui (Year 12, Keller)
Many may recall the revival of dinosaurs in the hit film Jurassic Park: DNA was extracted from the blood inside fossilised insects trapped in amber, the DNA was reconstructed and synthesised, and the result was injected into ostrich eggs [1]. This concept was made into a reality when Ginkgo Bioworks, a Boston-based synthetic biology company, decided to replicate the distinctive scent of the endemic Hawaiian mountain hibiscus (H. wilderianus), which most likely vanished from the forests of Maui in 1912.
To begin with, perfumes contain terpenes, aromatic organic compounds found in plant oils. Limonene, for example, is a naturally-occurring terpene found in citrus peels and used as a fragrance or a flavouring [2]. In this case, however, the terpenes responsible for H. wilderianus’s piney, earthy fragrance had to be identified and constructed. In May 2016, Ginkgo’s creative director Christina Agapakis and colleagues [6] set out to identify terpene-making genes from extinct plants. They looked at the Hawaiian mountain hibiscus in particular, as well as the Falls-of-the-Ohio scurfpea and the Wynberg conebush, all of which became extinct within the last 200 years. They did so by taking plant samples from the Harvard University Herbaria and Botanical Museum, home to millions of dried plant specimens.

Clipping of H. wilderianus (Board of Agriculture & Forestry, Hawaii)

However, because DNA breaks down over time, a process accelerated by exposure to heat, water, and sunlight, paleogenomicists have to piece together the DNA fragments. It is estimated that the half-life of DNA is 521 years, so the more recently extinct the species, the easier it is to reconstruct its DNA [5]. The plant samples were sent to Beth Shapiro, a paleogenomicist at the University of California, Santa Cruz. By using similar DNA sequences as a starting point, Shapiro could genetically assemble the genes coding for the enzymes that make the flower’s fragrance molecules, working through trial and error in an organised way. Gaps found in the sequences of the extinct plants were filled using existing plant DNA, resulting in the construction of approximately 2000 genes in total [1]. A DNA synthesiser was used to produce the DNA sequences, which were then inserted into yeast* cells. The yeast cells were then able to produce the terpenes encoded by the inserted DNA, and a mass spectrometer was used to analyse the terpene molecules expressed by the genes. The terpene profiles were sent to Sissel Tolaas, an olfactory artist, who mixed and matched the identified molecules to create 11 possible combinations. The engineered yeast cells responsible for making the chosen terpenes were grown in fermentation vats for mass production [1]. This isn’t new to the company, however: before their project on “de-extincting plants” to make perfume, Ginkgo also used yeast to make rose oil, through the similar method of inserting the gene coding for rose oil production. This industry is expanding rapidly; for example, Ginkgo has foundries specialising in different areas, including synthesising genes [9] and engineering mammalian cells for pharmaceutical research and manufacturing [8]. By enhancing device performance, products are being developed at a rapid rate, enabling both large and small pharmaceutical companies to benefit from advanced technology at a reasonable price [8].
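That 521-year half-life is easy to turn into a feel for the numbers. A small illustrative calculation follows (the 107-year figure is simply my assumption, reflecting that H. wilderianus vanished around 1912):

    # Fraction of DNA bonds still intact after a given time,
    # assuming exponential decay with a 521-year half-life.
    def dna_fraction_remaining(years: float, half_life: float = 521.0) -> float:
        return 0.5 ** (years / half_life)

    print(f"{dna_fraction_remaining(107):.1%}")   # ~86.7% after roughly a century
    print(f"{dna_fraction_remaining(5000):.1%}")  # ~0.1% for an ancient sample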
*Why Yeast?
Though the use of yeast dates back many centuries, it is only relatively recently that yeast has been recognised as a biotechnological tool. It reproduces quickly and is easy to work with, but most importantly, yeast is more similar to human cells than we may think. To start with, both cells are eukaryotic: the DNA chromosomes are enclosed within the nucleus. Furthermore, many metabolic and cellular pathways seen in humans are also observable in yeast. For example, the same genes responsible for regular cell division in yeast are found mutated in human cancer cells, and at least 20% of human genes contributing to diseases are also found in yeast cells, so yeast can also be used for drug testing and research on human genetics.

Though the revival of the scent of extinct plants was only recently achieved, manufacturing flavours and aromas using yeast was not. Various brands use patchouli fragrance produced in genetically modified yeast by the biotech company Amyris; Tom Ford’s Patchouli Absolu is one example [3]. As well as reconstructing the genes for fragrances, scientists have extracted DNA from Neanderthal remains and expressed it in monkey cells to help us further understand hair and skin pigmentation in these ancient relatives, and woolly mammoth genes have been inserted into human cells to study how the mammoth withstood the extreme temperatures of the Ice Age. Researchers at IBM are using artificial intelligence to mix and match compounds to create fragrances that humans have never thought of, and it is predicted that AI-developed perfumes will hit the market this year. However, Agapakis’ goal at Ginkgo Bioworks takes a different approach: instead of competing commercially with designer perfume brands, Agapakis aims to express the great potential of synthetic biology through art, as well as encouraging appreciation of the plants that we have lost [1].

BIBLIOGRAPHY
[1] Dolgin, Elie. “Jurassic Park for Perfume: Ginkgo Bioworks Reconstructs Scents From Extinct Plants.” IEEE Spectrum, 1 Nov. 2018. https://spectrum.ieee.org/the-human-os/at-work/start-ups/jurassic-park-but-for-perfume-ginkgo-bioworks-reconstructs-scents-from-extinct-plants
[2] “Limonene: Uses, Side Effects, Interactions, Dosage, and Warning.” WebMD. www.webmd.com/vitamins/ai/ingredientmono-1105/limonene
[3] Brouillette, Monique. “Would You Feel Sexy Wearing Eau De Extinction?” MIT Technology Review, 6 Dec. 2016. www.technologyreview.com/s/602899/would-you-feel-sexy-wearing-eau-de-extinction/
[4] “Synthetic Biology Used to De-Extinct Plants to Manufacture Perfume.” Healthinnovations, 10 Nov. 2018. health-innovations.org/2018/11/09/synthethic-biology-used-to-de-extinct-plants-to-manufacture-perfume/
[5] Rettner, Rachael. “Boston Strangler Case: How Long Does DNA Last?” LiveScience, 12 July 2013. www.livescience.com/38150-dna-degradation-rate.html
[6] Chakravarti, Deboki. “Resurrecting the Genes of Extinct Plants.” Scientific American, 18 Jan. 2019. www.scientificamerican.com/video/resurrecting-the-genes-of-extinct-plants/
[7] Molteni, Megan. “Ginkgo Bioworks Is Turning Human Cells Into On-Demand Factories.” Wired, 24 Oct. 2018. www.wired.com/story/ginkgo-bioworks-is-turning-human-cells-into-on-demand-factories/
[8] Ginkgo Bioworks. “Ginkgo Bioworks Opens Bioworks4, Adds New Tools for the Rapid Genetic Engineering of Mammalian Cells.” PR Newswire, 24 Oct. 2018. www.prnewswire.com/news-releases/ginkgo-bioworks-opens-bioworks4-adds-new-tools-for-the-rapid-genetic-engineering-of-mammalian-cells-300736726.html
[9] “Foundries.” Ginkgo Bioworks. www.ginkgobioworks.com/foundries/
Algorithms in Daily Life
Rocco Jiang (Year 12, Churchill)
Source: The Financial Times: Audit the Algorithms that are Ruling our Lives
INTRODUCTION
What comes to mind when you hear the word ‘algorithm’? Most people, when confronted with this question, would answer with something to do with technology or coding. In a sense, they would be correct. Algorithms are what make computers work; without them, computers would be useless, unable to complete any tasks. They are the instructions that allow computer systems to solve a variety of complicated problems, faster than any human could. And they are everywhere in modern life, from Netflix suggestions to financial markets.

However, in its simplest terms, an algorithm is just a collection of simple instructions for carrying out a particular task in finite time [1]. An algorithm takes an input, does something to it, and provides an output. We take advantage of the computer’s ability to perform billions of operations per second to run algorithms at lightning speed, but algorithms don’t need to have anything to do with computer code. The concept of algorithms has existed for millennia; some of the first were used by the Ancient Babylonians around 1600 BCE, for purposes such as finding square roots and calculating compound interest [2, 3]. In fact, you use algorithms yourself every day, whether you know it or not. If you think about our earlier definition of an algorithm, you could also call it a ‘procedure’ or even a ‘recipe’ [1]. Yes, even the completely ordinary task of cooking something from a recipe is you following an algorithm. You take your ingredients (input), follow the instructions, and produce a delicious meal (output)! A cooking recipe is nowhere near as complicated as the complex algorithms that power Google or Facebook, but it is an algorithm nonetheless.

This article aims to acquaint you with algorithmic thinking by introducing a range of simple algorithms in the context of everyday problems. The algorithms that will be introduced are simple – some may even be so intuitive that you already use them without thinking about it – but they still hold great importance in the computer systems that allow us to do things you’d never think twice about, such as accessing the Internet. Furthermore, different algorithms used to approach the same task will be compared, to see how efficient they are relative to each other. After all, speed is a very important factor to consider when implementing an algorithm – you wouldn’t want to use an algorithm that takes an hour to complete if there exists an alternative that only takes a minute.
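To make this concrete, here is one of those ancient algorithms in modern dress: the Babylonian method for square roots, written as a short Python sketch (an illustration of the idea, not a historical reconstruction):

    # The Babylonian (Heron's) method: repeatedly average a guess
    # with n / guess; the average is a better guess, and the loop
    # stops once the guess squares back to n closely enough.
    def babylonian_sqrt(n: float, tolerance: float = 1e-10) -> float:
        guess = n / 2 if n > 1 else 1.0
        while abs(guess * guess - n) > tolerance:
            guess = (guess + n / guess) / 2
        return guess

    print(babylonian_sqrt(2))  # 1.4142135623...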
TIME COMPLEXITY Time complexity describes the amount of time it takes to run an algorithm. However, we don’t measure this in seconds or minutes, because that’s not what we’re interested in. An algorithm’s running time will be different depending on the size of its input. For example, it would obviously take less time to find a specific item out of 100 items than, say, a million items! What we are interested in is how different algorithm running times increase at different rates when the input size increases. Instead of using seconds, time complexity is measured using Big O notation, which shows how quickly the runtime of an algorithm increases as the size of the input increases [4].
Figure 1: Common Big O runtimes
Big O notation is written with a ‘big O’ and some function of n within brackets, for example O(n²). SEARCHING Imagine that you are playing a simple game. An integer from 1 to 16 is randomly chosen and you need to guess which number it is. After every guess, you are told if your guess was too high or too low. How would you approach this? A very simple approach would be to guess 1, then 2, then 3, and so on. This method, known as linear search, is very easy to use, as you simply guess all the numbers in sequence [5]. However, it should be obvious that it is not the best approach, since the worst-case scenario would be that you take 16 guesses, if the number was 16.
Figure 2: Worst-case scenario of using linear search in the guessing game.
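To make this concrete, here is a minimal Python sketch of the guessing game solved by linear search (the function name and the 1-to-16 range are purely illustrative):

```python
def linear_search_guess(secret, low=1, high=16):
    """Guess every number in sequence until we hit the secret."""
    guesses = 0
    for guess in range(low, high + 1):
        guesses += 1
        if guess == secret:
            return guesses
    return None  # the secret was not in the range

print(linear_search_guess(16))  # worst case: 16 guesses
```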
An alternative approach that you most likely would have taken is to start off by guessing at the midpoint, 8. If 8 is too high, you can eliminate all numbers from 8 to 16; if 8 is too low, you can eliminate all numbers from 1 to 8. With one guess, you will be able to eliminate half of the numbers. Each guess further eliminates half of the remaining numbers, until you reach the correct number. Because of this, to find the worst-case scenario, we just need to find how many times you have to divide 16 by 2 in order to get to 1 – if there’s one number left it has to be the correct one. Therefore, the worst-case scenario would only require log₂16 = 4 guesses. This method, called binary search, is a much more efficient way to solve this problem, as every guess halves the range of further guesses [6].
Figure 3: Worst-case scenario of using binary search in the guessing game.
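The same game solved by binary search might look like the following sketch (again, the names here are illustrative):

```python
def binary_search_guess(secret, low=1, high=16):
    """Repeatedly guess the midpoint, halving the range each time."""
    guesses = 0
    while low <= high:
        guesses += 1
        mid = (low + high) // 2
        if mid == secret:
            return guesses
        elif mid < secret:  # 'too low': discard the lower half
            low = mid + 1
        else:               # 'too high': discard the upper half
            high = mid - 1
    return None

print(binary_search_guess(13))  # 4 guesses: 8, 12, 14, 13
```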
Now imagine applying binary search to a much larger list of items. Imagine trying to find one user out of the 2.3 billion users on Facebook (as of December 2018). Using linear search, your worst-case scenario would be having to check all 2.3 billion users! However, if you used binary search, the first user you check would already eliminate the need to check over 1 billion other users. Therefore, you’d only need a maximum of log₂(2.3×10⁹) ≈ 32 guesses. This shows how binary search becomes much more efficient as the number of items in the list increases. When expressed in Big O notation, the time complexity of linear search is O(n), also known as linear time. This means that the running time increases linearly with the size of the input, i.e. the runtime is directly proportional to the input size. You could be lucky and find the correct item on your first try; however, statistically this is not likely to happen, which is why we usually use Big O to express either the worst-case or average-case runtime. We have already established that, at most, binary search requires searching through log₂ n items for a list of n items. Therefore binary search is said to have a time complexity of O(log n), known as logarithmic time. An O(log n) algorithm is very efficient, since the properties of logarithms mean that as the number of inputs increases, the running time will increase, but the rate of increase slows down very quickly.
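You can verify that arithmetic for yourself in a couple of lines of Python:

```python
import math

users = 2_300_000_000  # Facebook's user count as of December 2018
# A sorted list of n items needs at most about log2(n) binary-search checks.
print(math.ceil(math.log2(users)))  # 32
```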
Figure 4: Big O runtimes of linear and binary search
SORTING You may have noticed that there is one requirement of binary search – the list must be sorted [6]. You cannot run binary search on an unsorted list, as it relies on knowing if your value is higher or lower than the target value. You would have to use linear search if you had an unsorted list – this means that you are limited to O(n) time. Therefore, if you often need to search through a list, it may be useful to use a sorting algorithm to sort it, so that you can then use the much more efficient binary search on it.
Imagine having to sort a hand of playing cards. Most people would probably use the following approach:
1. Take the second card
2. If it is smaller than the first card, move it to the beginning
3. Take the third card and compare it with the first and second card, moving it to the correct location if needed
4. Repeat for the rest of the cards.
This method is called insertion sort [7]. Figure 5 shows an example of insertion sort being applied on an unsorted list:
Figure 5: Example execution of an insertion sort [8]
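Here is a hedged Python sketch of that card-sorting procedure (the function name and sample list are invented for illustration):

```python
def insertion_sort(cards):
    """Sort a list in place, the way most people sort a hand of cards."""
    for i in range(1, len(cards)):
        current = cards[i]
        j = i - 1
        # Shift larger cards one place to the right...
        while j >= 0 and cards[j] > current:
            cards[j + 1] = cards[j]
            j -= 1
        # ...then drop the current card into the gap.
        cards[j + 1] = current
    return cards

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```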
An alternative sorting algorithm is called merge sort. It is what is called a divide and conquer algorithm [10], as it breaks down the problem into multiple sub-problems to solve. The idea of merge sort is as follows:
1. Split the list in half, resulting in 2 sublists
2. Repeat halving the sublists until you have n sublists, each containing only one item (after all, a list with one item is considered sorted)
3. For each pair of sublists (which at this point are only size 1), put the smaller item to the left and larger item to the right, producing sublists of size 2
4. Repeat this with each pair of sublists (which are now size 2)
5. Continue ‘merging’ the sublists to produce new sorted sublists until you are left with only one list, which will be sorted
Figure 6 shows an example of merge sort being applied on an unsorted list:
Figure 6: Example execution of a merge sort [9]
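The same splitting-and-merging idea, as a minimal Python sketch (names are illustrative):

```python
def merge_sort(items):
    """Recursively split the list, then merge the sorted halves."""
    if len(items) <= 1:  # a one-item list is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front item of the two halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])  # one of these two is already empty
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```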
Merge sort may seem unnecessarily complicated, and it is true that it would probably be impractical for humans to use. It would be much more intuitive for us to use insertion sort instead. However, this is different for a computer. Insertion sort has a time complexity of O(n²), also known as quadratic time, while merge sort is O(n log n). This means that merge sort is more efficient than insertion sort, especially for large lists. While easy to program, an insertion sort is usually only used in real-life applications when the expected size of lists is small. A merge sort handles larger lists much more easily but it is much more advanced and complex to implement.
Figure 7: Big O runtimes of insertion and merge sorts
MAZE SOLVING Imagine that you are trapped inside a maze, or one of Hong Kong’s sprawling malls! How would you find your way out? The most trivial (and inefficient) method would be to simply follow along the passages, making random turns until you find your way out. Although you would eventually escape, you can imagine that this approach can be extremely slow. Surprisingly, this unintelligent algorithm does actually have a name – the random mouse algorithm. However, surely there is a better method. One that is commonly known is the wall follower algorithm, also known as the left-hand or right-hand rule. You simply follow the wall with one hand, and you will eventually find your way out of the maze. This approach works because, in a maze whose walls are all connected to the outer boundary, the walls can be conceptually ‘unrolled’ into a straight line. And so, if you think of the maze as a piece of string, it is obvious that walking from one end will eventually get you to the other.
Figure 8: A simple maze rearranged into a straight line
However, a flaw of the wall follower algorithm is that it doesn’t work on all mazes. If the maze has ‘loops’ inside of it (walls that are not connected to the outer wall), the algorithm fails. The following figure shows how you would be stuck running around in circles if you started following a wall in a simple loop:
Figure 9: Examples of the wall follower algorithm
If a loop was much larger and more complicated, it would be hard to realise that you are following one and you would never leave the maze. Luckily, there is another algorithm that doesn’t have this weakness – Trémaux’s algorithm. A simple version of the algorithm was actually described in the Greek myth of Theseus and the Minotaur. In the tale, the architect Daedalus built a labyrinth to contain the Minotaur, a ferocious monster. Theseus, who was to be fed to the Minotaur, was given a plan by Ariadne, the daughter of the King, to escape the labyrinth: “She sent for Daedalus and told him he must show her a way to get out of the Labyrinth, and she sent for Theseus and told him she would bring about his escape if he would promise to take her back to Athens and marry her. She gave him the clue she had got from Daedalus, a ball of thread which he was to fasten at one end to the inside of the door and unwind as he went on. This he did and, certain that he could retrace his steps whenever he chose, he walked boldly into the maze looking for the Minotaur.” [13] In more algorithmic terms, Trémaux’s algorithm can be described as follows:
1. Draw a line to record the path you take (Theseus used the thread to mark this)
2. At a junction, if an unmarked path exists, take it
3. Or else, take the path marked once
4. Never take a path marked twice
5. At a dead end, turn around
This method makes use of backtracking, so whenever you hit a dead end you can try a different path. The ability to go back to intersections and try better routes guarantees that you will find your way out, whether or not the maze has loops. In Figure 10, the green arrows show a decision to take a new path when you meet a marked path (step 2 of our algorithm):
Figure 10: Example of a maze solved using Trémaux’s algorithm [14]
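Trémaux’s full path-marking bookkeeping is fiddly to write out, but its core mark-and-backtrack idea is essentially a depth-first search. Here is a minimal sketch under that simplification, assuming an invented grid encoding where 0 is a passage and 1 is a wall:

```python
def solve_maze(maze, pos, goal, visited=None):
    """Mark each cell we visit, retreat from dead ends, and never
    revisit a marked cell; returns a path from pos to goal."""
    if visited is None:
        visited = set()
    if pos == goal:
        return [pos]
    visited.add(pos)  # 'draw a line' on this cell
    row, col = pos
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = row + dr, col + dc
        if (0 <= r < len(maze) and 0 <= c < len(maze[0])
                and maze[r][c] == 0 and (r, c) not in visited):
            path = solve_maze(maze, (r, c), goal, visited)
            if path:  # a neighbouring cell reached the goal
                return [pos] + path
    return None  # dead end: turn around (backtrack)

maze = [[0, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0]]
print(solve_maze(maze, (0, 0), (3, 3)))
```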
The three maze-solving algorithms covered here are useful for a person inside a maze. There are other methods that may be faster and guarantee a shortest path out; however, they require us to have a full bird’s eye view of the maze. These include dead-end filling and shortest path algorithms such as breadth-first search, Dijkstra’s algorithm, and the A* search algorithm. Those who have done D1 in Further Mathematics may intuitively recognise that the paths in any maze (with no loops) can be pulled and stretched out to resemble a tree in graph theory.
Figure 11: Stretching out a maze to resemble a tree [15]
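As a taste of those bird’s-eye-view methods, the sketch below shows breadth-first search on a maze described as a graph (the graph and all names here are invented for illustration). Because it explores level by level, the first route it finds to the exit uses the fewest passages:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Explore the graph level by level; the first path that reaches
    the goal is a shortest one (fewest edges)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Intersections as nodes, passages as edges.
graph = {'entrance': ['A'], 'A': ['B', 'C'], 'B': ['exit'], 'C': []}
print(bfs_shortest_path(graph, 'entrance', 'exit'))
```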
Maze-solving may seem to have little application to daily life; however, it turns out that the idea of getting from one point to another in a constrained environment (like a maze) is very important in many applications that we use every day. In graph theory, an alternative way of describing a maze would be to have the passages as edges and intersections as nodes. A website that knows how roads are connected – such as Google Maps – can tell you the best way to get from your home to school, and an app that knows your friends – Facebook or Snapchat, for example – can better guess who else you may know, based on the connections between your mutual friends. Even robot vacuums are a good example, as different brands and models use different algorithms. Cheaper robots may use something similar to the random mouse algorithm, simply roaming around, while more expensive models may first map out your room to determine where obstacles are and then move back and forth in a grid-like pattern. The maze-solving algorithms that are used in everyday life may not be as simple as the ones described here, but it is important to understand how something so seemingly trivial can be so important in real-world applications. CONCLUSION Algorithms are important in the computer applications that are everywhere in our lives; however, the concept existed for millennia before computers were invented. I hope that after reading this article, you have gained a deeper knowledge of how algorithms work and how they apply to relatively simple everyday contexts. It is important to understand that each algorithm has its advantages and disadvantages, depending on the situation it is used in. Although it may seem obvious that you should use merge sort over insertion sort due to its higher efficiency, that is not always the case. Sometimes, it is better to use less efficient algorithms, since more efficient ones are usually more complex and prone to bugs and errors: it is worth sacrificing some speed for reliability. In some cases, linear search is used instead of binary search because the lists or databases that need to be searched are not sorted; if items are constantly added or removed from a database, the time required to sort the items again and again for the sake of binary search may not be worth it. Algorithms are beautiful because of how they allow you to solve generalised problems. All you have to do is understand the instructions to tackle a type of problem, and you can solve any that comes your way. No matter what kind of list you are given, you can always sort it with merge sort. No matter what kind of maze you are given, you can always find a way out with Trémaux’s algorithm. These mathematical recipes allow computer systems to do amazing things, enabling Google to find relevant search results, Amazon to speedily deliver packages, and YouTube to recommend videos. As technology continues to dominate our daily lives, as games are played on a computer or tablet rather than with a deck of cards, and consumers are perhaps more likely to shop online than get lost in a shopping mall, algorithms will only become increasingly important in the everyday functioning of society.
BIBLIOGRAPHY
[1] M. Sipser, Introduction to the Theory of Computation, 2nd ed. Boston, MA: Thomson Course Technology, 2006, p. 154.
[2] D. Knuth, “Ancient Babylonian algorithms”, Communications of the ACM, vol. 15, no. 7, pp. 671-677, 1972. Available: 10.1145/361454.361514.
[3] D. Fowler and E. Robson, “Square Root Approximations in Old Babylonian Mathematics: YBC 7289 in Context”, Historia Mathematica, vol. 25, no. 4, pp. 366-378, 1998. Available: 10.1006/hmat.1998.2209.
[4] A. Mohr, Quantum Computing in Complexity Theory and Theory of Computation. 2007.
[5] D. Knuth, “6.1 Sequential Searching,” in The Art of Computer Programming, vol. 3: Sorting and Searching, 2nd ed. Reading, MA: Addison-Wesley, 1998, pp. 396-408.
[6] E. Weisstein, “Binary Search”, MathWorld. [Online]. Available: http://mathworld.wolfram.com/BinarySearch.html.
Can Driverless Cars Be Integrated Into Society? Ayuka Kitaura (Year 12, Gellhorn) http://fortune.com/2016/02/15/driverless-cars-google-lyft/
INTRODUCTION Rapid recent developments in technology have made it possible to introduce driverless cars to society in the near future. In fact, this idea is already in development by well-known car manufacturers such as Lexus, Mercedes, and BMW; there are rumours that Apple is working with BMW to produce a possibly driverless car [1]; Tesla has even tested its cars on UK roads recently [2]. Google has also invested in this idea with the Waymo Self-Driving Car Project, which has also been road-tested. The concept of driverless cars is said to have first appeared back in 1940, displayed in a diorama by General Motors [3]. The predecessors of driverless cars, so-called intelligent vehicles, are already common today. There are different levels of autonomy for cars, from Level 0 (all major systems controlled by a human) to Level 5 (a car completely capable of driving in all situations) [4]. Typical modern cars are at Level 2, with some technically advanced intelligent vehicles at Level 3. Nowadays, engineers are looking into Level 4, where vehicles can drive themselves except in an emergency, which requires some human control. There is no doubt that autonomous cars on the roads are just around the corner. Statistics show that 94 to 96% of car accidents are due to human error [5]; therefore, driverless cars that would not make such mistakes would undoubtedly reduce the number of accidents. However, given the driverless car’s significant influence on society, many debates regarding its impacts and issues are taking place. HOW DO DRIVERLESS CARS WORK? Both driverless cars and intelligent vehicles require the following information: the position, kinematic and dynamic states of the vehicle, its environmental surroundings, the state of the driver and passengers, communication with other cars and the roadside infrastructure (such as traffic lights), and access to digital maps and satellite data [6]. For the first three, cameras and radars are essential. Cameras are mounted on the car at all angles to provide a 360-degree view, identifying the external environment in order to make decisions. Today, it is more common to equip cars with 3D cameras as they produce images with more detail, allowing image sensors to detect surrounding objects, such as cars and pedestrians, more easily.
However, more improvement is required, as poor conditions, such as rain and fog, significantly decrease the clarity of images, increasing the risk of accidents. As well as cameras, two types of sensors are used to detect the car’s surroundings: radar and lidar sensors. Radar sensors detect objects by sending out radio waves to collect distance and speed measurements. Unlike cameras, radar is not affected by the weather. However, current 2D radars have no ability to detect the height of objects; the development of 3D radar is expected to solve this problem and ensure safety. Similarly, lidar sensors are used for observation, using lasers instead of radio waves, and are able to detect objects up to 10 metres away from the car (Fig. 1). The difference is that not only do lidar sensors detect distance, but they also allow for the creation of a 3D, 360-degree map surrounding the car, increasing the level of detail significantly. The drawback is that lidar sensors cost significantly more than radar sensors. However, Waymo has recently managed to cut the cost by almost 90%, from $75,000 per unit to $7,500 [7]. This has made the technology much more accessible, so an increase in the use of lidar sensors is expected. To run the algorithm for the fastest route to a destination, detailed information about the road network is essential. With the aid of satellites, GPS is an extremely helpful tool for navigating the vehicle. However, it is far from perfect: it lacks accuracy due to the number of satellites used, and because GPS maps are not updated often, they do not include all roads. Driverless cars can definitely make use of GPS, but they still require the level of detail that only 3D cameras and lidar sensors can provide, by forming a 3D map of wherever they drive [8]. All in all, although improvements are needed to maximise safety and driving quality, it is definitely possible to produce driverless cars and let them drive on roads in the near future.
Figure 1: object detection with lidar technology
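Both radar and lidar estimate distance from the round-trip time of a reflected pulse. A minimal sketch of that calculation (the constant and names are illustrative, not taken from any real sensor’s software):

```python
SPEED_OF_LIGHT = 3.0e8  # metres per second (approximate)

def distance_from_echo(delay_seconds):
    """Distance to an object from the round-trip delay of a
    reflected radio or laser pulse: d = (c * t) / 2."""
    return SPEED_OF_LIGHT * delay_seconds / 2

# A pulse returning after about 67 nanoseconds implies an object
# roughly 10 metres away.
print(distance_from_echo(67e-9))  # ~10.05 metres
```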
IMPACTS As many engineers believe, the appearance of driverless cars would make our lives much more convenient. Fewer people would be needed in transportation and logistics, so society as a whole would become more productive, and further economic growth could be expected in many countries. However, if driverless cars are introduced, they would significantly affect many people worldwide, so engineers must take their impacts seriously before presenting this technology to the world.
EASING OR WORSENING OF CONGESTION With more people owning cars due to increased affluence in society, many countries suffer from heavy traffic congestion on their roads. It is especially serious in Southeast Asian countries such as Indonesia due to their underdeveloped infrastructure and rapid economic growth. However, the true cause of traffic jams is often not the number of cars on the street. Traffic jams usually happen because of human drivers: when a driver hits their brakes and a driver behind reacts too aggressively, a chain reaction occurs which eventually forms huge traffic jams [9]. Unlike human-driven cars, driverless cars are able to travel at a constant speed and react intelligently and smoothly to the car in front, as their reaction time is negligible. Also, if driverless cars become the norm, all cars will be interconnected by a network for greater safety. This technology can be applied to reduce congestion, as all cars will know which areas are congested and can plan routes that avoid them. However, some argue that the appearance of driverless cars would make congestion worse. They argue that it would allow everyone, including children, the elderly, and disabled people, to travel on the roads without a ‘driver’. Although this would be more convenient for them, the number of cars on the streets would increase by at least 11% [10]. This is a significant increase considering how many cars there already are. The increased number of cars could potentially reach the maximum capacity of the road network and cause traffic jams everywhere, substantially affecting many people and causing huge economic losses. LOGISTICS AND TRANSPORTATION High demand for online shopping is pressuring logistics systems in many countries, and the demand will only continue to grow despite the fact that the systems are close to their capacities. Many drivers have been driven to overwork, leading to tragic accidents caused by sleep deprivation. With the presence of the driverless car, more trucks could be used to deliver goods without any drivers involved. This cuts down the cost of transportation and would enable the demand for online shopping to grow further. However, in contrast to these benefits, people who used to be drivers would suffer unemployment, as the driverless car could render the ability to drive a ‘useless’ skill. Introducing driverless cars could considerably impact the lives of certain people, so solutions to help them must be in place. MORAL ISSUE Another factor to consider with introducing driverless cars is the moral issue. As the AI is in charge of making decisions, it might soon have to make ethical judgements in an emergency. A worldwide survey was carried out on people’s opinions in scenarios where someone’s death was inevitable [11]. In other words, people needed to weigh the value of an individual’s life against others. Researchers found that the decisions vary across countries. For example, people from one group of countries showed a stronger preference for sacrificing older lives to save younger ones compared to other groups. These preferences must somehow be codified into patterns and rules in order for driverless cars to make ethical and rational decisions by themselves, and there will always be different opinions on what is ‘right’. INTERNET AND NETWORK In order for driverless cars to drive safely, transmitting and exchanging information with surrounding vehicles and infrastructure is essential (Fig. 2).
However, the current 4G network is not capable of carrying all the necessary information, nor is it fast enough to transmit it in time [12]. Autonomous cars require much faster communication – information must travel from the sensors to the computer, and the computer must make a decision, in less than 2 milliseconds. The fifth generation of wireless communication, 5G, is expected to strengthen the interconnection between cars with much faster speeds and higher reliability.
Figure 2: Networks surrounding autonomous cars
ELIMINATION OF HUMAN-DRIVEN CARS It is important to note that some people simply enjoy driving cars, and introducing driverless cars may lead to a ban on their hobby. With all cars interconnected in a single huge network, the presence of human-driven cars would simply raise the probability of accidents, as humans are bound to make errors and sometimes drive irrationally. Therefore, this raises the issue of how human-driven cars will be managed if driverless cars become the norm. CONCLUSION For decades, the concept of driverless cars, as seen in sci-fi stories and films, has excited many in anticipation of the technological advancements that the future may hold. As explained in this article, driverless cars are no longer a thing of the distant future; the technology is just around the corner and being developed rapidly by many car manufacturers. The driverless car promises fewer car accidents, less congestion, and more convenience. However, it will also change the structure of transportation systems significantly. Massive unemployment could occur, and the increased number of cars on the roads could make congestion even worse. In spite of its benefits, the driverless car has the potential to negatively impact many people’s lives, so we must tread carefully before we introduce such a radical change to our society.
BIBLIOGRAPHY
[1] Driverless car of the future https://www.alphr.com/cars/1001329/driverless-cars-of-the-future-how-far-away-are-we-from-autonomous-cars
[2] Telegraph https://www.telegraph.co.uk/cars/features/how-do-driverless-cars-work/
[3] History of the autonomous car https://www.titlemax.com/resources/history-of-the-autonomous-car/
[4] Three types of autonomous vehicle sensors https://www.itransition.com/blog/three-types-of-autonomous-vehicle-sensors-in-self-driving-cars
[5] Traffic safety facts https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812115
[6] Intelligent Vehicles https://www.researchgate.net/publication/225898443_Intelligent_Vehicles
[7] LiDAR vs. RADAR https://www.sensorsmag.com/components/lidar-vs-radar
[8] GPS tracking system for autonomous vehicles https://www.sciencedirect.com/science/article/pii/S1110016818301091
[9] How a self-driving car or adaptive cruise control could ease traffic jams https://www.usatoday.com/story/money/2018/07/03/self-driving-reduces-traffic-jams-study-says/741985002/
[10] Influence of current nondrivers on the amount of travel and trip patterns with self-driving vehicles http://umich.edu/~umtriswt/PDF/UMTRI-2015-39.pdf
[11] Moral Machine survey http://moralmachine.mit.edu/
[12] 5G’s important role in autonomous car technology https://www.machinedesign.com/motion-control/5g-s-important-role-autonomous-car-technology
Superbugs and Antibiotics Callum Sanders (Year 8, Shackleton)
NIAID, Micrograph of Methicillin-Resistant Staphylococcus Aureus (MRSA)
WHAT IS A SUPERBUG? Superbugs are bacteria which have developed resistance to antibiotics, and they pose a severe problem. This matters because many infections cannot be treated by the usual antibiotics, and it takes time to find a mixture of antibiotics which will kill the bacteria. As a result, there is a higher chance that the infection could be fatal. Deaths from superbug infections rose to over 700,000 people worldwide in 2018, and research by the UK Review on Antimicrobial Resistance [1] predicted that if no action is taken to neutralise this growing threat, that number could exceed 10 million per year worldwide by 2050, with 30,000 of those deaths estimated to be in America alone. Antibiotics are very efficient at destroying bacteria, but because of the way we use them, bacteria are developing immunity to antibiotics faster than we can create and discover new ones. ANTIBIOTICS Antibiotics usually fall into two categories:
Bactericidal - These antibiotics kill the bacteria by disrupting their cell walls or cell contents. They can stop cell wall synthesis, inhibit bacterial enzymes and prevent protein translation. A common example of this type of antibiotic is penicillin. A penicillin molecule has the chemical formula R-C₉H₁₁N₂O₄S, where R can be different carbon and hydrogen structures. These slightly different molecules are a group of β-lactam antibiotics (antibiotics which contain a β-lactam ring in their molecular structure) and were originally discovered to be produced by a blue-green mould (Penicillium). Its effects had been documented as early as 1897; however, it was not widely recognised until 1928, when Sir Alexander Fleming reported it [2].
Bacteriostatic - These antibiotics do not kill the bacteria outright but stop their reproduction, which allows a patient’s immune system time to fight the infection. This type of antibiotic usually interferes with protein production and DNA replication; however, it can sometimes inhibit other aspects of the cell. An example of these is the tetracyclines. A tetracycline molecule has the chemical formula C₂₂H₂₄N₂O₈, and it originally came from bacteria of the Streptomyces type. Patented in 1953, tetracycline has been sold under the brand name Sumycin and is on the World Health Organisation’s List of Essential Medicines.
WHY ARE SUPERBUGS GAINING RESISTANCE TO ANTIBIOTICS? Bacteria are gaining immunity to antibiotics because of the way we use them. When antibiotics were first discovered, they were used to treat diseases that were deemed incurable and fatal; they were the ultimate tool to vanquish the worst infections and illnesses. However, as time went on, we stopped using antibiotics only for serious cases and started using them for everything. This poses several problems. Firstly, antibiotics don’t just kill ‘bad’ cells: they destroy helpful bacteria that keep infections and inflammation at bay, gut bacteria that help in the processing of food, and even bacteria that help maintain the immune system. Secondly, bacteria that survive the encounter with antibiotics can ‘edit’ their DNA to become resistant, and if not eradicated by the immune system, they can spread this resistance by transferring DNA to other bacterial cells. And if we are constantly using antibiotics for everything, the chances of some bacteria escaping and spreading immunity increase.
OUR FAILED CONTINGENCY PLAN Until recently we had a ‘last resort’ antibiotic used for superbugs called Colistin, a powerful drug against multidrug-resistant bacteria [3]. However, strains of bacteria that have developed resistance to Colistin have appeared in pigs from China, as farmers there had been using this drug on livestock for years, and also because of the increased use of Colistin in response to the rising rates of bacteria resistant to other antibiotics.
Many superbugs didn’t even develop resistance to antibiotics in humans: animals in factory farms live in an environment which makes them prone to infection and illness, and unlike us, the animals don’t have health care, medical services or established hygiene, with many ending up living in their own excrement. Much of our antibiotic supply is used by the meat industry: in the USA, 80% of all antibiotics in use are consumed by farm animals.
HOW RESISTANCE TO ANTIBIOTICS WORKS Bacteria can gain resistance to antibiotics either through the transfer of DNA from bacteria that already have antibiotic resistance, or through a change (mutation) in their DNA that allows them to survive more easily. For example, a genetic mutation can cause proteins to change shape so that certain antibiotics will not bind to them, or create enzymes that destroy antibiotics. One way superbugs protect themselves from antibiotics is through the production of β-lactamases: enzymes which provide resistance to β-lactam antibiotics such as penicillins, cephalosporins, cephamycins and carbapenems by destroying the antibiotics when they enter the cell. These enzymes deactivate the β-lactam antibiotics by breaking open the β-lactam ring within the molecule, effectively neutralising their antibacterial properties [4]. We used Colistin to treat infections caused by some of the bacteria that produce β-lactamases.
Another way antibiotic resistance can be achieved is through efflux pumps, which are transport proteins that remove toxic substances from a cell. Efflux pumps can be specific to one substrate or can pump a whole variety of structurally different compounds, including antibiotics. M. A. Webber & L. J. V. Piddock report that “It has been estimated that 5–10% of all bacterial genes are involved in transport and a large proportion of these encode efflux pumps” [5]. This means that a large part of bacterial DNA is dedicated to genes which provide a potential method of immunity. It is possible that efflux pumps were originally used by bacteria to remove harmful substances from within the cell, and are now also being used to transport a whole range of antibacterial compounds. Resistant bacteria may use more than one of these ways to resist the effects of antibiotics. SUPERBUGS - RECENT DEVELOPMENTS In 2018, The Japan Times reported that a Japanese hospital had 15 cases of patients infected with multi-antibiotic-resistant bacteria in 2016 [6]; eight of those patients died from their infections soon after. Recently, Melbourne researchers discovered a strain of Staphylococcus epidermidis bacteria which is resistant to all known antibiotics [7]. This is a common bacterium found on the skin, which is relatively safe in most cases, usually only going as far as spreading across the surface of the skin and not leading to infection. This strain was discovered to have developed in hospitals in Australia; prescription antibiotics are ineffective against it, and treatment is difficult and complicated. These are just some cases of this very serious looming threat: if we don’t find a way to subdue these quickly evolving superbugs, we could face a massive epidemic of resistant bacteria. A potential solution to this problem is bacteriophages, which are viruses that infect bacteria. Recently this method was used to treat a girl fighting an antibiotic-resistant infection [8]. However, this method is very specific: each bacteriophage will only attack a particular bacterium and its close variants. Doctors would have to pinpoint the exact strain of each type of superbug and search a bank of billions of different types of bacteriophages, before releasing one into the bloodstream and monitoring carefully to make sure it is effective. It may be a very difficult method to employ, as viruses have to be constantly replenished due to their short ‘life spans’ (viruses aren’t considered ‘alive’ as they cannot survive outside a ‘host’ cell), and because of the sheer size of the inventory you would have to keep. However we solve the problem of superbugs, we have to work swiftly; otherwise we may descend back into the early 1900s, when the CDC estimates that 3-4 million people were infected by the measles virus alone each year [9]. However, this time it would not be an outbreak of measles but of antibiotic-resistant bacteria.
BIBLIOGRAPHY
[1] Background | AMR Review. Amr-review.org https://amr-review.org/background.html
[2] Overview of Antimicrobial Therapy | Boundless Microbiology. Courses.lumenlearning.com https://courses.lumenlearning.com/boundless-microbiology/chapter/overview-of-antimicrobial-therapy/
[3] Caniaux I, et al. MCR: modern colistin resistance. Medical Affairs, bioMérieux SA, Marcy L’Etoile, France. https://www.ncbi.nlm.nih.gov/pubmed/27873028
[4] Majiduddin FK et al. (2002). Molecular analysis of Beta-Lactamase structure and function. Department of Biochemistry and Molecular Biology, Baylor College of Medicine, Houston, TX, USA https://www.ncbi.nlm.nih.gov/pubmed/12195735
[5] M. A. Webber, L. J. V. Piddock, The importance of efflux pumps in bacterial antibiotic resistance. Journal of Antimicrobial Chemotherapy, Volume 51, Issue 1, January 2003, Pages 9–11 https://doi.org/10.1093/jac/dkg050
[6] The Japan Times
[7] ABC News https://www.abc.net.au/news/2018-09-04/superbug-strains-resistant-to-all-known-antibiotics-discovered/10198590
[8] NPR https://www.npr.org/sections/health-shots/2019/05/08/719650709/genetically-modified-viruses-help-save-a-patient-with-a-superbug-infection
[9] Measles | History of Measles | CDC https://www.cdc.gov/measles/about/history.html
Ins and Outs of Robotics in the Medical Field Katrina Tse (Year 12, Keller)
https://www.techworld.com/picture-gallery/startups/16-of-best-robotics-startups-in-uk-3654908/
INTRODUCTION Robots are widely used in the field of medicine today. They are integrated into the daily work of medical professionals so well that we almost forget that they are performing tasks in place of humans. According to a report by Credence Research [1], the global medical robotics market was valued at $7.24 billion USD in 2015 and is expected to be valued at over $20 billion USD by 2023. There are many driving factors behind this rapid expansion of the medical robotics market, highlighting the increasing importance of robotics in medicine, as well as its adaptability in this field. Robots are capable of serving a wide range of roles in the medical field; these are the top six uses of robotics in medicine [2]:
1. Telepresence
2. Surgical Assistants
3. Rehabilitation Robots
4. Medical Transportation Robots
5. Sanitation and Disinfection Robots
6. Robotic Prescription Dispensing Systems
In this article, we will delve deeper into the use of robots as surgical assistants and for rehabilitation, but also look at one other use of robotics in medicine which narrowly missed the list, but is far more common and accessible in a layperson’s daily life than any of the above six: robotics in nursing and care. ROBOTIC SURGICAL ASSISTANTS The 21st century has commonly seen robotic surgical assistants in operating theatres across the globe. These can be largely divided into 4 categories, each of which assists human surgeons in a different way. These 4 categories are: Surgeon Waldo, Programmable Automata, Assistive Guide, and Motorised Laparoscopic Tools [3]. Surgeon Waldos, the most widely known category of surgical assistants, are machines which convert a surgeon’s movements into instrument movements. They overcome the physical constraints of human surgeons by having better precision, flexibility, endurance and strength.
Robots are not bound by the anatomical and physiological restrictions that limit humans. They can operate with maximum precision and reach areas that would otherwise be impossible for humans to access. They also allow minimally invasive surgery, which offers many benefits, including a faster recovery process and a much lower chance of infection compared to traditional open surgery involving far larger wounds [4]. One of the breakthrough developments in surgical assistants is the Da Vinci Surgical System [5]. In 2000, it became the first robotic-assisted surgical system to be approved by the Food and Drug Administration (FDA) for general laparoscopic(1) surgery [6]. The system is built from 4 robotic arms and is controlled by a surgeon sitting behind a console. As the surgeon is only capable of controlling two robotic arms simultaneously, attempts have been made to write autonomous programs which allow the remaining two arms to better assist the surgeon without any manual input. Currently, it is also one of the world’s most versatile robotic-assisted surgical systems, capable of assisting in 7 different types of surgery [7].
Figs. 1, 2: The Da Vinci Surgical System
Recently, Senhance (Fig. 3), another robotic-assisted surgical system, which earned the FDA’s approval in 2017, has made further breakthroughs. This system is capable of providing the surgeon sitting behind the console with haptic feedback [8]. Touch is an imperative sense for any surgeon operating on any patient, in any procedure, and the lack of it has always been a major downfall of robotic-assisted surgical systems. Another major breakthrough which this system has provided is its eye-tracking sensor. The system is capable of tracking the surgeon’s eye movements, adjusting the camera as the surgeon moves their eyes, without any manual input [9]. Soft robotics technologies could further facilitate development in this area by increasing flexibility and dexterity. Such developments are in progress for endoscopic procedures [10] and can very likely be extended to surgical assistants in the future.
Fig. 3: The Senhance Surgical System
Aside from having better precision and flexibility, robotic-assisted surgical systems also break the distance barrier - surgeons no longer have to be geographically near the patient to conduct a procedure. This could help in urgent situations where the patient is located in a remote area and is unable to access medical care nearby. Recently, a surgeon used the CorPath System to perform a cardiovascular procedure, inserting a stent into a patient who was 20 miles away from the operating theatre [11]. The ‘Lindbergh Operation’ is another example of a tele-surgical operation: using the ZEUS Robotic Surgical System, a team of surgeons in New York, United States, carried out a laparoscopic cholecystectomy(2) on a patient located in Strasbourg, France [12].
However, it is worth noting that there is one major limitation to remote surgery: it relies on high-speed internet connections to prevent any lag between the surgeon and the robot conducting the procedure on the patient, and there are often worries about the connection breaking in the middle of the procedure. For this reason, there is almost always a qualified surgeon at the operating theatre who can step in if such an event occurs. Programmable automata are considerably restricted in their use for now. This type of surgical assistant is used only for stereotactic(3) surgery, which is generally only seen in tumour treatment. They function based upon a predefined treatment plan, calculating optimal positions and orientations from which to fire energy at tumours. As computers are capable of computing at much higher speeds and precision than humans and can continually track the position of a tumour, treatment delivered by such a robot can be much more precise, causing less damage to the body in general and treating the tumour more effectively. One of the most well known examples is the CyberKnife system [13].
Fig. 4: The CyberKnife System
Limiting damage to the healthy areas near the tumour is particularly crucial for cranial and spinal tumours, as damaging any healthy tissue and cells could bring tremendously undesirable consequences. With the system’s submillimetre accuracy, this damage can be minimised. Typically, conditions such as astrocytomas(4) and trigeminal neuralgias(5) have a better prognosis when treated with the likes of a CyberKnife system [14]. This is because stereotactic surgery is non-invasive; as a result, it avoids many of the complications and risks posed by open surgery. The other 2 types of surgical assistants are generally less discussed, as they are not as powerful. Assistive guides function to reinforce pre-surgical plans and ensure that surgeons do not deviate from them. This minimises the chances of non-optimal treatment being delivered. These robots are generally used for oral and orthopaedic implants. They ensure that human-initiated actions conform to plans made in the preoperative stage. However, the biggest difference between a programmable automaton and an assistive guide is that with an assistive guide, humans are the ones who ultimately execute the action instead of the robot. Motorised laparoscopic tools are a family of tools that bring added flexibility to the traditional straight laparoscopic tools. They also include automated laparoscopic cameras, which can be steered and controlled by the surgeon without a camera-holding assistant.
Fig. 5: An example of a motorised laparoscopic tool
As seen above, a wide array of robotic surgical assistants is used across a large spread of procedures these days. Having discussed what is used inside the operating theatre, it is important to remember that in the world of medicine, a lot goes on outside the operating theatre as well. One of the key things that can lead a patient back to living a normal life is rehabilitation.
REHABILITATION ROBOTS Rehabilitation robots are generally quite restricted in availability to the general population due to their extremely high price tag and, perhaps more importantly, our lack of understanding of how the Central Nervous System (CNS) adapts after an injury; hence these robots are still in a very primitive phase of development. Currently, they are generally split into 2 categories: assistive robots and therapy robots [15].
Fig. 6: The Manus ARM mounted on a wheelchair
Assistive robots are generally long-term substitutes for lost limb movements. Neurological conditions, especially strokes, are the leading cause of disability in the older sector of the population. Despite advancements in post-stroke care, 80% of patients still suffer from long-term reduced manual dexterity, 72% suffer from leg weakness, and 50% of patients with neurological conditions are incapable of performing daily tasks alone [16]. The Manus ARM is a wheelchair-mounted robotic arm which is specifically geared towards the ‘pick and place’ actions of everyday life [17]. It can operate via many different programs, such as manual or voice control. The Manus ARM can help patients greatly in their day-to-day lives, rendering them capable of looking after themselves once again. Further development in this area is generally centred on implementing object-recognition algorithms in the robots and teaching them how to manipulate objects.
Therapy robots, on the other hand, have the end goal of retraining a patient to perform certain actions alone. This is often done through repeatedly practising movements with the aid of the robot. Eventually, the robot can be removed and the patient will be able to carry out the action independently. Research in neuroscience has shown that the CNS is highly adaptable, even after injury, increasing the demand for this family of robots. The first robot to be used as a therapy robot was the MIT Manus. This robot provides bedside therapy and reportedly did not make patients feel uncomfortable during the process. During a therapy session, the person fixes their lower arm and wrist into a brace. The system then prompts the patient, through a video screen, to perform an arm exercise such as connecting the dots. If the patient’s hands do not move, the MIT Manus will move them for the patient. Throughout the process, the MIT Manus adjusts to provide a suitable level of support to facilitate the person’s arm movement and recovery. The robot has been shown to significantly improve a person’s recovery compared to those who did not use it [18].
Fig. 7: The MIT Manus in use
However, if a person cannot make a sufficient recovery and will require a long-term caregiver, who - or what - can do this?
ROBOTICS IN NURSING AND CARE These systems are usually designed to aid patients in their day-to-day lives. An example that most people will know, albeit fictional, is Baymax from Big Hero 6. Although the caregiver robots that exist in real life are not as powerful as Baymax, some are capable of lifting patients in and out of beds and wheelchairs. The Robear is one of Japan’s most recent developments. It is lighter than its predecessors, RIBA and RIBA II, and has multiple features which allow a more flexible and gentle touch for its user. One of these features is an actuator with a very low gear ratio, which allows the joints to move very quickly and precisely. The robot also incorporates three different types of sensors, including a Smart Rubber capacitance-type tactile sensor, which allows for gentle movement even when performing power-intensive tasks such as lifting a patient [19]. This ensures that the patient is not harmed by the robot when being handled.
Fig. 8: The Robear in use
Fig. 9: A typical tactile sensor in robots
Another development which would significantly help those with weakened muscles is an electrically powered wearable mobility aid with sensors that detect whether the wearer is going uphill or downhill. When the device senses that the wearer is going uphill, a booster is automatically activated to help the patient; when it senses that the wearer is going downhill, brakes are applied to reduce the likelihood of the patient falling and injuring themselves. This device would reduce the need for a human caregiver to take patients out for walks and look after them, and would bring an extra degree of freedom to a patient’s life, as they would be able to go out whenever they wanted without fear of injury. Continually conducting research and leading developments in this specialty, Dr. Hirohisa Hirukawa, Director of Japan’s National Institute of Advanced Industrial Science and Technology, believes that robots are a viable solution to a lot of the problems that patients face today [20]. Another useful aspect of robots is that they are not susceptible to the errors that humans might make, such as forgetting to remind a patient that it is time to take his or her medicine. Commands that have been programmed into a robot are far less likely to be handled incorrectly. Robots can also store crucial medical information about a patient, which could be easily accessed in an emergency. As we all know, sometimes a matter of seconds can be the difference between whether a patient lives or dies. Moreover, humans are often put at risk when carrying out these kinds of work. For example, actions such as repeatedly bending down to lift patients can strain the caregiver’s back. We can safely assume that robots would not be affected by these kinds of problems.
However, it is worth noting that there are some issues with using robots in place of human caregivers. One of the biggest issues is that humans are, by nature, social animals [21]. This means that we need interaction with other humans to stay happy and calm. Having robots in place of human nurses and caretakers poses a problem for the patient’s dignity and happiness. As a result, one of the major focuses of development in this area of robotics is to make robots as human-like as possible, especially when interacting with the patient; in 2018, Emoshape built an ‘emotional processing unit’ which enables robots to react to human emotions [22].
Fig. 10: Emoshape’s emotional processing chip
The lack of manpower in the field of nursing and caregiving is the major driver behind developing and implementing robotic caretakers and nurses. Even though they cannot replace humans in all lines of work, they can certainly contribute and ease the burden on some workers. CONCLUSION As seen throughout this article, robotics has a very wide set of uses across the medical field. Robots are more precise, flexible and adaptable than humans, and can often outperform us in many respects. The increasing push from both the government and private sectors to advance technology in the medical field goes to show how crucial robotics is to the development of medicine. The best medical treatments can now often only be administered by robots, simply because we lack the kind of precision they have. From working in the pre-operative stage, to operating during surgical procedures, and finally helping in the post-operative stage or long-term care, robots are everywhere. They have enabled groundbreaking events to occur, and there are still many other types of robots that can be explored outside of this article. With robots, the possibilities are endless, and without a doubt, they will be an even more significant part of the medical field in the future.
GLOSSARY
1. Laparoscopic: a minimally-invasive surgical or diagnostic procedure that uses a flexible endoscope (laparoscope) to view and operate on structures in the abdomen.
2. Cholecystectomy: the removal of the gallbladder.
3. Stereotactic: precisely directing the tip of a delicate instrument (such as a needle) or a beam of radiation in three planes, using coordinates provided by medical imaging, in order to reach a specific point in the body.
4. Astrocytoma: a tumour that stems from cells called astrocytes. These tumours often cannot be removed completely, and are generally found in the Central Nervous System.
5. Trigeminal Neuralgia: a chronic pain condition which affects the trigeminal nerve. This nerve is responsible for bringing sensations from your face to your brain.
BIBLIOGRAPHY
[1] Credence Research. (n.d.). Medical Robotics Market By Product (Market Analysis, Surgical Robots, Rehabilitation Robots, Non-Invasive Radio surgery Robots, Hospital And Pharmacy Robots), By Application (Laparoscopy, Neurology, Orthopedic) - Growth, Share, Opportunities & Competitive Analysis, 2016 - 2023. Retrieved from https://www.credenceresearch.com/report/medical-robotics-market
[2] Top 6 Robotic Applications in Medicine. (n.d.). Retrieved from https://www.asme.org/engineering-topics/articles/bioengineering/top-6-robotic-applications-in-medicine
[3] Staff, R. (2019, March 01). How Robots and AI are Creating the 21st-Century Surgeon. Retrieved from https://www.roboticsbusinessreview.com/health-medical/how-robots-and-ai-are-creating-the-21st-century-surgeon/
[4] Robotic Surgical Assistants. (n.d.). Retrieved from https://harvardmagazine.com/1999/01/right.robot.html
[5] Labios, L. (2017, February 10). Engineers developing advanced robotic systems that will become surgeon’s right hand. Retrieved from https://phys.org/news/2017-02-advanced-robotic-surgeon.html
[6] History of Robotic Surgery and FDA Approval - Robotic Oncology. (n.d.). Retrieved from https://www.roboticoncology.com/history-of-robotic-surgery/
[7] Da Vinci Surgical System. (n.d.). Retrieved from https://www.davincisurgery.com/
[8] Perriello, B. (2016, November 10). TransEnterix bets on haptics for Senhance robot-assisted surgery device. Retrieved from https://www.massdevice.com/transenterix-bets-haptics-senhance-robot-assisted-surgery-device/
[9] Moon, M. (2017, October 14). FDA-approved robot assistant gives surgeons force feedback. Retrieved from https://www.engadget.com/2017/10/15/fda-approves-senhance-robotic-surgical-assistant/
[10] Anonymous. (2017, September 07). Soft Robotics Bring Increased Dexterity to Surgery. Retrieved from https://www.mddionline.com/soft-robotics-bring-increased-dexterity-surgery
[11] Matthews, K. (2019, January 18). Surgical Robots Improve and 5 Providers to Watch in 2019. Retrieved from https://www.roboticsbusinessreview.com/health-medical/5-surgical-robots-2019/
[12] Marescaux, J., Leroy, J., Rubino, F., Smith, M., Vix, M., Simone, M., & Mutter, D. (2002, April). Transcontinental robot-assisted remote telesurgery: Feasibility and potential applications. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1422462/
[13] CyberKnife. (n.d.). Retrieved from https://www.cyberknife.com/
[14] Genesis Healthcare Partners. (n.d.). Genesis CyberKnife. Retrieved from http://www.mygenesishealth.com/treatment-options/genesis-cyberknife/
[15] Reinkensmeyer, D. J. (2018, December 07). Rehabilitation robot. Retrieved from https://www.britannica.com/technology/rehabilitation-robot
[16] Rehabilitation and assistive robotics. (n.d.). Retrieved from https://www.researchgate.net/publication/315470980_Rehabilitation_and_assistive_robotics
[17] Manus ARM. (n.d.). Retrieved from http://robotics.cs.uml.edu/research/interface-manus-arm.php
[18] Thomson, E. A. (2000, June 07). MIT-Manus robot aids physical therapy of stroke victims. Retrieved from http://news.mit.edu/2000/manus-0607
[19] The strong robot with the gentle touch. (n.d.). Retrieved from http://www.riken.jp/en/pr/press/2015/20150223_2/
[20] Hurst, D. (2018, February 06). Japan lays groundwork for boom in robot carers. Retrieved from https://www.theguardian.com/world/2018/feb/06/japan-robots-will-care-for-80-of-elderly-by-2020
[21] Grasso, C. (2018, June 26). Challenges and advantages of robotic nursing care: A social and ethical analysis. Retrieved from https://corporatesocialresponsibilityblog.com/2018/06/26/robotic-nursing-care/
[22] Abrams, M. (2018, June). Making the Emotional Robot. ASME. Retrieved from https://www.asme.org/engineering-topics/articles/robotics/making-the-emotional-robot
To What Extent Does a Person’s Genetics or Environment Influence Their Likelihood of Having Perfect Pitch? Amy Wood (Year 13, Keller)
https://www.psychologicalscience.org/news/releases/perfect-pitch-may-not-be-absolute-after-all.html
The brain is one of the most complex biological systems in the world, and provides all of us with the ability to regulate bodily functions, think and interpret sensory information gained from our environment. The human brain is made up of four main sections: the cerebrum, medulla oblongata, hypothalamus and the cerebellum. The cerebrum is the largest section of the brain and is divided into the left and right hemispheres. Whilst the cerebrum as a whole is responsible for vision, thinking, learning and emotions, the left and right hemispheres are each responsible for different functions, particularly when it comes to learning. The left hemisphere is where the brain processes mathematics, languages, logic, analysis and facts. In contrast, the right hemisphere processes arts, music, rhythm, imagination and intuition [1]. HOW DOES THE BRAIN PROCESS SOUND? We are constantly interpreting sound, and our brains are able to quickly differentiate between background noises and sounds in just 0.05 seconds [2] due to the close proximity of the ear to the brain and the connections provided by the nervous system. Sound reaches the brain once it has completed its journey from the surrounding environment and through our ear. Our ear detects a sound when the particles in the air between the disrupted environment and the ear are displaced from their original, stationary positions, creating a sound wave. Sound waves are longitudinal, meaning that these particles will oscillate parallel to the direction of wave travel.
Once energy has been provided by a disruptor, these particles are forced to move a certain distance away from their original locations and will try to return to their original positions as quickly as possible; however, because they are travelling too fast to stop, they will 'overshoot' by the same distance in the other direction, and will continue to travel back and forth through the air until they run out of energy and return to their original, stationary positions [3].
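To put some numbers on this, the pitch of a note corresponds to the frequency of these oscillations, and each frequency has an associated wavelength through the wave equation v = f × λ. A minimal sketch in Python (added as an illustration, assuming the textbook speed of sound in air of about 343 m/s):

# Relate frequency to wavelength for a sound wave: v = f * wavelength.
# Assumes the standard speed of sound in dry air at 20 degrees C.
SPEED_OF_SOUND_M_PER_S = 343.0

def wavelength_m(frequency_hz: float) -> float:
    # Wavelength (in metres) of a sound wave at the given frequency (Hz).
    return SPEED_OF_SOUND_M_PER_S / frequency_hz

print(f"A4 (440 Hz): {wavelength_m(440):.2f} m")  # about 0.78 m
print(f"C4 (262 Hz): {wavelength_m(262):.2f} m")  # about 1.31 m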
[1] Kupferstein, H. and Rancer, S. Perfect Pitch in the Key of Autism, p. 35
[2] "How quickly does sound reach our brain?" https://blog.medel.com/the-speed-of-hearing/
The human ear comprises the outer, middle and inner ear. Crucial tissues are located within the middle and inner ear, whilst the outer ear acts as a funnel, providing a clear 'pathway' for sound to travel through. Sound waves travel through the auditory canal to the tympanic membrane, more commonly known as the eardrum [4]. These waves cause the tympanic membrane and the surrounding bones to vibrate, passing these vibrations down to the cochlea, located in the inner ear. Sound passes from the cochlea to the neurones of the nervous system, in the form of vibrations, via the fluid-filled semicircular canals [5]. Neurones are specialised nerve cells which transport messages to the brain in the form of electrical impulses. Information is easily collected by neurones due to the large surface area provided by the dendrites, as shown by the diagram above. Messages travel along the neurone by 'jumping' across the Nodes of Ranvier, out through the axon terminal and into the synaptic cleft (the very small space between adjacent neurones), repeatedly, until the information reaches the brain stem, located at the base of the brain. The brain also interprets the volume of a sound by recognising the wave's amplitude, or height (the maximum displacement of particles from the wave's equilibrium line). For those with Perfect Pitch, the brain will also be able to correctly identify the note produced by the sound wave, by recognising its frequency. This could suggest that those with Perfect Pitch have stronger connections between the auditory centre of the brain and the linguistic processing of the left hemisphere, potentially giving those with Perfect Pitch a slightly different brain structure.
WHAT IS PERFECT PITCH?
Perfect or Absolute Pitch is defined as: "the ability to correctly name any musical note that you hear or to sing any musical note correctly without help [6]". Historically, Perfect Pitch was subjective, as many different countries and even cities had their own set of pitches: centuries ago, for example, the length of a flute made in Germany would differ from the length of an English flute, causing the supposedly same pitches to sound different. In 1939, a committee in London decided on a standard set of pitches, which we continue to use today. Therefore, if someone nowadays says that they have Perfect Pitch, this means that they are able to produce the pitches decided by the London Committee [7]. What makes this ability so special? A person with Perfect Pitch has managed to memorise all the notes on a piano (or other instruments), and it is almost certain that they managed this incredible memory feat before the age of six [8]. According to the article The Enigma of Perfect Pitch [9], Dr Mursell declares Perfect Pitch to be of unusual interest for two reasons: "In the first place it is usually, but not always, the sign of musical endowment, of very fine and efficient ear-mindedness. The pressure of musical training, from the tonic sol-fa system onward, is all towards relativity. If a pupil is able, in spite of this, to preserve a considerable power of absolute judgment, it means that he has kept an unusually clear-cut and well-defined apprehension of the tonal system as a whole; and this argues an unusual auditory disposition. In the second place, we see here once more the tremendous value of the perception of tonality in particular, and of the tonal environment in general, for the efficient operation of the musical mind. [10]"
[3] Powell, J. How Music Works, pp. 27-28
[4] "How are the tissues of the ear protected?" https://www.myvmc.com/anatomy/ear/
[5] "How does the brain interpret sound?" https://www.webmd.com/cold-and-flu/ear-infection/picture-of-the-ear#1
[6] "Definition of Perfect Pitch" https://www.merriam-webster.com/dictionary/perfect%20pitch
[7] Powell, J. How Music Works, p. 10
[8] Powell, J. How Music Works, p. 12
[9] Schirrmann, C. F. "The Enigma of Perfect Pitch", p. 33
[10] Mursell, J. L. Principles of Musical Education, p. 21
Whilst those with Perfect Pitch are able to identify notes from a variety of different sources, there are some factors which may increase the time taken for someone with Perfect Pitch to identify a note. A test carried out by Thomas L. Durham and Michael H. Stevens [11] found that participants with Perfect Pitch came within one semitone of the correct note 86% of the time, whilst participants without Perfect Pitch came within one semitone only 26% of the time, making the Perfect Pitch participants over three times more accurate. Despite this, the results of studies by Baird and von Kries showed that the timbre of an instrument can affect the accuracy of someone with Perfect Pitch (Fig 3): a person with Perfect Pitch was accurate nine percent more often when guessing a pitch played on a piano than one produced by a pure tone.
Figure 3: How often notes are correctly identified from a variety of sources
When the London Committee set their universal notes in 1939, these pitches were decided by determining their fundamental frequencies. Alongside its fundamental frequency, each note is made up of a combination of harmonics. Harmonics are a series of multiples of the fundamental frequency which vibrate simultaneously with it; for example, the harmonics that accompany the fundamental frequency of an A1 (110Hz) are 220Hz, 330Hz, 440Hz, and so on. The combination of harmonics involved in the note middle C produced on a violin predominantly features the fundamental frequency, accompanied by the second, fourth and eighth harmonics. In contrast, the same middle C played on a flute predominantly features the second harmonic, backed up by the fundamental and third harmonics [12], and these different combinations cause the same notes to have different timbres on different instruments. People with Perfect Pitch will be able to identify notes most easily on the instruments they themselves play, because they have been exposed to and become used to the specific harmonics involved in each note. If a flautist then hears an E being played on a guitar string, the flautist's brain may take longer to analyse the difference in harmonics, even though the same note is being played [13]. Whilst those with Perfect Pitch are able to correctly reproduce a note without reference, Durham and Stevens's experiment found that Perfect Pitch participants found some pitches easier to identify than others. Figure 4 demonstrates that participants were able to identify the notes C, G and D most easily. This is interesting because G is the dominant of C, and D is the secondary dominant of C (and the dominant of G). This could be because the keys of C, G and D are commonly used, implying that the participants were likely to have had greater exposure to these notes and were consequently able to identify them faster.
Figure 4: How often specific notes are identified correctly
One study into Perfect Pitch reports that only 1 in 10,000 people have Perfect Pitch [14]; however, studies produce differing statistics as to the actual figure. Many people may have Perfect Pitch but are unaware of it due to a lack of musical training or knowledge of its existence, making it difficult to record the actual number of people with it. Presently, musical psychologists, scientists and musicians are unsure whether Perfect Pitch is caused by genetics or influenced by the environment a person grows up in. For many of those studying Perfect Pitch, "the debate over whether the strengths and weaknesses of people are the result of nature or nurture [has raged], and somewhat continues to rage on between scholars and lay people alike [15]", so what does the current evidence show?
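The harmonic series described above is straightforward to generate, since every harmonic is an integer multiple of the fundamental. A short Python sketch (an illustration added here, not part of the original article):

# Harmonics are integer multiples of a note's fundamental frequency.
def harmonics(fundamental_hz, count):
    # Return the fundamental and its first (count - 1) harmonics.
    return [n * fundamental_hz for n in range(1, count + 1)]

print(harmonics(110, 4))  # A1: [110, 220, 330, 440], as described in the text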
[11] Durham, T. L. and Stevens, M. H. "Perfect Pitch Revisited", pp. 32-35
[12] Powell, J. How Music Works, p. 45
[13] Powell, J. How Music Works, pp. 39-47
[14] Sacks, O., Schlaug, G., Jäncke, L., Huang, Y. and Steinmetz, H. "Musical Ability", Science, New Series, pp. 621-622
[15] "The Nature vs. Nurture Debate" https://www.medicinenet.com/nature_vs_nurture_theory_genes_or_environment/article.htm#what_is_the_nature_vs_nurture_who_created_the_theory
CURRENT EVIDENCE FOR THE NATURE ARGUMENT
Some scientists believe that Perfect Pitch may be caused by a gene; in the 20th century, the American geneticist Dr Madge Macklin suggested not only that Perfect Pitch is inherited, but that it is carried by a recessive gene [16]. Being caused by a recessive gene would help explain why the number of people with Perfect Pitch remains so low, as a person would have to inherit two copies of the Perfect Pitch allele in order to express it (they must be homozygous recessive). Although the quest to find a 'Perfect Pitch gene' continues, no such gene has been found to date. Despite this, many people who have Perfect Pitch have family members who also possess it.
https://hekint.org/2017/01/28/madge-thurlow-macklin-medical-genetics/
In 1995, the neurologists Gottfried Schlaug and Helmuth Steinmetz of Heinrich Heine University investigated whether Perfect Pitch was caused by physical differences in the brain. They took brain scans of 30 musicians (11 of whom had Perfect Pitch) and 30 non-musicians, using magnetic resonance imaging. The neurologists made two discoveries from their research: firstly, they found that the planum temporale, a region in the middle of the cerebrum, was much larger in the left hemisphere than in the right hemisphere for both groups. Their second discovery was that the planum temporale in the left hemisphere of the professional musicians' brains was nearly double the size of that of the non-musicians [17]. This suggests that in musicians the region of the brain in which they process music is larger, giving them a more refined and developed planum temporale than the non-musicians. Alternatively, Henny Kupferstein, who identifies as an autistic scholar, and Susan Rancer, a music therapist, carried out their own investigations into those who had Perfect Pitch. They "tested musically trained musicians, many with PhDs, alongside pre-verbal, non-verbal and non-musically trained individuals for Absolute Pitch. Not surprisingly, years of training did not correlate with Absolute Pitch, meaning that those with college degrees in music still did not have Absolute Pitch. In fact, less than half of the musically trained individuals had it, yet with our autistic subjects, all except for one (97%) had Absolute Pitch, whether they were verbal or not [18]." Autism Spectrum Disorder is the name for a range of similar conditions that affect a person's social interaction, communication, interests and behaviour. Autism is a lifelong condition and is thought to be a result of complex genetics, with cases of autism found to 'run in families' [19]. Whilst an autism-causing gene has yet to be identified, the discovery of either an autism gene or a gene causing Perfect Pitch would allow scientists to understand whether there are similarities between the genetics of autism and of Perfect Pitch, and whether these similarities explain the high correlation between the two. It should be noted, however, that many autistic people tend to have a strong talent specifically limited to one area, and Perfect Pitch is only one of the many areas in which autistic people tend to thrive.
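One way to see why Macklin's recessive-gene suggestion is compatible with the trait's rarity is a Hardy-Weinberg estimate (an illustration added here under the textbook assumptions of random mating and no selection; it is not a calculation from the article): if roughly 1 in 10,000 people express Perfect Pitch and expression requires two copies of the allele, the allele itself is far more common than the trait.

import math

# Hardy-Weinberg sketch: if Perfect Pitch requires two recessive alleles (aa),
# a trait frequency of q^2 = 1/10,000 implies an allele frequency q = 0.01.
trait_frequency = 1 / 10_000
q = math.sqrt(trait_frequency)  # recessive allele frequency: 0.01
p = 1 - q                       # dominant allele frequency: 0.99
carriers = 2 * p * q            # unexpressing heterozygous carriers (Aa)

print(f"allele frequency q = {q:.3f}")            # 0.010
print(f"carrier frequency 2pq = {carriers:.2%}")  # about 2% carry it unexpressed

Under these assumptions, around 2% of people would carry the allele without expressing it, which is consistent with a rare trait that nonetheless appears to run in families.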
Alongside their research into the number of people with Perfect Pitch, Henny believes that people with Perfect Pitch demonstrate unique behaviours, including:
• When playing an instrument, he or she tries to match a certain melody to his or her particular instrument
• The person gets easily annoyed when others sing along to a song, especially in confined spaces such as cars
• The person has a gift for impersonation, or an ability to produce dead-on "pitch-perfect" imitations of accents, whether regional or foreign
• The person habitually rushes through music whilst playing
• The person is able to play a piece with his or her fingers without an instrument being present [20]
Aside from the behaviours listed above, Henny believes that people with Perfect Pitch demonstrate further unique behaviours depending on whether their Perfect Pitch is more left-brained or right-brained.
[16] Durham, T. L. and Stevens, M. H. "Perfect Pitch Revisited", pp. 32-35
[17] Nowak, R. "Brain Centre Linked to Perfect Pitch", Science, New Series, p. 616
[18] Kupferstein, H. and Rancer, S. Perfect Pitch in the Key of Autism, pp. 36-37
[19] "What is autism?" https://www.nhs.uk/conditions/autism/causes/
[20] Kupferstein, H. and Rancer, S. Perfect Pitch in the Key of Autism, pp. 6-7
Henny Kupferstein has her own Variations Theory, stating "Absolute pitch presents itself in two stark variations, with a spectacular middle ground." Henny identifies these variations as left-brain absolute pitch (LBAP) and right-brain absolute pitch (RBAP) [21], and tests people for left- and right-brain Absolute Pitch using a table of characteristic behaviours.
EVIDENCE FOR ENVIRONMENTAL FACTORS (NURTURE)
The number of people with Perfect Pitch in countries such as the United States of America and the United Kingdom is estimated to be as low as 1 in 10,000 people; however, the number of people with Perfect Pitch in countries such as China and Vietnam is much higher. Researchers believe that this is due to the languages spoken. Both Chinese and Vietnamese are tonal languages: producing a word in these languages is a cross between singing and speaking. The pitch at which you 'sing' a word in a language like Mandarin is vital to communication, as each word has several different unrelated meanings depending on its pitch. In Mandarin, the word mā (said at a constant pitch in an almost sung manner) means 'mother', whereas mǎ (pronounced by falling in pitch and then rising) means 'horse'. As a result, those who speak tonal languages pay much more attention to pitch from a young age in order to differentiate sounds and interpret meanings. In contrast, as English speakers usually do not need to worry about the way in which they pronounce a word, most English speakers will not pay attention to pitch, and will therefore be much less likely to acquire Perfect Pitch than their tonal-language-speaking counterparts [22].
[21] Kupferstein, H. and Rancer, S. Perfect Pitch in the Key of Autism, pp. 36-37
[22] Powell, J. How Music Works, p. 13
Nevertheless, it is important to take into account that countries such as China will naturally have a much larger number of citizens with Perfect Pitch simply because their populations are much greater than those of many other countries.
SURVEY RESULTS
In order to further investigate whether Perfect Pitch is influenced by genetics or by a person's upbringing, I carried out an online survey, which consisted of the following questions:
1. Do you play a musical instrument? If so, what instrument(s) do you play and how long have you been playing them? (This can include singing)
2. Do you use a tuner to tune your instrument?
3. Perfect pitch is the ability to identify a pitch without using a reference note (for example, you may hear a sound on the MTR and identify it as an A). Relative pitch is the ability to identify a pitch using a reference note (you may be told what an F sounds like and are then able to work out that the sound on the MTR is an A). Do you have perfect pitch or relative pitch?
4. If you have perfect pitch or relative pitch, when did you first realise this? (For example, I first realised this in primary school when I heard a song I liked on the radio and began playing it on the piano without looking at a score or chord chart.)
5. If you have perfect/relative pitch, do you find it harder to identify a pitch on an instrument you don't play? (For example, it takes you longer to identify it)
6. If you have relative pitch and are told the key of a piece and how the first note in the key sounds, how do you hear a piece of music/figure out a pitch? (For example, you may be told the piece is in A major and you are told what an A sounds like)
7. If you have perfect pitch, how do you hear a piece of music? (For example, does your brain identify every single note in the piece? Do you try to work out the key of the piece?) Please go into as much depth as possible
8. Does anyone else in your family play a musical instrument and/or have perfect pitch?
9. Do you speak a tonal language? (E.g. Mandarin, Cantonese, Vietnamese or Thai)
10. Do you consider mathematics to be one of your stronger subjects?
11. Finally, if you have perfect pitch, do you find it easier to identify sharp or flat keys?
In total, 114 responses from participants with Perfect Pitch, Relative Pitch and 'no' pitch were recorded in order to allow comparisons to be made between the groups. The participants were mostly recorded at school; however, being in an international school has allowed me to collect a range of data from participants with various cultural and ethnic heritages. The survey questions allowed data on both environmental and possible genetic factors to be collected; the linguistic and mathematics-based questions were included in order to further investigate whether there is a correlation between being linguistically or mathematically inclined and having either Perfect or Relative Pitch.
PERFECT PITCH RESPONSES
• 30/114 participants had Perfect Pitch
• 21/22* had been singing/playing for more than 5 years, with 15 participants under 18 years of age
• 12/30 had a family member with Perfect Pitch (1 doesn't sing/play)
• 5/30 had a family member with Relative Pitch
• 23/30 spoke a tonal language (another participant was fluent in French)
• 22/26* did not use a tuner
• 19/30 said they were good at mathematics
*Not all students provided a response to the question.
Interestingly, 95% of the Perfect Pitch participants have played an instrument or sung for five or more years. As most of the participants are school students under the age of 18, playing for at least five years makes up quite a large segment of their lives. This supports the theory that being brought up in a musical environment increases the likelihood of developing Perfect Pitch, and also reinforces the suggestion that if Perfect Pitch is
developed, it must be achieved at a very young age. Additionally, the high number of participants recorded in the survey who speak a tonal language and also have Perfect Pitch suggests a strong association between speaking a tonal language and having Perfect Pitch. Most of the participants live in Hong Kong and belong to Asian ethnicities, which could support the theory that Perfect Pitch is much more widespread in Asia than it is in the English-speaking world. The high number of participants who do not use a tuner is not surprising, as those with Perfect Pitch tend to have a greater awareness of and sensitivity to sound. Whilst the aforementioned factors indicate that the environment in which a person grows up can influence their likelihood of having Perfect Pitch, the results also indicate that a person's genetics could increase their likelihood of having Perfect Pitch too. The chart below shows that 57% of participants have a family member who had either Perfect or Relative Pitch.
Such high numbers could potentially support Macklin's theory that Perfect Pitch is in fact inherited. The idea that the "Perfect Pitch gene" is recessive is supported by the survey results, which found that 40% of participants have another family member who also had Perfect Pitch. If the trait were caused by a dominant allele, the number of participants with a family member who has Perfect Pitch would have been much higher, as the characteristic would be shown regardless of whether the person's alleles were homozygous dominant or heterozygous. However, if it is a recessive allele, those with Perfect Pitch must inherit two copies of the Perfect Pitch allele in order for it to be expressed. The results showed that almost two thirds of participants with Perfect Pitch reported being good at mathematics. It is impossible to say that there is a causal relationship between being good at mathematics and having Perfect Pitch, as some participants did not feel they were mathematically inclined at all, or said that mathematics was neither their strongest nor their weakest subject. Nevertheless, there could be a correlation between the type of Perfect Pitch and mathematical ability, as problem solving is normally processed in the left hemisphere of the cerebrum. The participants who reported being good at mathematics may support Henny Kupferstein's theory, which treats Perfect Pitch as being on a spectrum, unique in each person who has it depending on whether they are more dominant in the left or right hemisphere. Kupferstein's theory is further supported by the individual responses to question seven of my survey. A couple of participants wrote that they would remember the notes in a piece by remembering the intervals between each note, rather than remembering the note names. Many participants would work out the key of the piece first and focus on the melody line, saying that they would instantly know what a note is, as their brain would "just tell them the note name" or they could "feel the key of the piece". These responses could disprove the theory that Perfect Pitch can be taught, because whilst it is possible to practise aural skills every day and relate intervals to songs, it is impossible to develop a distinct 'feeling' for something and be able to instantly identify a key or note in the same way as those with Perfect Pitch do. Interestingly, many of the participants wrote that they would be able to remember a piece of music entirely and later sing or play it again on their respective instruments, supporting Kupferstein's theory that Right-Brained
Perfect Pitch participants can have a permanent aural memory. Finally, one participant wrote that "each piece has a different colour depending on the key and the tempo." This suggests that the participant probably also has synaesthesia. Synaesthesia is another phenomenon, whereby those who have it 'hear colours' when listening to music [23]; for example, they may report that the exposition of a Mozart sonata is green whilst the development section is red. Little is known about synaesthesia or what causes it; however, it is possible to have both Perfect Pitch and synaesthesia.
RELATIVE PITCH RESPONSES
• 38/114 have Relative Pitch, with 29 participants under 18 years of age
• 24/29* participants have been singing/playing for more than 5 years
• 10/38 participants have a family member with Perfect Pitch
• 15/38 participants have a family member with Relative Pitch
• 24/38 participants speak a tonal language
• 12/38 do not use a tuner
• 19/33* feel they are good at mathematics
*Not all participants responded to these questions.
83% of the Relative Pitch participants have played an instrument or sung for five or more years. This suggests a strong correlation between musical training and both Perfect and Relative Pitch, supporting existing theories which claim that people can develop Perfect and Relative Pitch through intensive training; the high number of Relative Pitch participants who have played for more than five years and are under 18 supports this. An additional 12% of respondents with Perfect Pitch have played an instrument for at least five years compared with the Relative Pitch participants, highlighting that more of the Perfect Pitch participants were exposed to music for a longer period of time than their Relative Pitch counterparts, and were therefore more likely to have notes stored permanently in their long-term memory. More than half of the participants spoke a tonal language, further emphasising the potential correlation between speaking a tonal language and having Perfect or Relative Pitch. Unlike the Perfect Pitch participants, more of the Relative Pitch participants used a tuner, which is likely to be because, although they are able to work out what notes are being played, they do not always hear small differences in tuning like the Perfect Pitch participants do. In response to question six, 18 participants wrote that they use the tonic of the key and identify notes by using intervals. Many of them used the tonic sol-fa system, an approach in stark contrast to that of the Perfect Pitch participants, showing a potential difference in cognitive approach to hearing and interpreting sounds. 65% of the participants who had Relative Pitch reported having family members with either Perfect or Relative Pitch, as shown by the chart below:
[23] "Synaesthesia and Music Perception" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5618987/
Despite the influence of the aforementioned environmental factors, it is interesting to see that nearly two thirds of participants with Relative Pitch have another family member with either Relative Pitch or Perfect Pitch. Whilst this statistic is quite high, it is low enough to further support Macklin's recessive gene theory. It is impossible to determine from the Relative Pitch participants' results whether there is a causal relationship between mathematical ability and Relative Pitch; however, it could indicate whether a participant is more left- or right-brain inclined. Finally, as over a third of participants do not have family members with either Perfect or Relative Pitch, it is evidently possible to develop pitch recognition without obvious genetic inheritance, though this is not always guaranteed through musical training.
GROUP THREE RESPONSES
• 46/114 participants do not have Perfect or Relative Pitch
• 16/46 have been singing/playing a musical instrument for five years or more
• 11/46 have a family member with Perfect Pitch
• 8/46 have a family member with Relative Pitch
• 39/46 speak a tonal language (another participant was fluent in Spanish)
• 5/16* do not use a tuner
• 15/41* feel they are good at mathematics
*Not all participants responded to these questions.
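For transparency, the comparative percentages used in the analysis below can be recomputed directly from the counts reported for the three groups (a bookkeeping sketch in Python, added for illustration; the counts are those listed above):

# Reproduce the headline survey percentages from the reported counts.
groups = {
    "Perfect Pitch":  {"played_5yrs": (21, 22), "family_pp": (12, 30)},
    "Relative Pitch": {"played_5yrs": (24, 29), "family_pp": (10, 38)},
    "Group Three":    {"played_5yrs": (16, 46), "family_pp": (11, 46)},
}

for name, g in groups.items():
    played_num, played_den = g["played_5yrs"]
    family_num, family_den = g["family_pp"]
    print(f"{name}: {played_num / played_den:.0%} played/sang 5+ years, "
          f"{family_num / family_den:.0%} have a family member with Perfect Pitch")
# Perfect Pitch: 95% and 40%; Relative Pitch: 83% and 26%; Group Three: 35% and 24%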
Whilst 95% of the Perfect Pitch participants have been playing an instrument or singing for at least five years, this was the case for only 35% of Group Three respondents. This supports the theory that musical training, particularly from a young age, is likely to lead to pitch development. Additionally, a further 29% of the Perfect Pitch participants felt that they were good at mathematics compared with the Group Three participants. However, all but seven of the Group Three participants spoke a tonal language, suggesting that although speaking a tonal language can influence the acquisition of Perfect or Relative Pitch, linguistic background alone does not guarantee it. Interestingly, the Perfect Pitch participants were just as likely to have a family member with Relative Pitch as their Group Three counterparts (17%), strongly implying that Relative Pitch can be taught. In contrast, 16% more of the Perfect Pitch participants had another family member with Perfect Pitch than their Group Three counterparts, further implying that Perfect Pitch cannot be taught and that Macklin's recessive gene could exist and play a part in developing Perfect Pitch.
DISCUSSION
Carrying out the online survey has allowed me to further understand how people interpret music and to draw comparisons between the participants. However, the survey results have some limitations for the following reasons:
1. Previous studies state that only 1 in 10,000 people have Perfect Pitch, yet over a quarter of the participants reported having Perfect Pitch
2. Most of the participants live in Hong Kong and speak a tonal language
3. Many of the participants have musical training and are actively involved in various musical ensembles
4. Mathematics is viewed as a highly important subject in Hong Kong and many students receive extra tuition to further develop their mathematical ability
5. There is an uneven distribution of participants from Asian countries and Western countries
In response to the unusually high number of participants with Perfect Pitch, I tested some participants from this sample group by asking them to identify eight pitches played on a piano. All of these participants correctly identified all of the pitches, so there were no grounds for altering my statistics for the Perfect Pitch participants; it was impossible for me to test each participant, as some of them lived overseas. In addition, I tested a small sample of Relative Pitch participants by giving them a named reference note and testing them on pitch recognition, and was able to slightly reduce the number of participants with Relative Pitch, as not all of them could correctly identify more than half of the pitches. Secondly, the uneven number of participants from Hong Kong in comparison with Western countries such as England or Australia created an environmental bias within my survey. Most participants spoke a tonal language, limiting my ability to draw conclusions about the link between linguistic background and Perfect Pitch. Additionally, a large number of the student responses recorded from Harrow International School showed active involvement in musical training and school ensembles, whereas many of the participants outside of school did not have much musical training or any active involvement in music. Finally, many of the participants outside of school were adults, meaning that, unlike the Harrow students, they were not involved in extra mathematics tuition or using maths unless it was required as part of their degree or profession.
CONCLUSION
In conclusion, although everyone processes music in the right hemisphere of the cerebrum, musically trained musicians have a planum temporale almost double the size of that of non-musicians, suggesting that those with Perfect or Relative Pitch have different anatomical structures in their brains from non-musicians. Additionally, Perfect Pitch is unique to each individual who possesses it, and the way it manifests is determined by their inclination towards being either more left- or right-brained, as explained by Kupferstein's theory and further highlighted by the survey results. Whilst no particular gene has yet been discovered or linked to Perfect Pitch, genetics are likely to play a part in a person developing Perfect Pitch, as the results suggest that Macklin's proposed recessive gene may indeed be the cause of inherited Relative or Perfect Pitch. These factors, combined with the aforementioned responses from the Perfect Pitch participants, suggest that Perfect Pitch cannot be taught: a person must already display specific anatomical differences, and environmental factors, such as musical training, may then be needed for them to realise that they have Perfect Pitch. Nevertheless, environmental factors, such as exposure to music and tonal languages from a young age, may encourage people to develop a stronger ear and may lead to the acquisition or enhancement of Relative Pitch. This is shown by the number of Relative Pitch participants with prolonged exposure to music and musical training.
However, if a person wishes to acquire Relative Pitch, they must begin their musical training at a very young age. The results show that, as 95% of the Perfect Pitch participants had played an instrument for at least five years and most of them were under 18, they were likely to have remembered the sound of each note from a very young age.
An Investigation Into Designing A Low Cost and Biodegradable Sanitary Napkin for Women in Rural China Glory Kuk (Year 13, Keller)
https://ph.ucla.edu/news/magazine/2018/springsummer/article/delivering-women-rural-china
ABSTRACT
Menstrual hygiene problems are often neglected in developing countries due to a lack of education and financial means, and disposable menstrual products pose huge environmental risks. This study focuses on rural China, although the project was conducted in Hong Kong. It aims to target these problems from a materials perspective: using absorbent biodegradable materials, creating a homemade, starch-based plastic and looking into reusable, waterproof fabrics. The biodegradable plastic created was assessed in terms of biodegradability and its response to exposure to media such as air and water; absorbency, durability, comfort and drying time were also measured for a range of fabrics. The results can be used to set standards for future studies related to menstrual hygiene.
INTRODUCTION
1. Affordability and Sustainability
There is a lack of attention to menstrual hygiene in developing countries due to its taboo nature [2]. Studies have found that inadequate menstrual hygiene can result in reproductive tract infections [1], and over 90% of adolescent girls in rural areas are using old cloths [1]. Governments have shown increasing awareness of the subject, but there is still no generalised characterisation of the types of materials that should be used, or of the properties to be measured, for this application.
https://www.indiamart.com/proddetail/sanitary-nakin-8005655933.html
With an estimated 13 menstrual cycles in a year and an average usage of 11 napkins per cycle, the average woman needs around 4,500 sanitary napkins in her lifetime [3]. Although absorbent pads are considered normal in high-income countries, a large study in India showed that only 12% of women use commercial menstrual products, and over 70% of women stated that cost was a major barrier [1]. Additionally, each sanitary napkin contains around 2g of plastic, which takes 500-800 years to biodegrade [4]. This poses a huge environmental risk as disposed menstrual products accumulate over time. Reusable products would be ideal to solve both the affordability and sustainability problems. Products such as menstrual cups are commonly used in the United States, but some regions (including China) view invasive absorbents as culturally unacceptable, so sanitary napkins which are easy to make can target both sustainability and cost. The specific area of research is rural China, as it is near Hong Kong; raw materials obtained in this project will therefore apply to the situation there, where people can acquire the same materials at a lower price. This study aims to target the affordability problem by searching for cheap, absorbent materials that can also
easily be bought in rural China. Creating biodegradable plastics or using cheap, water-resistant fabrics that can be reused targets the sustainability problem.
2. Characterisation of Fabrics
Fabrics can be categorised in terms of style, utility, durability and product production [6], as well as air permeability, heat transmission, tensile strength, the coefficient of friction and so on, but this study will test only the four most important characteristics of a sanitary napkin: absorption, durability, comfort and drying time.
Absorption is defined as the ability of a fabric to take in moisture. There are different methods to test the absorbency of a fabric: AATCC/ASTM Test Method TS-018 measures the time taken to absorb a drop of water [7]; Tang, Kan and Fan [8] suggest vertical wicking as a measurement of wettability; and Beskisiz, Ucar and Demir [9] use water absorption values (the ratio of wet mass over dry mass) and drying time to determine the effectiveness of super absorbent fibres.
Due to frequent body movement, durability is essential. The properties determining textile durability include resilience, tensile strength and abrasion resistance [14]. In this study, a tensile strength test will be performed using a Benchtop tester.
Comfort is also a fundamental need, though its subjectiveness makes it difficult to measure. Discomfort is commonly described in terms of prickling, itchiness, heat and cold [10]. Song lists the various aspects of comfort in clothing as thermal comfort, sensorial comfort and body movement comfort [10]. This study aims to measure body movement comfort in terms of fabric stiffness, meeting the sensorial comfort criteria. According to ASTM Standard D123 (2003), terms that are crucial for describing fabric handle include flexibility, compressibility and surface friction; this study will use the cantilever principle to measure the bending stiffness of the fabrics collected [11].
Drying time is measured for the reusable aspect of the napkin. No specific method was followed; dryness is 'measured' by the touch of the hand.
BIODEGRADABLE PLASTIC
PREPARATION OF HOMEMADE PLASTIC
1. Materials used
Glycerol, vinegar, corn starch and tap water.
2. Preparation of Mixture
Starch is composed of amylose (a long, helical polymer) and amylopectin (a short, branched polymer), whose branching results in a weak, brittle plastic. To enhance the plastic's properties, vinegar breaks up the branches of amylopectin through acid hydrolysis, whilst glycerol acts as a plasticizer, a lubricant at the molecular level [5]; the more glycerol is added, the more flexible the plastic becomes. After trials in which the plastic was too sticky or cracked too easily, the final recipe is below:
• Corn starch – 15ml
• Water – 30ml
• Glycerol – 15ml
• Vinegar – 10ml
Figure 2: Example of cracked samples
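Because the recipe is specified by volume, it scales linearly with the number of napkins; a small Python sketch (added for illustration, reusing the glycerol price of HK$0.176/ml quoted later in the discussion) estimates batch volumes and the glycerol cost per napkin:

# Scale the final recipe (volumes per napkin, in ml) and estimate the
# glycerol cost, the dominant ingredient cost identified in the discussion.
RECIPE_ML = {"corn starch": 15, "water": 30, "glycerol": 15, "vinegar": 10}
GLYCEROL_HKD_PER_ML = 0.176  # price quoted in the discussion section

def batch_volumes(napkins):
    # Ingredient volumes (ml) needed for the given number of napkins.
    return {name: ml * napkins for name, ml in RECIPE_ML.items()}

print(batch_volumes(10))  # e.g. 150 ml corn starch, 300 ml water, ...
print(f"glycerol cost per napkin: HK${RECIPE_ML['glycerol'] * GLYCEROL_HKD_PER_ML:.2f}")  # HK$2.64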
3. Process
Apparatus:
• Large pot
• Stove
• 100ml measuring cylinder
• Ruler (to smooth the surface)
• Spoon (to stir the mixture)
• Baking paper/Tracing paper
Figure 3: General apparatus used
The ingredients are blended by hand at room temperature and heated in a pot on a medium setting (around 300ºC) until the mixture starts bubbling. The mixture will slowly start to gelatinise; at this stage, constant stirring is crucial, otherwise the mixture will burn or become uneven. Once all of the liquid has turned into a gel, the mixture is poured onto a baking sheet and the surface is smoothed with a ruler. The sample is left to dry and harden completely for 24 hours at room temperature.
Figure 4: Sample’s geletanisation process
4. Experimental Techniques
The sample is heated and cooled using constant temperature settings; the pot and stove are cleaned thoroughly to prevent contamination, and the surface of the mixture is smoothed to ensure constant thickness.
CHARACTERISATION METHODS OF PLASTIC
1. Biodegradability – Agar Plate Test
The samples are cut into 3 equal small cubes and left on an agar plate for a week. Bacteria are added by scraping a cotton bud across the table surface 10 times and then scraping it onto the agar plate another 10 times before the samples are added. A control plate is also prepared, to which no bacteria are transferred. The plates are kept at room temperature in a cupboard.
2. Exposure to different media
Samples are exposed to air, water and fake blood for 10 days to spot changes over time. Trials are run at room temperature and 50%-60% humidity:
Air: the sample is exposed directly to the air.
Water: the sample is completely submerged in a bowl of water.
Blood: drops of fake blood made of flour and water are dropped onto the sample; red colouring is added in order to see how the liquid moves within the sample.
RESULTS
1. Biodegradability – Agar Plate Test
All agar plates exhibited substantial bacterial growth; the control plate also showed growth, but less than the others, which could be because bacteria were already present on the sample. The plastic went through rapid decomposition, so it is in fact biodegradable, but it could be prone to bacterial growth.
Figure 5: Biodegradability test results (start of week; end of week, front; end of week, back)
2. Exposure to different media
Air
The sample exhibited little change throughout the 10 days; there was no cracking or other obvious change. After extending the trial to a month, the only observable change was a slight stiffening of the sample.
Figure 6: Sample's exposure to air (days 1, 2, 5, 10 and 30)
Water
The sample underwent quite a drastic change over the 10 days. After a day, the colour of the sample went from transparent to white, and it became very brittle; using the slightest force to pick it up would result in breaking. After around 5 days, mould started to grow in the water, creating a foul smell.
Figure 7: Sample's exposure to water (days 1, 2, 5 and 10)
Fake Blood
The sample showed little change, as only 3 drops of fake blood were put onto it. The sample absorbed the fake blood, which diffused throughout it; the plastic itself did not undergo much change in terms of appearance or texture.
Figure 8: Sample's exposure to fake blood (days 1, 2, 5 and 10)
FABRICS
RAW MATERIALS
1. Structure Comparison to Store-Bought Napkins
The structure of store-bought napkins was analysed to identify the fabrics needed. The brands analysed were Whisper, Seventh Generation, Laurier and Kotex.
Whisper (240mm long, 1-2mm thick): The napkin has small holes on the top layer for increased breathability. It has 3 layers in total, with the top and bottom layers forming a pocket that encloses the absorbent core.
Kotex (410mm long, 4-5mm thick): The napkin has mesh-like holes on the top layer, with borders to prevent leakage. There are 4 layers (forming a pocket), with a thin sheet underneath the top layer and an absorbent core made from cotton fibre.
Seventh Generation (270mm long, 2-3mm thick): The napkin does not have holes on its top layer, though the layer is slightly rough. The absorbent core contains a small amount of fibre, and 3 layers form a pocket.
Laurier (170mm long, 1mm thick): This napkin is the smallest of the 4. Its top layer has no holes and the layers are difficult to separate. Three layers are used to form a pocket.
(Close-up photographs of each napkin's top layer and dissections of each napkin accompanied these descriptions.)
Similarities are seen in the main structure: each napkin is composed of 3-4 layers, the top being a mesh-like sheet for breathability, the middle an absorbent core, and the bottom a waterproof plastic sheet with adhesive underneath. Differences are also observed: some top layers have holes whilst others are mesh-like, and some absorbent cores are a thin sheet whereas others contain cotton fibres.
2. Raw Materials Collected and Cost
A number of fabrics were acquired from a local market; the price in HKD for one yard (roughly an area of 1m²) or in a specific unit is listed next to each material. The dimensions of the sanitary napkin are going to be 14cm x 7cm, hence an area of 98cm² ≈ 100cm². Since 1m² is 10,000cm², the per-napkin price of each material is the listed price divided by 100, and is given in the third column of the table below. *PUL has cotton terrycloth as its inner layer.
Raw Materials List and Price:
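Since the price table itself was supplied as an image in the original layout, a worked example of the per-napkin conversion described above may be useful (the HK$20/yard price below is hypothetical, added purely for illustration):

# Convert a per-yard price (treated above as covering roughly 1 m^2 = 10,000 cm^2)
# into a per-napkin price for a 14 cm x 7 cm (~100 cm^2) napkin.
NAPKIN_AREA_CM2 = 14 * 7  # 98, rounded to 100 in the study

def price_per_napkin_hkd(price_per_m2_hkd):
    # 100 cm^2 is 1/100 of 10,000 cm^2, hence divide by 100.
    return price_per_m2_hkd / 100

print(f"HK${price_per_napkin_hkd(20.0):.2f} per napkin")  # HK$0.20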
PRELIMINARY CHARACTERISATION METHODS OF RAW MATERIALS
Preliminary tests are run on the raw materials to determine the most suitable fabrics for the absorbent layer and the waterproof layer, and to select the best fabrics to combine for further testing.
1. Nature of Fabric
3 drops of coloured water are dropped onto a small sample of each fabric. Observations are made to see whether the water is absorbed immediately, within 5 minutes or within 10 minutes, using a stopwatch to measure the time taken. Those in the 'immediate' section are categorised as absorbent and those in the '10 minutes' section are categorised as water-resistant. Those in the middle are considered for elimination.
2. Absorbency – Vertical Wicking
A vertical wicking test is performed, where each material is cut into strips of 15cm x 1cm. They are held in tension and suspended in air with approximately 1cm submerged in dyed water. The samples are left for 5 minutes so that water is drawn up the sample; the progression of water up the sample is then measured with a ruler. After these two tests, an elimination round takes place.
3. Absorbency – Absorption Ratios
A test to determine the absorption ratio is carried out to test the absorbency of the materials in more detail. It is measured by calculating the ratio of the mass of the fabric with water over the mass of the dry fabric: Absorption Ratio = (Wet mass)/(Dry mass). First, each sample's dry mass is measured with a top pan balance. The samples are then submerged in water and taken out to allow excess water to drip off, and the wet mass is measured afterwards.
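The ratio defined above reduces to a one-line calculation; a minimal sketch (illustrative only, with hypothetical masses rather than measured values), which also covers the retention ratio used later when a 70N weight is applied:

# Absorption ratio = wet mass / dry mass; the retention ratio uses the mass
# re-measured after a weight has squeezed water out of the sample.
def mass_ratio(measured_mass_g, dry_mass_g):
    return measured_mass_g / dry_mass_g

dry, wet, after_weight = 1.2, 9.6, 6.0  # hypothetical masses in grams
print(f"absorption ratio: {mass_ratio(wet, dry):.1f}")           # 8.0
print(f"retention ratio:  {mass_ratio(after_weight, dry):.1f}")  # 5.0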
RESULTS AND MIXES CHOSEN
1. Nature of Fabric
Figure 14: Results at different times (before testing; immediately; after 5 minutes; after 10 minutes)
Absorbent group: Gauze, wood pulp, cotton balls, cotton.
Waterproof group: Cotton flannel, PUL, fleece and wool.
Middle group: Flax, hemp + cotton and cotton terrycloth.
2. Absorbency – Vertical Wicking
Figure 15a: Experiment Setup
Figure 15b: Using a ruler to measure length
Table 2: Results for vertical wicking, top to bottom: most absorbent to waterproof
This gives a general indication of where each fabric lies on the spectrum. The 0.6cm for PUL is unreliable because only the cotton terrycloth layer had absorbed water; in application, the terrycloth layer will not come into contact with liquid, so PUL can be considered a waterproof fabric.
3. Absorbency – Absorption Ratios
Figure 16: Absorption ratios of raw materials
The materials are separated as those on the right do not absorb water immediately. The chart shows that the materials on the left have greater absorbency, with gauze pads being the most absorbent. At this point, the eliminated raw materials include:
• Cotton flannel, hemp + cotton, flax – neither absorbent nor waterproof
• Wool – fibre-based, therefore cannot be used as a waterproof layer
• Wool fabric – high cost of $0.80 per napkin
• Wood pulp – has a lot of holes, not optimal for an absorbent layer
4. Mixes Chosen
Fabrics remaining: gauze, cotton balls, cotton fabric, PUL and fleece, along with the homemade plastic. These 6 raw materials are made into the following combinations:
Mix 1 – Gauze, Cotton, PUL
Mix 2 – Gauze, Cotton B, PUL
Mix 3 – Gauze, Cotton, Plastic
Mix 4 – Gauze, Cotton B, Plastic
Mix 5 – Gauze, Cotton, Fleece
Mix 6 – Gauze, Cotton B, Fleece
Table 3: Combination List
Gauze is treated as the top layer for all combinations, as it has holes for breathability and the strongest absorbency. The second material is the absorbent core, and the third material is treated as the waterproof bottom layer. The combinations then go through further characterisation to identify the best combination.
Figure 17: Sanitary napkin combinations
CHARACTERISATION METHODS OF MIXES
Microscopic Images
Microscopic images of the remaining fabrics are taken with a Nikon Eclipse E600 at 50x magnification to obtain a detailed view of each fabric's structure.
1. Absorbency
Absorbency is measured by calculating the ratio of the mass of the fabric with water over the mass of the dry fabric (Absorption Ratio = Wet mass/Dry mass). A retention ratio is also measured, where a weight of 70N is placed on top of the material and the mass is measured again. First, absorption and retention ratios are measured for the mixes. Secondly, absorption ratios are compared between each mix and the raw materials used for that mix.
2. Comfort
Comfort is measured by performing the cantilever test, a standard test method that measures stiffness by employing the principle of cantilever bending of the fabric under its own weight [12]. A piece of fabric with dimensions 6 inch x 1 inch is placed on a horizontal surface and moved slowly by hand with the movable slide on top. The fabric keeps moving until its edge touches the inclined surface set at an angle of 41.5º, taking care not to change the initial position of the fabric. The measurement on the ruler is then read to determine how much fabric is bent.
Figure 18: Diagram showing the principle of the cantilever test
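For reference, the overhang reading from this test is conventionally converted into a stiffness value using Peirce's relations, which underlie the standard cantilever method (a sketch under those standard assumptions; the sample values are hypothetical, not measurements from this study): the bending length is half the overhang at which the fabric tip reaches the 41.5º plane, and flexural rigidity grows with its cube.

# Peirce cantilever relations, as used in standard fabric stiffness tests.
# overhang_cm: length of fabric hanging when its tip reaches the 41.5 degree plane.
def bending_length_cm(overhang_cm):
    return overhang_cm / 2.0

def flexural_rigidity_mg_cm(overhang_cm, areal_mass_mg_per_cm2):
    # Flexural rigidity G = w * c^3 (in mg*cm), for areal mass w in mg/cm^2.
    c = bending_length_cm(overhang_cm)
    return areal_mass_mg_per_cm2 * c ** 3

print(flexural_rigidity_mg_cm(6.0, 15.0))  # 15 * 3^3 = 405.0 mg*cm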
Ideally, a machine designed for this purpose, e.g. the Shirley Stiffness Tester [13], should be used; however, no such machine was available for this study, so a simplified version was recreated using a table and cardboard. The bending length is measured for the raw materials first, for both weft and warp (where applicable); a decision is then made on whether to use the warp or weft of each fabric in each combination. After that, the bending length is measured for the combinations and compared with the initial test. The test is performed at room temperature with a humidity of around 50%-60%.
3. Durability
Durability is measured through a tensile test performed on the combinations using the Tinius Olsen H5KS Benchtop Tester. Force-extension graphs are produced for each combination; the largest load is recorded and the Young modulus measured. The Young modulus gives an indication of the material's stiffness and flexibility, which also relates to the comfort aspect. This is to determine the materials' resistance to force, in response to movement when being worn.
4. Drying Time
Drying time is measured for each raw material excluding the homemade plastic, as it cannot be submerged in water. All fabrics are submerged in water and squeezed until no more water drips, then dried on a flat surface. Drying times are determined by touch, with the fabrics checked every 5 minutes.
RESULTS
Microscopic Images
Figure 19: 50x magnification of the raw materials (gauze; cotton fabric; cotton balls; PUL fabric, top and bottom; fleece, top and bottom; homemade plastic)
A detailed view of the structure of each raw material can be seen in the images above. The images show how durability depends on how a fabric is woven: most of the fabrics have a weave structure, making them stronger than the non-woven and amorphous materials.
Structures:
Gauze – weave
Cotton Fabric – weave
Cotton Balls – non-woven
PUL Fabric – weave
Fleece – weave
Homemade plastic – amorphous
1. Absorbency
Figure 20: Absorption and Retention Ratios of Mixes
There is a clear trend that the retention ratio is smaller, which is reasonable as the applied force squeezes water out. Surprisingly, mixes 3 and 4 (with plastic) have the smallest difference between their two ratios, despite having the smallest absorption ratios. Mixes 5 and 6 (with fleece) have the highest ratios.
Figure 21: Comparison of absorption ratios between raw materials and mixes
In mixes 3 and 4, the ratio of the mix is significantly lower than the ratios of its top and core layers, whilst the ratios of mixes 5 and 6 are higher than those of their cores. However, there is no set pattern for mixes 1 and 2, where the ratio of the mix fluctuates between those of its components. Overall, the plastic mixes have the lowest absorption ratios but are best at retention; in contrast, the fleece mixes have the highest absorption ratios, and the PUL mixes vary in terms of absorption and retention.
2. Comfort
The figure below shows the simplified machine setup. Within the raw materials, only cotton balls and plastic have no direction in terms of the woven structure (weft and warp). A bar chart is generated below; the greater the bending length, the greater the stiffness. A pattern of the warp direction being stiffer is seen. Of the 3 absorbents, cotton balls are the stiffest; of the 3 waterproof materials, the homemade plastic is the stiffest (when the weft direction is used for the others). Therefore, the weft direction is used in the napkin, as it is less stiff and more flexible, which would ideally result in more comfortable wear. The only exception is gauze: its weft direction is too weak and can easily break even when handled with bare hands, hence the warp direction is used.
Figure 22: Simplified bending length test
Figure 24: Bending Length of Combinations
*Combinations used: Gauze – Warp, Cotton – Weft, Fleece – Weft, PUL – Weft (where applicable)
The chart above shows the bending length of the combinations, along with the bending lengths of their different layers. The bending length of the combination seems always to be the largest of all its components, which is reasonable as it combines different materials; the only combination that does not follow this trend is 4. Surprisingly, the homemade plastic combinations have the lowest bending lengths, even though plastic has the highest bending length of the 3 waterproof materials.
3. Durability – Tensile Test
The six combinations went through a tensile test to determine their durability; the largest load is read from the machine and used to determine the Young modulus. The figures and tables below summarise the results collected and the calculations carried out. The test is stopped when one of the layers breaks. The mixes that can endure the largest loads are 1 and 5, both exceeding 200N, and hence both have the highest Young moduli. Surprisingly, the mix with the lowest endurance to force is mix 6, which also includes fleece as its waterproof layer. For mixes 1 and 2, with PUL as the bottom layer, the one with cotton fabric has the higher durability (236.93N against 36.75N) and Young modulus (21.62MPa against 3.22MPa). This can be attributed to the cotton cloth's weave structure being stronger than the cotton balls' structure. The same applies to mixes 5 and 6, with fleece as the bottom layer, where cotton fabric gives the larger load (245.58N against 30.88N) and higher modulus (31.94MPa against 1.93MPa). However, for mixes 3 and 4, with homemade plastic as the bottom layer, the absorbent layer does not seem to make a difference, as they do not follow the trend mentioned above. Instead, the homemade plastic is the weakest part of the mix, as it is the first to break in every run. The largest loads, 43.3N (mix 3) and 53.9N (mix 4), happen to exceed the largest loads of mixes 2 and 6, which use cotton balls as the absorbent layer. The graph for mix 3 appears very uneven due to the plastic's thickness varying at different points. Overall, there is a trend that using cotton fabric rather than cotton balls increases the durability of the mix, but this does not apply to mixes with homemade plastic as the bottom layer, where it does not seem to make a significant difference.
Figure 25: A sample going through a tensile test with the Benchtop Tester (provided by CUHK)
[The force-extension graphs for the six mixes appeared here.]
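As a rough sketch of how the Young moduli quoted above could be derived from the Benchtop Tester's force-extension data (the sample dimensions below are hypothetical and do not reflect the study's actual processing):

# Young's modulus E = stress / strain = (F / A) / (dL / L0), taken from the
# initial linear region of the force-extension curve.
def young_modulus_mpa(force_n, width_mm, thickness_mm,
                      extension_mm, gauge_length_mm):
    stress_mpa = force_n / (width_mm * thickness_mm)  # N/mm^2 equals MPa
    strain = extension_mm / gauge_length_mm
    return stress_mpa / strain

# Hypothetical reading: 50 N at 2 mm extension, 25 mm x 0.5 mm cross-section,
# 50 mm gauge length.
print(f"{young_modulus_mpa(50, 25, 0.5, 2, 50):.1f} MPa")  # 100.0 MPa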
4. Drying Time
Out of all the raw materials excluding plastic, cotton balls take the most time to dry. Surprisingly, both cotton and gauze dry faster than fleece and PUL, with cotton cloth taking the shortest time. In addition, cotton balls undergo irreversible deformation after drying: they harden and stiffen, making them unlikely to be reusable.
Figure 32: Drying time of Raw Materials
DISCUSSION & CONCLUSION
In this study, the possibility of creating a low-cost sanitary napkin for rural China has been tested against a range of different aims, as well as identifying biodegradable and reusable alternatives to the plastic found in napkins. A number of raw materials that perform generally better in terms of absorbency were selected; these mixes were then characterised in terms of absorbency, durability, comfort and drying time. The homemade plastic's preparation is generally simple, with materials that are easy to find; a large quantity can be produced at once whilst remaining very flexible. The major downside is that it weighs significantly more than the other fabrics. It performed well on exposure to air and fake blood, but less well when submerged in water. However, this is not an obstacle, as the top and middle (absorbent) layers will act as a barrier preventing excessive liquid from reaching the waterproof layer. The agar plate tests also showed that the homemade plastic is indeed biodegradable; albeit quite prone to bacteria, it is still a viable solution for sustainability in sanitary napkins. The preliminary tests of the raw materials were useful for determining the nature of each fabric (absorbent or waterproof) and selecting the more absorbent materials; the simple water-drop test was particularly effective in differentiating these. The vertical wicking test and absorption ratios were helpful for further measuring the absorbency of each raw material, and the results collected from the different methods can be compared against each other to improve validity. The absorbency tests using ratios demonstrated the absorbency and retention of the mixes clearly, with the homemade plastic mixes having the lowest absorbency. This is surprising, as all mixes have similar top and absorbent layers; it could be because water is not absorbed evenly throughout the material, or because the weight used in the retention test was not equally distributed across the sample. Although the comfort tests were performed without a professional machine, they give a good sense of the bending stiffness of each fabric and mix. The only error to mention is the friction between some fabrics and the table, which made it difficult to slide the fabric across the table and reduces the validity of the readings.
The durability tests using a Benchtop Tester gave a clear indication of the tensile strength of each mix. However, the results for the plastic mixes can vary due to the inconsistent thickness of the plastic. It is seen that cotton fabric mixes are more durable than cotton ball mixes. Drying times are important, but judging dryness by touch is a subjective approach that can produce biased results. Moreover, the varied surface area of each sample may lead to different results. One limitation is the small number of fabrics collected; there was only one choice for the top layer (gauze) and only two choices for the core layer (cotton fabric and cotton balls), hence the limited possibilities for this sanitary napkin. Another limitation is the high price of the glycerol used to make the plastic for two of the mixes: 1ml cost HK$0.176, and 15ml is needed to create enough plastic for a single napkin (HK$2.64). This outweighs the advantage of being able to reuse the plastic multiple times. The other four mixes all cost under HK$1 and have the potential to be reused; PUL and fleece are both non-biodegradable, so they are used in this study for the purpose of reuse, which can also reduce cost and help the environment. Though no mix performed outstandingly, some points can be made from the tests:
• Cotton fabric is more durable than cotton balls, is more flexible and dries faster; it might be more applicable as a middle layer.
• Gauze is the best surface material for the top layer as it has holes for breathability.
• Homemade plastic is the most susceptible to fracture of all the waterproof materials.
To conclude, the biodegradable plastic can have uses beyond sanitary napkins, and the characterisation methods can be performed on more fabrics, combined with user feedback, to give more reliable results, as qualities like comfort are difficult to measure. This study can serve as a standard for testing materials used in menstrual products.
BIBLIOGRAPHY
1. Shah, Shobha P., et al. "Improving Quality of Life with New Menstrual Hygiene Practices among Adolescent Tribal Girls in Rural Gujarat, India." Reproductive Health Matters, vol. 21, no. 41, 2013, pp. 205-213. JSTOR, www.jstor.org/stable/43288976.
2. Suneela, G., Nandini, D., & Ragini, S. Socio-cultural Aspects of Menstruation in an Urban Slum in Delhi, India. Reprod. Health Matters. 2001; 9(17): 16-25.
3. Vostral, S. L. (2008). Under Wraps: A History of Menstrual Hygiene Technology. Lanham, MD: Lexington Books.
4. "Plastic Based Sanitary Pads Are Not Only Harmful to the Environment but Also Your Body." Hindustan Times, 3 Apr. 2018, www.hindustantimes.com/fitness/plastic-based-sanitary-pads-are-not-only-harmful-to-the-environment-but-also-your-body/story-Kk4wrI6QOyJCkP7bwEh0rI.html.
5. Q&A: Why water and vinegar? (4th Aug 2011), http://green-plastics.net/posts/69/qaa-why-%20water-and-vinegar/
6. Truents. "Physical Properties and Characteristics of Fabrics." Textile School, 15 Apr. 2018, www.textileschool.com/199/physical-properties-and-characteristics-of-fabrics/
7. "Absorbency Testing." Absorbency Testing Method AATCC TS-018 and AATCC 79, www.manufacturingsolutionscenter.org/absorbency-testing.html
8. Kan, Chi-wai, et al. "Comparison of Test Methods for Measuring Water Absorption and Transport Test Methods of Fabrics, Measurement." DeepDyve, Wiley Subscription Services, Inc., A Wiley Company, 1 Feb. 2017, www.deepdyve.com/lp/elsevier/comparison-of-test-methods-for-measuring-water-absorption-and-F8x5uNlaPL.
9. Beskisiz, E., Ucar, N., & Demir, A. (2009). The effects of super absorbent fibers on the washing, dry cleaning and drying behavior of knitted fabrics. Textile Research Journal, 79(16), 1459-1466.
10. Song, Guowen. Improving Comfort in Clothing. Woodhead, 2011.
11. Chenganmal, M., et al. "Different Techniques in Measuring of Flexural Rigidity of Fabrics." fibre2fashion, Textile Review, Apr. 2011, www.static.fibre2fashion.com/ArticleResources/PdfFiles/57/5670.pdf
12. Annual Book of ASTM Standards, vol. 07.02, 5732-95.
13. "Fabric Stiffness Testing | Determination of Fabric Stiffness by Shirley Stiffness Tester." Textile Learner, www.textilelearner.blogspot.com/2012/02/fabric-stiffness-testing-determination.html
14. "Improve Textile Durability." www.rubtester.com/improve-textile-durability/
To What Extent Do Individual Musical Elements Contribute Towards Emotional Response? Justin Chan (Year 13, Peel)
https://www.mtu.edu/magazine/research/2016/stories/emotions-technology/
INTRODUCTION
One of many everyday encounters that the general population experiences is the sound of Music. Whether through digital media, live performances, or even as part of a religious ceremony, Music is a constant in our daily lives. Music has been found in every known culture, throughout the history of mankind, and has been constantly evolving into the Music we listen to today. As a musician myself, I have always sought an understanding of Music and the reasons behind our deep emotional connection with it. It puzzles me, and many others who have investigated this topic, how the human brain can make sense of auditory stimuli such as Music and translate them into something we can experience at a personal level. I have therefore decided to write my Extended Project Qualification on this topic, and to formulate a method to investigate the emotional effects of Music on the human brain. I will start with a section introducing the topic and the knowledge necessary to address the mystery of Music.
LITERATURE REVIEW
Music is widely known as an art form with a powerful grip over our emotional state during the experience, and sometimes long after. The acceptance of musically evoked emotional response can be seen in recent papers such as A. Lamont & T. Eerola (2011) and in examples I will give later on; Music is indulged in by a large majority of the general population, and its effects are evident. The potential effect of Music on human emotion is so substantial that it can be, and has been, used to symbolize ideas of emotion without words. For example, hearing the tune of Mendelssohn's 'Wedding March' could elicit very strong emotions of romance or love, while Chopin's 'Funeral March' could elicit emotions at the other end of the spectrum. There is no doubt that Music has an effect on our emotions - the real question is why. Before we begin, I would like to address several possibilities that could explain the emotional connection with Music, and explain why they can be discounted.
"The lyrics of a piece of music create an emotional connection"
In studies such as E. Bigand (2005), which used "classical" pieces of Music with no vocal part to investigate emotional response, no participants reported a completely neutral emotional experience throughout the experiment. Although this example does not indicate the effectiveness of vocals in eliciting an emotional response, it does demonstrate that Music elicits emotional responses without the need for vocals.
"We have associated music with emotions, and have learnt to 'feel' music"
This statement is the "nurture, not nature" argument for an emotional response towards Music, and theorizes that we have learnt to have an emotional response to musical cues through experience (e.g. learning to feel happy at the cue of "Happy Birthday"). Again, in studies such as Madsen, C. K. (1997), the participants had never been exposed to the extracts played prior to the experiment, and all showed varying degrees of emotional response towards them. Now that we have addressed these theories, it is possible to further investigate the reasons for an emotional response towards Music. As a Music student and musician myself, I have had a wide range of experience in performing and writing Music. I have always wondered why Music can be as powerful as it is, and how we communicate feelings through sounds. However, E. Bigand & B. Poulin-Charronnat (2006) state that "the ability to analyse surface patterns of pitch, attack duration, time... would not have any implication for musical experience". This is further supported by E. Bigand (2005): "emotional responses to music were only weakly influenced by musical expertise". Since the emotional difference in experiencing Music between 'musician' and 'non-musician' is so small, it would be meaningless to exclude one group or the other, as both groups 'understand' musical stimuli to a similar, almost indistinguishable level. Returning to our question (why does music elicit an emotional response?), I have decided to work on this problem with a reverse-engineering approach. Music is made up of 7 basic elements:
Sonority - The use of performing forces (e.g. instruments) in a piece of Music
Structure - The arrangement of sections along with the overall form of the piece
Melody - The sequence of single notes, typically in a tune
Texture - The arrangement and interaction of individual layers with one another
Harmony - The chordal progression of the piece, typically accompanying a melody
Tonality - The tonal centre of the piece of Music
Rhythm / Tempo / Metre - The pattern of note values, regardless of pitch, in terms of timing and repetition of patterns
It is evident that all of these elements, and their combined use in musical compositions, are responsible for eliciting different emotional responses in the listener. I would like to point out that Music could be split into more elements (up to 12), and it has long been a trait of Music to be built upon these elements. All musical analysis is based, more or less, on these 7 elements, as can be seen from examples such as M. Pagliaro (2016). To select the elements most important to investigate, I believe it is best to identify those that feature most prominently and consistently in Music since the beginning of the Baroque period (approximately 1600). In the end, I decided to study the most basic elements, featuring in the large majority of Music.
METHODOLOGY
Aim: To investigate the effect of individual musical elements on emotional response
Independent Variables:
- Melody
- Harmony
- Tonality (Major or Minor)
To first investigate the effect of Melody and Harmony independently, I have decided to use one piece of music, and isolate the melody from the harmony into two different tracks. To isolate melody from harmony, I have simply taken a piece of music and converted it into a MIDI file, editable by digital programs. I then separated the one line of notes associated with the melody from the chord progression (usually in a lower range than the melody). To then investigate the effect of Tonality, I have decided to use two separate pieces of music, one in a major key and the other in a minor key, each separated into their ‘Melody / Harmony’ tracks, thus forming a grand total of two groups of two tracks each.
Group I: Major Extract
- Track I: Melody
- Track II: Harmony
Group II: Minor Extract
- Track III: Melody
- Track IV: Harmony
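For illustration, the pitch-range split described above can be sketched in code. The sketch below is a minimal Python example using the pretty_midi library, with a hypothetical file name and split pitch, rather than the manual MIDI editing actually used; a fixed pitch threshold is only an approximation of the by-voice separation.

import pretty_midi  # assumed library; any MIDI toolkit would work similarly

SPLIT_PITCH = 60  # hypothetical threshold (middle C): melody above, accompaniment below

source = pretty_midi.PrettyMIDI("extract.mid")   # hypothetical file name
melody_out, harmony_out = pretty_midi.PrettyMIDI(), pretty_midi.PrettyMIDI()
melody_inst = pretty_midi.Instrument(program=0)  # program 0 = acoustic grand piano
harmony_inst = pretty_midi.Instrument(program=0)

for inst in source.instruments:
    for note in inst.notes:
        # Route each note by register, approximating the melody/harmony split
        (melody_inst if note.pitch >= SPLIT_PITCH else harmony_inst).notes.append(note)

melody_out.instruments.append(melody_inst)
harmony_out.instruments.append(harmony_inst)
melody_out.write("melody.mid")
harmony_out.write("harmony.mid")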
Choice of pieces: - Mozart - Piano Sonata in D Major, KV 311 - Chopin - Ballade in C Minor, Op. 23 No. 1
https://www.classicfm.com/composers/mozart/pictures/mozarts-15-birthday-facts/
I have chosen the Major Extract of Group I from Mozart's Piano Sonata and the Minor Extract of Group II from Chopin's Ballade, both pieces of music I am familiar with. I have chosen extracts of equal length (30 seconds) which both begin after a cadence point and end on a cadence point. The choice of Mozart is due to the complex character of his Music, expressed in simplistic language, making his Music easy to follow without lacking in emotional weight. There have been many words said about Mozart; however, I believe the most concise description of the character of his Music comes from the quote "Mozart's music is so pure and beautiful that I see it as a reflection of the inner beauty of the universe" by Albert Einstein. The choice of Chopin is due to the virtuosic nature of his work, written with the aim of communicating strong emotional ideas to his audience. I have chosen a relatively simple extract from his Ballade, to allow ease in listening to the work during the investigation. Again, there have been many descriptions of the emotional beauty of Chopin's work. "After playing Chopin, I feel as if I had been weeping over sins that I had never committed and mourning over tragedies that were not my own" is a quote from the poet Oscar Wilde, which I believe gives a very clear idea of the emotional weight of Chopin's pieces.
Dependent Variable:
- Degree of emotional response
After much trial and error with different scaling systems, I have decided to go with a simple 11-point linear discrete scale (0-10), as opposed to the continuous dual-axis scale (CRDI) formulated by J. A. Russell (1980) and used in various studies referred to in the prior section. In my previous trials, the continuous dual-axis scale proved difficult and inefficient to use. This was due to the complexity of defining each Dependent Variable (Arousal / Valence combination), and the incredibly slow and arduous procedure of data collection and processing. I will elaborate on the data collection process in later sections.
Control Variables
- Sonority
- Texture
- Length
Sonority: It is necessary to keep the performing forces of the extracts consistent throughout. To address this Control Variable, I have decided on the following:
- Both pieces / all 4 extracts are played by the piano
- The average loudness of the piece should be at -6dB
- Dynamics (and rhythmic expression) should be retained
Texture: It is necessary to have both extracts in the same (Melody Dominated) Homophonic texture to allow the Melody to be separated from the Harmony without modification of the original track.
- Both pieces must be homophonic and have a clear melody and accompaniment
- There must not be any motif or melodic engagement within the accompaniment
Length: The extracts must be of (approximately) equal length, to control the degree of emotional connection between the piece and the listener. Referencing E. Bigand (2005), this is a valid factor that could render the results unreliable and must be controlled.
- Both pieces / all 4 extracts must be the same length
- Both pieces must feature a similar structure
I have found it unnecessary to control the participants of the investigation, or to separate musician from non-musician. As referenced in the previous section, there is little to no difference in emotional experience towards Music between experienced and non-experienced listeners.
Sample
I had originally set out to obtain data from a sample consisting of all ages, genders, cultural backgrounds, and ethnicities. However, due to limitations, I only had access to a small age group of 15-19 years of age, predominantly of mixed ethnic background. However, there was good variation in cultural background and gender. There are definite limitations to the conclusions that can be drawn from the data extracted from such a small sample. Nevertheless, I believe that it could be representative of a vast majority of teenagers due to the large variation in musical, cultural, and ethnic background.
Data Collection and Distribution
I decided on a digital form to collect data after many trials with different data collection methods. At first, I wanted to use a 2-dimensional CRDI scale (as mentioned above) to investigate Emotional Responses. The CRDI (Continuous Response Digital Interface) scale reports data through two axes. The x-axis, named 'Valence', is used to describe the type of emotion felt (e.g. Sorrow at x = -1, Joy at x = +1), while the y-axis, named 'Arousal', is used to measure the degree of emotional response. I did not have access to a digital interface that could provide such a scaling method, hence I used pen and paper on a similar 2D scale to retrieve data points. The graph can be seen in the Appendix. There were a few problems with this type of data collection. My reasons for the change in scaling type are as follows:
Original CRDI Scale, on paper:
- No relative point upon the first extract
- Very time-consuming due to its personal nature
- 2D scale is too complicated to achieve reliable results
- Paper processing
- Definitions for each axis are too difficult to understand
New Method, digital form online:
- The measurement is taken relative to the first extract
- Extracts are not named in any particular order
- New 'synthesized' audio files to be used
- Discrete 1D scale
- Comprehensible scale definitions
- Very efficient in terms of data collection and processing
Through simplifying the scaling, I cut down on the time necessary per data point acquired, and obtained more reliable comparisons between Independent Variables. The digital form consists of a few sections to obtain the most reliable results:
- Title: The Nature of Emotional Response Towards Music
- Introduction: What is being investigated?
- Instructions: Clarify what needs to be done
- Definitions: Define common terms needed to understand the questions
- Question: Group I, Track I - Emotional Intensity (11-point discrete scale)
- Question: Group I, Track II - Emotional Intensity (11-point discrete scale)
- Question: Group II, Track III - Emotional Intensity (11-point discrete scale)
- Question: Group II, Track IV - Emotional Intensity (11-point discrete scale)
Since this was done using a digital form, I distributed it through email, collected the data using Google Forms, and processed the data using Google Sheets. I will elaborate on the data processing in the following section. Scale definitions and the form itself can be found in the Appendix. The definitions for each term were written by myself and aimed to be as concise and clear as possible, whilst relating to the survey.
Additional Notes
It was somewhat challenging to have my study distributed to potential participants. It was understandable, however, that it was necessary to ascertain that the potential psychological effect of the study would not be harmful, and that the contents of the study would not overstep ethical boundaries and/or rules within the School. It was made known to me that Soundcloud, the streaming service used to play the extracts, streamed explicit tracks after the extract had finished playing. This was obviously an ethical issue and would have interfered with the results of the experiment. As a result of the check, I took measures to keep further tracks from playing automatically. The form was sent to all participants without prior notification. Their participation was not in any way mandatory, so the survey was completed of their own free will. Furthermore, I did not provide any reward for completing the form, and did not collect any personal data. The ethics of this social experiment were discussed with my supervisor, and I collected the minimal amount of data in order to preserve privacy and to ensure that I did not overstep any ethical boundaries.
PRELIMINARY TRIALS
Trial I
At first, I had titled my project "What causes emotional response in music?", with the aim of investigating, using scientific techniques, the reason behind our emotional connection towards auditory cues. In my initial ideas, I had assumed that Harmony was the reason behind emotional response towards Music; in other words, that Music without harmony or harmonic interest would evoke little to no emotional response. I had assumed this due to my personal experience with Music, and my emotional 'intimacy' with harmony. I have written many pieces of Music in the past and learnt to improvise through band performances. During all of these experiences, it seemed evident to me that harmony and chordal progression within a diatonic key were most responsible for emotional connection. I also believed that modulation and its heightened effects on emotional response were solely due to the harmonic (usually subtly chromatic) movement required to bridge the two keys, creating a very effective canvas for emotional communication. Based on this assumption, my Independent Variable would be the individual elements within harmony, such as Major / Minor triads, fragments of common progressions, frequency relations, etc. The dependent variable would be the degree of emotional response, and through speaking with my project supervisor I decided to use the CRDI scaling system, due to its continuous nature. I played and recorded four tracks of tonally different triads with the same tonic, then used digital mixing techniques to process them; two were synthesized and two were played acoustically. I then created a graph of Arousal against Valence, both defined, for the participants to record their emotional response. This was intended to imitate the CRDI method without the digital hardware necessary. Five participants tried this method. The data collection process was incredibly slow, and explaining the axes was an arduous task. The tracks were too basic and therefore yielded many zero values in data collection. It was at this point I realised that there were flaws in my theory and I had to revise it.
Trial II
In this trial, I revised my assumption about harmony and deduced, through a number of papers, that emotional response towards Music must be a combination of all the elements that make up the musical work, and not just the element of harmony. I therefore had to broaden my experiment to multiple elements within a piece of Music, one of which would be harmony. I also carried over the variable of synthesized vs. acoustically played triads from Trial I. However, this second trial would be done using an extract of Music, not just one triadic chord. The aim of this trial was equivalent to my final aim (to investigate the effect of individual musical elements on emotional response). I therefore used the same independent variables and the same dependent variable as my final investigation, with the exception of tonality. My data collection method, however, was not a digital form, but rather the same analogue CR(D)I scaling system used in Trial I. This form of data collection again proved difficult, and after a similar number of participant trials, I realised there were still flaws in my investigation, some the same as in Trial I.
Trial III
Differences from the previous trial:
- The removal of all acoustic tracks, due to recording ability and clarity
- Using two pieces (four extracts) to investigate the additional variable of tonality
- Data collection and scaling system
The switch to the discrete scale was a product of the difficulty in understanding the scaling definitions, the variance between results due to there being no relative point, and the difficulty in finding a trend between my variables. It was for these reasons that I switched to the scaling system used in my final methodology.
I also decided to use only synthesized tracks, played by hand. To do this, I recorded myself playing an extract of each of the two pieces through a MIDI interface, and processed the tracks through Logic 9, as in my previous trials. The reason for this was quality control of the extracts, to provide ease of listening and response. The results will be processed in such a way that the difference in emotional response between the types of musical extracts will represent the dominant element and its 'degree' of dominance/importance in evoking emotions in the listener. The standard deviation will also be used in processing the data. This will show the variation in responses, and therefore suggest a degree of common response between participants. This is important when using the results to represent the general population. It was entirely due to feedback from my project advisor and trial participants that I made the changes leading to my final experiment. Although the original project idea was my own, the evolution of the project and methodology towards its final state was mostly due to the reflection, reactions, and criticism given to me by everyone involved in the project, whether within the project or as one-time participants.
RESULTS AND ANALYSIS
For my results, I have obtained data points for the Dependent Variable of Emotional Response Intensity on a scale of 0 (no emotional response) to 10 (prominent emotional response), for each Independent Variable. Full results are included in the Appendix. These are the data averages obtained through the experiment, which consisted of 33 participants. The Harmony element for the Major and Minor groups averages 1.6 IP and 2.0 IP (Intensity Points) higher than the Melody element respectively. Surprisingly, all extracts had a similar Standard Deviation of around 2.2 - 2.3 IP, which shows some overlap between Melody and Harmony data points. Nevertheless, at this point the data clearly suggest that Harmony has a noticeably greater effect on Emotional Response for the large majority of participants. Now addressing the difference between the two groups: there is a clear increase in average Emotional Intensity for the Minor extracts compared to the Major extracts, which can also be observed between individual elements of varying tonality. We might conclude that extracts in a minor key evoke a stronger emotional response in listeners than extracts in a major key. However, it is important to do more trials to investigate this further, as it is uncertain whether other variables affected this increase. As discussed with my project supervisor, I agree that the IP mean values for Harmony and Melody are not enough on their own to show an overall difference, due to the large standard deviation between participants. It was therefore advised that I carry out a Paired Samples T-Test to analyse whether these results were obtained by chance, or whether they truly represent a difference in emotional responses between these elements.
I will carry out the test hoping to reject the null hypothesis that these results, and the differences between linked data points, were caused by chance.
Paired Samples T-Test
Necessary assumptions:
- That the dependent variable is continuous
- That the observations are independent of one another
- That the dependent variable is (approximately) normally distributed
- That the dependent variable does not contain any outliers
Addressing assumptions:
- The dependent variable is not continuous. However, the 11-point scale is numerical and evenly spaced, so for simplicity I will allow this to pass.
- The observations of each extract are independent of one another.
- The dependent variable is distributed with a skewed bell-shaped curve, with a preference for higher Harmony IP levels. However, these results are only for a sample; the population could experience emotional response in a more normally distributed pattern, so I will use this test as a model to represent the data.
- The dependent variable does not contain a significant number of outliers.
I have therefore decided to continue investigating this using the Paired-Samples T-Test.
T-Test No. 1 (Major Extracts)
For differences Harmony (Major) to Melody (Major):
ΣD (sum of differences, H − M): 52
ΣD² (sum of squared differences, H − M): 288
(ΣD)² (square of sum of differences): 2704
N (total subjects): 33 participants
t value (calculated) = 3.6
DF (degrees of freedom) = 32
α (significance level) = 0.05
p value (calculated) = 0.000531 (one-tailed)
p < α (by about 100 times)
Conclusion: Very statistically significant, therefore reject the H0 null hypothesis. One-tailed statistically significant, therefore Harmony evokes a greater emotional response than Melody.
T-Test No. 2 (Minor Extracts)
For differences Harmony (Minor) to Melody (Minor):
ΣD (sum of differences, H − M): 64
ΣD² (sum of squared differences, H − M): 286
(ΣD)² (square of sum of differences): 4096
N (total subjects): 33 participants
t value (calculated) = 5.0
DF (degrees of freedom) = 32
α (significance level) = 0.05
p value (calculated) = 0.00000994 (one-tailed)
p < α (by about 5000 times)
Conclusion: Very statistically significant, therefore reject the H0 null hypothesis. One-tailed statistically significant, therefore Harmony evokes a greater emotional response than Melody.
Test 2 is much more statistically significant than Test 1, although both are very far below the critical significance level (0.05).
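For reference, the t statistics above can be recomputed directly from the tabulated summary values using the paired-samples formula t = (ΣD/N) ÷ √[(ΣD² − (ΣD)²/N) / (N(N−1))]. A minimal Python sketch (standard library only, not part of the original analysis) reproduces the t ≈ 3.6 and t ≈ 5.0 values:

import math

def paired_t(sum_d, sum_d2, n):
    # Paired-samples t statistic from the summary values tabulated above
    mean_d = sum_d / n                           # mean difference (Harmony - Melody)
    var_d = (sum_d2 - sum_d ** 2 / n) / (n - 1)  # sample variance of the differences
    return mean_d / math.sqrt(var_d / n)         # t = mean difference / standard error

print(f"Major extracts: t = {paired_t(52, 288, 33):.1f}, df = 32")  # t = 3.6
print(f"Minor extracts: t = {paired_t(64, 286, 33):.1f}, df = 32")  # t = 5.0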
EVALUATION
As can be seen, we have obtained a very simple yet effective set of results that show a statistically significant difference between emotional response towards Melody and towards Harmony. As a by-product of this methodology, we can also see that Minor passages are seemingly more effective than Major passages at evoking emotional response. Although the sample was rather small, I believe that these results could very well be representative of the general population - especially with the supportive p-values obtained by the Paired T-Tests. One possible interfering variable lies in the Minor extracts, which I played slightly rubato (with freedom of tempo) and with some dynamic contrast. On the other hand, the Major extract was played stylistically, with terraced (suddenly changing) dynamics and a strict tempo. The increase in emotional response towards the Minor extracts could be due to the tonality of the piece, but also to the rubato and gradual dynamics used to play this extract. It is because of this that I will not draw the conclusion that Minor tonality evokes stronger emotions than Major tonality. I believe that this investigation was successful, as it yielded conclusive results despite the limitations I had to work with. Although this investigation did not lead to very detailed conclusions about the effects of Music on human emotion, it provides effective ideas as to how we can break down Music into its constituent elements and conduct a simple experiment, which could lead to further work. It is evident that there is a lot of work to be done when it comes to investigating Music. There are about 7 elements, which can be broken down further. It would be feasible to replicate this experiment with other variables, and then to home in on the elements yielding the highest emotional response. From there, it would be possible to investigate the reasons behind their emotional effect, e.g. using frequency relationships, theoretical tonal keys, etc.
Improvements I would have liked to make to my experiment include increasing the sample size and expanding its range to cover more variation in age, culture, and background, as well as using more extracts (with the same variables) to provide more data to work with. Although the Paired T-Tests I carried out conclude that I can reject the null hypothesis (the difference happening by chance), I still feel that there was insufficient data to work with. To reinforce these results, given time and resources, I would replicate this investigation on a much larger scale, and process the obtained data with a similar method. Due to the nature of this investigation, further work would depend on results obtained from future experiments and investigations. If there are further differences between Musical elements, and there are elements that are significantly more effective at evoking emotional response than others, the best approach would be to home in on more specific sub-elements. This could even be followed by 'disassembly' of the sub-element, to understand why certain auditory stimuli affect our brains through an emotional response. If no further differences are found, the method must be revised and a reason must be found for the loss of effect.
CONCLUSION
From this experiment, we can conclude that Harmony is more effective than Melody at evoking an emotional response. This means that the chordal and harmonic progressions, usually played by the accompanying parts, are largely responsible for the feeling of a piece. Although harmony is dominant when it comes to emotional responses, melody is also responsible for a portion of the piece's emotional effect. It is the combination of both elements that creates a piece of Music, and thus it is the combination of both harmony and melody that contributes to the overall feeling of the piece. Although the Minor key extracts evoked stronger emotions than the Major key extracts, it would be a reach to conclude that a piece rooted in a Minor key would be any more effective than a piece rooted in a Major key; there were uncontrolled variables between extracts that do not allow this conclusion to be drawn from this experiment. Overall, this was a successful experiment, and its aim has been reached with a solid set of expected results. Given time and resources, I would like to continue the study for all of the musical elements to further analyse our innate emotional connection with Music.
BIBLIOGRAPHY
Date last accessed for all papers: 31/03/2019 | Date last accessed for all sites: 22/03/2019
Bigand, E., & Poulin-Charronnat, B. (2006). Are we "experienced listeners"? A review of the musical capacities that do not depend on formal musical training. Cognition, 100(1), 100-130.
Bigand, E., Vieillard, S., Madurell, F., Marozeau, J., & Dacquet, A. (2005). Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts. Cognition & Emotion, 19(8), 1113-1139.
Lamont, A., & Eerola, T. (2011). Music and emotion. Musicae Scientiae, 15, 1-7. doi:10.1177/1029864911403366.
Madsen, C. K. (1997). Emotional response to music. Psychomusicology: A Journal of Research in Music Cognition, 16(1-2), 59-67.
Pagliaro, M. J. (2016). Basic Elements of Music: A Primer for Musicians, Music Teachers, and Students. Rowman & Littlefield Publishers, Inc.
Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161.
www.statisticshowto.datasciencecentral.com
www.statisticssolutions.com
www.statistics.laerd.com
www.statsdirect.com
APPENDICES
Appendix A - MIDI Manipulation & Extracts
Below are the methods used to process the extracts used in the experiment, and the details of the separation process. In prior trials, I had recorded the extract acoustically into Logic Pro 9 and used the program to clean up (EQ), compress, and make it similar to the synthesized extract.
In the later trials, I simply used a MIDI connection to record the extract directly into the program, and later performed basic EQ, compression, and mastering to create very similar extracts in terms of timbre, relative volume, etc.
These are all tools used to "clean up" the recordings of extracts, and manipulate the audio data into a similar observable format. As can be seen, iZotope Ozone 8 was used in processing these tracks.
Appendix B - Scaling Systems & Changes
This addresses the changes in the scaling systems I have used in all generations of the trials. This includes definitions of scales and some example data. Previous trials of my investigations have used an analogue graphical scale that I have handwritten.
The definitions of the scales have been included on the sheet and I had aimed to collect continuous data for both variables. The final experiment used a much simpler scale with only one variable. However, the scale definitions are written in more detail than before but are limited to a discrete 11-point scale as shown below.
Appendix C - Data Collection
As can be seen above, data collection for the previous trials was messy and inefficient. The data for the current trial were all condensed into the Google Form's data collection system. This allows for quick data processing and manipulation.
Google Forms also allows direct conversion of data to an Excel spreadsheet. This allowed me to carry out quick and comparative calculations between sets of results.
I have used meta-chart.com to process the desired data points through bar charts or graphical methods. Examples of these charts can be seen in the above sections.
Appendix D - Data & Results
The raw data obtained through the analogue trials were difficult to comprehend for both me and the participants. There was little to no correlation found and values were difficult to obtain. The final data set showed very strong patterns. The raw data is shown below:
Making a Successful Railgun Yin En Tan (Year 13, Sun)
https://www.popsci.com/technology/article/2010-11/nasa-engineerspropose-combining-rail-gun-and-scramjet-fire-spacecraft-orbit
INTRODUCTION
I believe that the future of mankind lies in the stars. Although most likely not within my lifetime, scarcity will become such a pressing issue that man must venture into the stars to find the resources to sustain our ever-growing population. Currently, the only method of sending anything into space is using the energy from chemical combustion to fuel rockets. However, this method carries significant risks due to the highly volatile nature of the fuel. Furthermore, each SLS rocket from NASA costs $450 million [1] to build. It would be unsustainable to send people into space on a regular basis at these costs, and only highly trained individuals would be allowed to fly, to minimise the risk of a failure. Yet through the use of a railgun, a spacecraft can be accelerated to thousands of km/h in a matter of seconds using the power of electricity. This would not only be significantly cheaper than conventional rockets, but much safer due to the lowered risk of explosion. Furthermore, on planets and moons with no atmosphere, a railgun can function much better than traditional chemical propellants, which may require oxygen to combust. There are also many other ways this technology can be used: weaponization, intercontinental cargo transport, interplanetary travel and much more. Although some forms of this technology may be less desirable than others, there is no doubt that railguns will play a major part in mankind's future. It was the sheer potential of such a simple device that drew me to investigate this particular topic.
https://www.videoblocks.com/video/futuristic-super-space-shuttle-1-fu5hqrx
PROJECT OVERVIEW
The primary aim of this project is to build a railgun capable of firing projectiles, and, if that succeeds, to modify it to fire a model space shuttle. The railgun would qualify as successful if it can fire projectiles at speeds comparable to handheld firearms. Although producing a firearm is not the aim of this project, since the two are very similar in that they both fire projectiles, I have decided to use the firearm as the benchmark for success. There are significant safety hazards involved with this investigation, such as electrocution and injury from projectiles. Therefore, appropriate safety measures must be taken to ensure that this investigation is conducted safely.
1. Bray, Nancy. "Space Shuttle and International Space Station." NASA, NASA, 28 Apr. 2015, www.nasa.gov/centers/kennedy/about/information/shuttle_faq.html#10.
Below, I have tabulated the risks and how I have dealt with them.

Safety Hazard: Electrocution
Safety Measures Taken:
- Wearing latex gloves when dealing with the railgun itself
- Standing on a platform to make sure that I am not acting as the ground for the circuit
- Not using voltages above 200V
- Doing any necessary calculations before charging the circuit so that I know what levels of current to expect

Safety Hazard: Projectiles
Safety Measures Taken:
- Wearing safety goggles
- Pointing the muzzle of the railgun away from people
THEORY BEHIND THE RAILGUN
The driving force behind the projectile in a railgun is magnetic propulsion due to the Lorentz force. When a current is passed through a wire, a magnetic field is formed around it. If a current-carrying wire sits in a magnetic field, a force is exerted on the wire itself; here, the projectile completing the circuit acts as that wire. Using the right-hand rule, I can determine the direction of this force (Fig 1). The railgun can work with both AC and DC current: with AC, the direction of the magnetic field also changes when the current changes, so the direction of the force is unaffected. Below is a simple circuit diagram of a railgun. As you can see, it is relatively simple, involving only 4 different symbols: the rails, the capacitors wired in parallel, the power supply and the switches. By wiring the capacitors in parallel, we can find the total capacitance by simply adding up all the individual capacitances; so, in the case of this diagram, the total capacitance would be 420µF. Fig 3 provides a closer look at how the rails propel the projectile forward.
Figure 2: Circuit Diagram of a Railgun
Figure 3: Close up of rails
MODEL RAILGUN
To test the basic theory, I built a small model railgun demonstrating the principles stated above, using the resources available in my physics class. The two rails are connected to a 9V battery, with tape securing them in place. The permanent magnets between the rails provide the magnetic field, since the projectile itself cannot produce a strong field. Note that when using permanent magnets, the direction of the current does matter, as the direction of the magnetic field cannot change; if wired incorrectly, the projectile will travel backwards. The projectile rolled along the rails, accelerating as it travelled towards the end.
I did not use capacitors for this, as I did not need large amounts of current. The fundamental difference between capacitors and batteries is that capacitors can release all of their stored charge in a short amount of time, while batteries discharge slowly over a longer period. This means that a high current will be seen if the capacitance of the capacitors is large enough. The 9V battery, by contrast, provides a constant 0.6mA of current, which is sufficient to let the projectile roll along the rails. This model was useful as it allowed me to try building and testing a railgun on a smaller scale before moving on to the real one.
Figure: The model railgun. Iron rails with permanent magnets between them and a projectile resting on top, connected to a 9V battery.
BUILDING THE RAILGUN
During the actual construction of the railgun, I faced many problems, caused mainly by a lack of planning and tools. After careful consideration of what materials to use for my actual railgun, I decided on the following:
- 10x 600V 22000µF capacitors
  o To make sure that I definitely have enough capacitance for a successful test
  o Rated at 600V to ensure that even when I charge the capacitors to 200V, there would be little risk of the capacitors being damaged, as this is well below what they were designed to store
- 1 metre by 1cm diameter copper cylinder
  o Rails that are not too thin, as thin rails would become very hot during the acceleration test
  o 1 metre long to allow me to cut out varying lengths of rails
- 20 gauge (0.81mm diameter) bare copper wire
  o Insulated wires are much safer; however, the insulation could melt during the acceleration test when using higher voltages, so I opted for bare wires instead
  o I will be placing the capacitor bank inside a plastic case in order to reduce the risk of contact
600V 22000µF capacitors
Copper rails sawn to various lengths
Bare copper wire
I ordered these materials on Taobao and had them delivered to my home. Shopping online was the easiest way to find exactly the materials I wanted, and it was also quite cheap. I used copper rails since copper is a good conductor of electricity and a relatively cheap metal. The ideal metal would have been tungsten, as it is also a very good conductor of electricity but has a much higher melting point than copper (1085℃ for copper compared to 3422℃ for tungsten). This matters because the rails heat up significantly due to the high current and the friction of the projectiles against the rails, so tungsten rails would last longer than copper ones. However, the tungsten rails were significantly more expensive, and I did not think my railgun would reach temperatures above 1000℃, so I did not purchase them. For my projectiles, I sawed 1cm long sections from the copper rails. I wired 8 capacitors in parallel for a total of 176000µF. Initially I had thought that I would need to solder the wires onto the connection points, but I decided to just wrap the wires around twice and screw them on tightly using the attached screws. The connection looked very secure, and when I charged the capacitor bank with a 14V power source, the voltmeter showed that it was indeed outputting 14V, so there were no problems with the connections here. My first issue was how to connect the wires to the rails themselves. The best option would have been to place screws into the rails to create connection points. However, this would require access to a drill, which I did not have. I attempted to solder the wires directly onto the copper rails, but the solder simply came right off once it cooled. Therefore, I decided to use tape to stick the wire onto the side of the rails. This was not ideal, because the tape might melt when exposed to high temperatures, and the surface area in contact with the rails may not be large enough and could be uneven. The next problem was fixing the rails onto a surface, parallel to each other, as in Fig 3. Once again, the best option would have been to screw the rails onto the top of the plastic box that I had, but for the same reasons outlined earlier, I could not do this. The best I could do with the available resources was to tape the rails onto the surface of the box. This was arguably one of the worst ways of fixing the rails: when firing a projectile, the rails push each other apart, so rails held down only by tape would easily move.
This would mean that the projectile would lose contact with the rails (contact is necessary to allow a current to flow through the projectile) and would stop accelerating.
Figure: The assembled railgun. The projectile and copper rails are taped to the plastic box, which houses the capacitor bank, connected to a low-voltage power supply.
After completing assembly of the railgun, I moved on to charging the capacitors. I conducted an initial test charge at 14V, with 27000Ω resistors wired in series with the rails. However, nothing happened when I placed the projectiles on the rails. I thought that the high resistance in the circuit was making the current too low, so I tried again with no resistor in the circuit. Once again nothing happened: the projectile would simply sit there and refuse to move. I was quite surprised, since my model railgun had worked very well. The ammeter was not picking up a current either, so I thought there must have been a faulty connection somewhere in the circuit. However, I ruled that out when I checked all of the wiring and made sure that everything had good contact. Therefore, I decided to calculate the force that was being exerted on the projectile, since the lack of acceleration may have been caused by there simply not being enough force. I used the formula F = BIL.
To be more precise, L should be the length of conductor, perpendicular to the magnetic field, through which current flows. In this case the magnetic field is produced by the projectile itself, so the conductors perpendicular to it are the two rails. Therefore, the value of L will simply be the length of the rails. I used one set of rails that I had cut from the copper cylinder, 0.2m in length. To calculate the current, I used the formula I = I0 × e^(-t/RC).
I0 is the initial current that the capacitors will be discharging. This can be found by V0 ÷ R. The initial voltage I used was 14V, but since I did not place a resistor in the circuit, I needed to find the value of the natural resistance within the circuit. This can be done by calculating the resistance of the copper wire, as that makes up the majority of the circuit. The resistance can be found from the resistivity using R = ρL/A.
ρ is the electrical resistivity of a particular material and is a constant. For copper at 20℃, it is 1.72 × 10^-8 Ω m. L is the length and A is the cross-sectional area of the wire. I used approximately 60cm of copper wire, which was 20 gauge, meaning a diameter of 0.81mm. By using the formula for the area of a circle, I can work out that the cross-sectional area is 0.00515cm². By substituting in the numbers, I get a resistance of approximately 0.02Ω. Using these values, we can find I0 to be 700A. The factor e^(-t/RC) can be broken down into t (the time taken for the capacitor to discharge) and RC (the time constant, after which the capacitor's charge has fallen to approximately 37% of its initial value). To find the full discharge time, we can use the graph below (Fig 4). As shown, it takes 5 time constants for the capacitor to fully discharge; it is generally accepted that a capacitor is fully discharged when it reaches 1% of its initial voltage. Therefore, the time taken for the capacitor to fully discharge will be 5RC. Substituting t = 5RC into the equation gives e^((-5RC)/RC); the RC cancels, leaving just e^-5. Using this, we can calculate the current to be 4.72A.
Figure 4: Discharging a capacitor
To calculate B, which is measured in Tesla, I used the formula B = μ0I / (2πr). μ0 is the permeability of free space and is a constant (4π × 10^-7 T m/A). r (radius) is the distance from the source of the magnetic field, which is the copper rails. I will take half the width of the gap between the rails as the radius, which will be 0.005m. With these values, we get B = 4 × 10^-5 T.
So now I can find the force acting on the projectile itself by simply substituting all the values into the original equation, which gives a force of the order of only 10^-5 N. This is a very small amount of force and explains why my projectile did not move at all. To put it into context,
to accelerate my 0.05kg projectile to the same speed as a Glock 19 (a common handheld firearm in the US), I would need a force of approximately 20N. I found this out by finding the muzzle velocity (375m/s) and using the formula F = mass × acceleration. The current level of force is only 0.0002% of that. The reason this test did not work even though my model worked may be a significantly higher level of friction between the projectile and the rails. In the model, I used a metal stick and let it roll on top of the rails. However, for the real railgun, the projectile needed to slide along the rails, with no rolling. This friction force may have been too large for the small driving force to overcome. After this, I decided to do some further calculations to see what voltage I would need to charge my capacitor bank to in order to see some movement. If I raised the voltage to 1500V (which would not be possible, both because my capacitors can only be charged up to 600V and because it violates the safety measures outlined above), I could repeat the calculation.
Even when using 1500V, the calculation yields only approximately 2.02N of force. That is only 10% of the force required to match the average handheld firearm. Furthermore, 505A of current is a fatal hazard with the potential to kill anyone who comes into contact with the circuit. After doing the calculations, I decided to stop any further testing, as I was unlikely to collect any meaningful data. One option I could have taken was to lower the mass of the projectile. However, since I had already cut the projectiles to such a small size, it was not possible to make them any smaller. Also, there was not enough time to order more materials to act as a lighter projectile.
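For anyone wishing to check the chain of calculations above (R = ρL/A, the e^-5 discharge factor, B = μ0I/(2πr), F = BIL), here is a minimal Python sketch of the 1500V scenario. It is an illustration rather than part of the original project, and small differences from the quoted figures come from intermediate rounding in the text.

import math

rho = 1.72e-8                           # resistivity of copper at 20 C (ohm m)
area = math.pi * (0.81e-3 / 2) ** 2     # cross-section of 0.81mm (20 gauge) wire (m^2)
R = rho * 0.6 / area                    # ~0.02 ohm for ~60cm of wire

MU0 = 4 * math.pi * 1e-7                # permeability of free space (T m/A)
r_gap = 0.005                           # half the gap between the rails (m)
rail_len = 0.2                          # length of the rails (m)

def discharged_current(v0):
    # After t = 5RC the exponential factor e^(-t/RC) reduces to e^-5
    return (v0 / R) * math.exp(-5)

print(f"R = {R:.3f} ohm")                                # ~0.020 ohm
print(f"I at 14 V:   {discharged_current(14):.2f} A")    # ~4.7 A, as calculated above

i_hv = discharged_current(1500)                          # ~505 A
b_hv = MU0 * i_hv / (2 * math.pi * r_gap)                # ~0.02 T at the projectile
print(f"I at 1500 V: {i_hv:.0f} A, F = {b_hv * i_hv * rail_len:.2f} N")  # ~2 N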
CONCLUSION AND REFLECTION
Overall, I believe this project was unsuccessful. One limiting factor was the lack of tools to fix the rails properly and to provide connection points for the wires. If I had had access to those tools, the assembly of the product would have been easier. Secondly, if I wanted the railgun to fire projectiles at a faster speed, I would need to increase the voltage and therefore the current. To put it simply, the more successful the railgun, the more dangerous the experiment becomes. Since I conducted this project in a school environment, I could not use voltages above 200V, which limited the effectiveness of the railgun. Even if I had managed to build a railgun with properly secured rails, without higher voltages the projectile would not be fired at a satisfactory speed. The third and final reason is none other than myself. If I had not left this project until the very last minute, I might have had time to procure the appropriate tools, and I might have been able to conduct high-voltage experiments at home instead by purchasing a high-voltage DC power supply. That way, I might have been able to move on to my secondary objective of using the railgun to fire model planes, to look at the practicality of transportation using railguns in the future. However, there were some successes in this project as well. I was able to develop my independent research skills in a topic I was unfamiliar with. Even though I do not take physics as a subject, I wanted to test my ability to undertake a project in a field where I do not have much background knowledge. Since there were not many specifics available about calculating the force exerted by a railgun, I needed to weave together pieces of information from my sources to formulate a series of equations that would allow me to calculate the force exerted on the projectile. Furthermore, most of the resources used were to supplement my understanding of what a railgun is and how it works in detail, rather than the railgun artefact itself. This is because the railgun itself is very simple, and it is relatively easy to understand the forces behind it. However, since this is a relatively new field of research, finding resources on the specifics of the railgun, such as how to calculate the force properly or how to build one safely by yourself, proved quite difficult. I also realised that I should have done significantly more preliminary work before undertaking this experiment, as the calculations showed that the experiment was unlikely to be feasible. If I had done this research prior to ordering the parts for the railgun, I could have made alterations to the product design, or instead completed a non-experimental research paper.
https://www.physics.manchester.ac.uk/research/themes/
BIBLIOGRAPHY
Websites
• Bray, Nancy. "Space Shuttle and International Space Station." NASA, NASA, 28 Apr. 2015, www.nasa.gov/centers/kennedy/about/information/shuttle_faq.html#10
• "Railgun Physics." Maritime Theater, Massachusetts Institute of Technology, web.mit.edu/mouser/www/railgun/physics.html
• Britannica, The Editors of Encyclopaedia. "Lorentz Force." Encyclopædia Britannica, Encyclopædia Britannica, Inc., 8 June 2017, www.britannica.com/science/Lorentz-force
• Hewes, John. "Capacitance." Electronics Club - Capacitance - Uses, Charge, Discharge, Time Constant, Energy Stored, Series, Parallel, Capacitor Coupling, Reactance, electronicsclub.info/capacitance.htm
• "Magnetic Field Formula." Math, www.softschools.com/formulas/physics/magnetic_field_formula/343/
• "Wire Gauge Converter - AWG versus Square Mm." Engineering ToolBox, www.engineeringtoolbox.com/awg-wire-gauge-d_731.html
• "Resistivity and Electrical Conductivity." Basic Electronics Tutorials, 16 Aug. 2018, www.electronics-tutorials.ws/resistor/resistivity.html
• "Glock." Wikipedia, Wikimedia Foundation, 5 May 2019, en.wikipedia.org/wiki/Glock
Images
• Figure 1: J, Sebastian. "Three Right Hand Rules of Electromagnetism." Arbor Scientific, www.arborsci.com/cool/three-righthand-rules-of-electromagnetism/
• Figure 2: "Rail Accelerator." Lasers, Technology, and Teleportation with Prof. Magnes, 26 Feb. 2014, pages.vassar.edu/ltt/?tag=rail-accelerator
• Figure 3: "Homemade Railgun Experiment." Do It Yourself Gadgets, 4 Oct. 2013, www.doityourselfgadgets.com/2013/10/homemade-railgun.html
• Figure 4: Hewes, John. "Capacitance." Electronics Club - Capacitance - Uses, Charge, Discharge, Time Constant, Energy Stored, Series, Parallel, Capacitor Coupling, Reactance, electronicsclub.info/capacitance.htm
"The work may be hard, and the discipline severe; but the interest never fails, and great is the privilege of achievement."
John William Strutt, 3rd Baron Rayleigh, Old Harrovian