Young Scientists Journal - Issue 21

Page 1

GHOSTLY OVERTONES Artificial Harmonics on the Violin

Deadly and Frequent?

c-Myc Expression

Greener ≠ Cleaner

Investigating the Epidemiology of Borrelia Burgdorferi in the Dorset Area

Investigating How mRNA Capping Enzyme Regulates the c-Myc Oncogene

Possible Influence of Plant Clusters Mounted Next to Waters on the Water Quality

2018 | ISSUE 21 | WWW.YSJOURNAL.COM


The Team Young Scientists Journal is run by an international network of young scientists, making us the only peer-reviewed science journal of our kind.

Peter He

Stewart McGown

Chanon Olley

Chief Editor

Head of Production

Head of Outreach

Peter is the proverbial captain of the metaphorical YSJ ship. In his spare time he plays guitar for the post-good band Complete Amateurs. Based in London, England.

Stewart builds all our systems. Sometimes he even manages to make them work. Based in St Andrews, Scotland.

Chanon is in charge of the journal’s outreach efforts. He has a passion for the STEM subjects, especially Computer Science. Based in Kent, England.

Hamza Waseem

Wietske Holwerda

Julie Hu

Head of Editorial

Head of Marketing

Creative Director

Hamza has extensive editorial experience and is passionate about science communication and outreach. Based in Lahore, Pakistan.

Wietske is in charge of the journal’s marketing and social media accounts. Based in London, England.

Julie is the journal’s lead artist, creative director and maker of this issue’s cover art. She has worked with institutions including Tsinghua University Press and has been featured by the Russian Embassy to the USA. Based in Beijing, China.

Production Team The Production team oversees the journal’s website and internal infrastructure. Awn Umar Alex Choi Charlotte Soin Josh Ascroft Laura Patterson

Marketing Team

Outreach Team

The Marketing team is in charge of the journal’s social media accounts and public relations. The Outreach team manages the journal’s relationships with schools, corporates and hubs around the world.

Lowena Hull Joanna Olagundoye Niamh Alexander Sarina Tong Umer the Nameless Eva Goldie Emily Wallace Katie Foreman †

Eleanor Gibbon

Alexandra Nelson Matthew Schaffel Shashwat Kansal Josh Williams Robert Gregson Deepro Choudhury Akorede Kalejaiye †

Alicia Middleton Caroline Chen Frank Chen Katharina Thome Mattia Barbarossa Mhairi McCann †

Editorial Team The Editorial Team form the beating heart of the journal. They are responsible for the review and publication of submissions. They are led by Executive Editors Paul Karavaikin (physical and mathematical sciences) and Tavleen Kaur (life sciences). Abhiram Bibekar Abhishek Ghosh Alex Gao Alexander Chen Aliyah Adam † Anne-Rosa Bilal † Apurv Shah Benjamin Schwabe Bethany Lai Brendan Huo Brigitte Wear

Daniyal Ashraf Danyal Abbas Darpan Rekhi David Jay Esther Choe Habibat Olaniyi Hajra Hussain Halima Mansoor Ho Kiu Ngan Jebin Yoon Jemisha Bhalsod

Jiangmin Hou Joanna Olagundoye John Mulford Katharina Thome Katie Foreman † Katie Savva † Krishan Ajit Kyle Newman Lara Gubeljak Lulu Beatson Lydia Sebastian

Marcus Thome † Martand Bhagavatula Mobin Ibne Mokbul Neha Khimani Nicole Mitchell Nnaemeka Ede Rameen Zulfiqar Ryan Kang Saheefa Ishaq Samir Chitnavis Sankha K.Gamage †

Sonya Lebedeva Sophie Stürmer Tamanna Dasanjh Tito Adesanya Susan Chen Yee Kwan Law † Yusuf Adia

† Member of the Issue 21 Print Task Force.


Editorial

Not long ago, I was invited with a group of friends to a dinner down in Kingston-upon-Thames at the recently-opened-but-not-overtly-hipster gastropub The Canbury Arms. Having tucked into our starters and become engrossed in conversation, we were roused by the ding-dinging of a glass – it was time for the dinner speech. The speaker was none other than the (I for one would describe as ‘legendary’) classicist (and not bassist) John Taylor, author of the disappointingly little-read but much-loved Greek to GCSE duology. Among other things, he began recounting the story of a certain thief out of Herodotus’ Historiae, which I shall now share with you in brief.

A certain king named Rhampsinitos realises that his wealth is so great that there is no chance he’d be able to keep it all safe in his palace. He therefore commissions a builder to construct a secure room so that he can avoid cluttering up his palace while having peace of mind that his treasures aren’t being stolen. Little does he know that the builder leaves a single loose brick in the wall of the keep. Sometime later, the builder falls ill and, on his deathbed, he tells his two sons about the brick. The sons find the brick and begin emptying the king’s keep. It doesn’t take long for the king to notice and become troubled at the fact that his mountains of gold are being reduced to mere hills while, at the same time, there are no signs of a break-in. He thus orders that traps be set up to capture the intruders. When night comes, the two brothers sneak into the keep, only for one of them to become ensnared in a trap. Wishing not to blow their cover, the ensnared brother begs that he be decapitated, a request met by the other with surprising robustness and gusto. The next morning, the king discovers a headless corpse in his keep but is unable to identify to whom it belonged. Angered, he displays the corpse in the market square and orders his guards to arrest anyone seen weeping nearby. At the same time, the remaining brother returns home bloodied and carrying the head of his late sibling. Naturally, his mother isn’t overly impressed and orders him to bring back the rest of his brother lest she report him to the magistrates.

The young man thus comes up with a plan. On a hot day, he loads some donkeys up with wine and walks them to the market square. As he passes the corpse of his brother, he kicks one of the donkeys, causing the wine to spill out onto the ground. He begins cursing and the nearby guards come down to try to console him. They help the donkey up and he offers them wine in return in a sort of oh-well-we-mustn’t-let-perfectly-good-wine-go-to-waste fashion. The guards draw wine cups they just happened to have on them (as they did back then) and are soon dissolved in sleep and wine. The thief uses this as an opportunity to take down the body and hightail it back home.

The king is now infuriated and, in a last-ditch attempt to capture the thief, he orders his daughter to go down to the royal brothel and make her lovers reveal to her their most evil deed. Fuelled by what were the king’s riches, the thief now lives a lavish lifestyle that involves going down to the royal brothel from time to time. It doesn’t take long for him to get things on with the princess and, in the dark, he reveals to her that he is the fabled thief. She grabs onto him and calls the guards. You might think that this is how the story ends, but it transpires that the thief had somehow anticipated this move and brought the severed arm of his brother to the brothel with him, allowing him to escape while the princess clutches onto the decaying body part. The king, now confounded at the sheer cunning of the thief, gives up and publicly offers him his daughter’s hand in marriage, which he comes out and accepts. They live happily ever after.

I’m now going to draw an incredibly tenuous link between this allegedly-true story and the operation of the journal. While some would argue that the moral of the story is that those in power are morons, I wish to spin it as a message that we ought to recognise innovation and ingenuity. The journal’s undergone a great deal of change over the past few months: Dr. Dawn Leslie taking over mentorship of the journal from Christina Astin; the rolling-out of our new website and article management systems developed in-house by the Production Team; our first northern conference; and (perhaps the reason why it took us so long to put together this issue) the departure of long-time member and resident one-man print production team, Michael Hofmann, for alas, he has within Time’s bending sickle’s compass come (he didn’t die, he turned 22). I would thus like to use this opportunity to thank everyone on the ad hoc print issue task force who contributed to this issue – through our collective gumption we’ve tackled the seemingly impossible and have this here journal to show for it. Great work!

And, on that bombshell, I open this 21st issue of Young Scientists Journal. Enjoy!

Peter He

Chief Editor

chief.editor@ysjournal.com


Contents Issue 21 | February 2018

02 | Investigating the Epidemiology of Borrelia Burgdorferi in the Dorset Area

05 | Investigating How mRNA Capping Enzyme Regulates the c-Myc Oncogene

09 | Possible Influence of Plant Clusters Mounted Next to Waters on the Water Quality

11 | To Frack or Not to Frack, That is the Question

13 | Detection of Heavy Metals in Honey Samples using Inductively Coupled Plasma Mass Spectrometry

16 | Artificial Harmonics on the Violin

25 | Time is of the Essence

30 | The Grand Unification

35 | Potential Uses and Benefits of Hillwalking as Cardiovascular Exercise

40 | Testing Theories of the Origin of Language on Indonesian

44 | The Imminence of the Nosocomial Pathogen Acinetobacter Baumannii

53 | The Effect on Mechanical Performance of 3D Printed Polyethylene Terephthalate Glycol Structures Through Differing Infill Pattern

60 | Suspending a Dipole Radar Scanner From a Helicopter – Improving Methods of Evaluating Glacial Water Resources

It’s What’s Inside That Counts... When it comes to 3D printing, there are a load of infills you can use. But which is strongest? Page 53

Push it to the Limit Walk along the razor’s edge. If said razor is at an incline, it may yield cardiovascular benefits. Page 35

Keeping Track of the World’s Glaciers Scanning glaciers in the cold Antarctic is hard. This new approach may make things a bit easier. Page 60

Which of These Shapes is a Kiki? It seems that regardless of culture, people assign the leftmost shape ‘kiki’. Wherefore? Page 40


INVESTIGATING THE EPIDEMIOLOGY OF BORRELIA BURGDORFERI / RESEARCH

Investigating the Epidemiology of Borrelia Burgdorferi in the Dorset Area

Madeleine Webster-Harris (17) surveys ticks in the Dorset area in search of Lyme Disease.

Abstract

The aim of our project is to use analytical techniques to identify the presence of Lyme borreliosis, commonly known as Lyme Disease, in ticks and to investigate the distribution of Lyme borreliosis in the Dorset area. Results are presented from successful and accurate gel electrophoresis using Safe Blue indicator. Although only one positive result for the disease was identified, the skills we learnt while completing this project were invaluable.

Introduction

Lyme borreliosis is caused by spirochetal bacteria from the genus Borrelia; the official species is Borrelia burgdorferi. Spirochetes are surrounded by peptidoglycan and flagella, along with an outer membrane. The spiral-shaped bacteria have flexible, pliable bodies that are moved by flagella: long, lash-like appendages that protrude from the cell body. With the aid of flagellar proteins they rotate, allowing the bacterium to move in its host environment. The bacterium differs from other known bacteria in its unique structure: it has a three-layer cell wall, which helps determine the spiral shape of the bacteria. It also has a clear, gel-like coat of glycoproteins which surrounds the bacteria, acting as protection to prevent them from being detected by the immune system. Therefore, the disease is hard to identify in humans. Furthermore, the bacteria have replications of specific genes known as “blebs” which are released from the bacterium into the host and can irritate the immune system. Finally, when entering a cell, the bacterium releases digestive enzymes that dissolve the cell. When these are targeted at the immune system and T-lymphocytes, the immune response is weakened and the disease is even harder to identify. When a human is bitten by an infected tick, the microbes travel through the bloodstream to the heart. Ticks are infected by deer and other mammals, causing large numbers to accumulate in particular areas, thus making the threat to humans extremely dangerous. But just how dangerous is it to those living in the Dorset area?[1]

Identifying PTC and our results. Lane 2 contains the sample belonging to a person with Sickle Cell Anaemia

Gathering the Ticks

The ticks were sent in by volunteers across the Dorset area. We ended up with over 400 ticks; these were randomly reduced to a sample of 120 to avoid bias. The ticks were then profiled and recorded for future reference, including each tick’s sex, size and developmental stage. Female ticks are generally larger and have dark brown heads and upper bodies, while males are smaller.[2] The ticks were then assigned numbers to identify them. We began our project by identifying the PTC allele and Sickle Cell Anaemia in anonymous volunteers to develop and perfect our skills. This was crucial as the DNA is easily contaminated and the method requires extreme precision. Without this shorter investigation, the results we obtained would not have been as reliable as they were.

The gel electrophoresis for Sickle Cell Anaemia. It is the second lane from the top. Sickle Cell Anaemia only has two bands as it is recessive, and can be identified easily in this way



Method

How Does PCR Work?

Polymerase chain reaction (PCR) is “a technique used in molecular biology to amplify a single copy or a few copies of a segment of DNA across several orders of magnitude, generating thousands to millions of copies of a particular DNA sequence”.[3] Typically, PCR consists of a series of 20-40 repeated temperature changes, called cycles, with each cycle commonly consisting of two or three discrete temperature steps.[4] DNA primers are short sections of DNA that are complementary to the 3’ ends of the segment of DNA you want to elongate. Through complementary base pairing, one primer attaches to the top strand and the other to the bottom. DNA polymerase, a naturally occurring complex of proteins, attaches to the primer and starts to add nucleotides to it, thus creating copies of the DNA segment.[5] The tick DNA primer is the cytochrome c oxidase primer, and the Borrelia DNA primer is the B. burgdorferi 16S rRNA primer, which gives a 129-base product. By the 35th cycle there are 68 billion copies of the DNA segment that can be analysed by gel electrophoresis.
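The doubling arithmetic behind that last figure is easy to check. The short sketch below is illustrative only: it assumes perfect amplification efficiency and a single double-stranded template (two strands) to start from, which is not part of the original protocol.

```python
# Illustrative sketch only: idealised PCR doubling, assuming perfect efficiency
# and a single double-stranded starting template (2 strands).
def pcr_copies(cycles: int, start_copies: int = 2) -> int:
    """Number of DNA copies after a given number of idealised PCR cycles."""
    return start_copies * 2 ** cycles

for n in (10, 20, 30, 35):
    print(f"cycle {n}: {pcr_copies(n):,} copies")
# cycle 35 gives 68,719,476,736 copies, i.e. the ~68 billion quoted above
```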

How Does Gel Electrophoresis Work?

Gel electrophoresis is a technique commonly used in laboratories to separate charged molecules like DNA according to their size. Charged molecules move through a gel when an electric current is passed across it: the current is applied so that one end of the gel has a positive charge and the other end has a negative charge. The movement of charged molecules is called migration; molecules migrate towards the opposite charge, so a molecule with a negative charge will be pulled towards the positive end. The gel consists of a permeable matrix through which molecules can travel when an electric current is passed across it. Smaller molecules migrate through the gel more quickly and therefore travel further than larger fragments, which migrate more slowly and travel a shorter distance. As a result, the molecules are separated by size.[5] DNA is made up of molecules called nucleotides. Each nucleotide contains a phosphate group, a sugar group and a nitrogen base: adenine (A), thymine (T), guanine (G) or cytosine (C).[6] A phosphate group is a functional group comprised of phosphorus attached to four oxygen atoms with a net negative charge, represented as PO4−. It is due to the negatively charged phosphate group that the DNA molecule as a whole is polar and travels from the negative end of the chamber to the positive. The DNA ladder contains bands at known lengths; using a positive Borrelia control, you can see how far the Borrelia gene travels against the ladder and thus whether any of our ticks gave positive results, as these will be found at the same distance.

Our results show that the stigma surrounding Lyme Disease being “deadly and frequent” is inaccurate

Results

After 2 years’ worth of analysing ticks from across the Dorset area we only found one positive result for Lyme disease. Lanes 1 and 8 are DNA ladders, and lane 2 is a positive result for Lyme disease. In lane 5 there is one band for tick DNA and just above there is a faint band at the same distance as the positive result, meaning this tick had Lyme disease. The Borrelia gene is lighter and thus travelled further than the tick DNA, and the result is also fainter. The tick was number 48 and was donated to us from the location marked on the map.
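To make the ladder comparison above concrete: migration distance on a gel is roughly linear in the logarithm of fragment length, so a ladder of known sizes can be used to estimate the size of an unknown band. The sketch below uses made-up ladder distances purely for illustration; it is not data from this experiment.

```python
# Illustrative only: estimating fragment size from migration distance using a ladder.
# Migration distance is roughly linear in log10(fragment size) on an agarose gel.
import numpy as np

ladder_bp = np.array([100, 200, 500, 1000])     # hypothetical ladder fragment sizes (bp)
ladder_mm = np.array([42.0, 33.5, 22.0, 13.5])  # hypothetical migration distances (mm)

slope, intercept = np.polyfit(ladder_mm, np.log10(ladder_bp), 1)

def estimate_size_bp(distance_mm: float) -> float:
    """Estimate fragment size (bp) from how far a band travelled down the gel."""
    return 10 ** (slope * distance_mm + intercept)

print(round(estimate_size_bp(39.0)))  # ~128 bp for a band at 39 mm on this hypothetical gel
```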

References

1. “The Complexities of Lyme Disease.” The Complexities of Lyme Disease. Accessed August 9, 2017. https://www.lymeneteurope.org/info/the-complexities-of-lyme-disease.
2. “TickEncounter Resource Center.” Tick Encounter Resource Page. Accessed August 9, 2017. http://www.tickencounter.org/tick_identification/dog_tick.
3. “PCR.” Learn Genetics. Accessed August 9, 2017. http://learn.genetics.utah.edu/content/labs/pcr/.
4. W. Rychlik, W.J. Spencer, and R.E. Rhoads. “Optimization of the annealing temperature for DNA amplification in vitro.” Nucleic Acids Research 18, no. 21 (1990): 6409-412. Accessed August 9, 2017. doi:10.1093/nar/18.21.6409.
5. “What is Gel Electrophoresis?” YourGenome.org. January 25, 2016. Accessed August 9, 2017. https://www.yourgenome.org/facts/what-is-gel-electrophoresis.
6. Rettner, Rachael. “DNA: Definition, Structure & Discovery.” LiveScience. Accessed August 9, 2017. https://www.livescience.com/37247-dna.html.

The distribution of ticks we collected

The gel electrophoresis of our only positive result; the double band is in Lane 5 and aligns with the positive control in Lane 2

Conclusion


From our results, 1 out of 120 sampled ticks tested positive for Lyme disease. Our results were reliable due to the extreme caution taken during the experiment and trustworthy equipment such as pipettes allowing us to measure accurately in microlitres. Our results show that Lyme disease is not at all common in the Dorset area and that the stigma surrounding it of being “deadly and frequent” is inaccurate, as the chance of finding an infected tick was 1/120 in our research. This means that tourists and nature enthusiasts are free to walk the scenic landscape of Dorset without worrying about the threat of Lyme disease.
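For readers wanting a sense of how much certainty a single positive out of 120 ticks supports, an exact binomial confidence interval can be put around the 1/120 figure. The sketch below is an illustrative calculation added here, not part of the original study, and assumes the 120 ticks are a representative sample.

```python
# Illustrative only: exact (Clopper-Pearson) 95% interval around 1 positive in 120 ticks.
from scipy import stats

positives, n = 1, 120
point_estimate = positives / n  # ~0.83 %
lower = stats.beta.ppf(0.025, positives, n - positives + 1)
upper = stats.beta.ppf(0.975, positives + 1, n - positives)
print(f"prevalence ≈ {point_estimate:.2%}, 95% CI ≈ {lower:.2%} to {upper:.2%}")
# roughly 0.02 % to 4.6 %: still rare, though a single positive leaves a wide range
```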


BIOGRAPHY

Madeleine Webster-Harris, 17, UK

Madeleine is a 17-year-old A-Level student at the Thomas Hardye School. She is working towards studying molecular biology at university; her current subjects are Biology, Chemistry, Psychology and Theatre Studies.


RESEARCH / HOW MRNA CAPPING ENZYME REGULATES THE C-MYC ONCOGENE

Investigating How mRNA Capping Enzyme Regulates the c-Myc Oncogene

Maria Pisliakova (17) investigates whether the c-Myc oncogene, which is overexpressed in cancers and tumours, potentially holds the key to their therapies.

Abstract

The c-Myc gene encodes a protein which is a key player in the regulation of gene expression, and whose malfunction and mutation are associated with many types of cancer. Currently, no therapies targeting c-Myc are available, so this gene is a topic of intense research. Recently it was shown that the mRNA Capping Enzyme regulates c-Myc protein expression, but the mechanism is still largely unknown. Through biochemical and molecular biology experiments, we can deepen our understanding of c-Myc oncogene regulation, which in turn brings us a step closer towards understanding the mechanisms of cancer and could potentially lead to new therapies. This paper will outline a potential mechanism by which Capping Enzyme regulates c-Myc gene expression, discovered through a Nuffield Research Project.

Introduction

The central dogma of biology states that genes encoded in the DNA get transcribed into mRNA transcripts, which are translated into proteins (Figure 1). Gene expression is a tightly regulated process. One of the key steps is the addition of a cap structure (mRNA Cap) to the 5’ end of the mRNA by mRNA Capping Enzymes. This cap serves as a marker for further translational steps and prevents mRNA degradation by nucleases[1].

Figure 1: c-Myc transcription and translation

Genetic mutations can spontaneously occur in genes within the cell, which change the gene expression and can cause cells to proliferate uncontrollably. Cancer and tumour formation is the result of at least two mutations in cancer-associated genes occurring within a cell, because cells have ‘tumour-suppressor’ mechanisms for dealing with a single mutation. For example, the first mutation could activate a gene that drives cancer and the second mutation could be one that disables a tumour-suppressor mechanism, causing this cell to become cancerous and, after several cell divisions, form a tumour. It is crucial to understand how these cancer-causing genes (such as c-Myc when malfunctioning) are regulated. Only when we thoroughly research this can we produce a range of drugs and therapies that specifically target these mechanisms (known as targeted cancer therapies). In theory these drugs would be much more efficient and less toxic than traditional cancer drugs or chemotherapies.

c-Myc

The c-Myc oncogene is an important gene in the human body, which controls over 15% of our genome[2]. The c-Myc gene expresses the c-Myc protein, which carries out many important cell functions such as growth, division and gene expression. In healthy cells, the c-Myc gene maintains a normal proliferation rate. However, it is thought that mutation or deregulation of the c-Myc gene occurs in 50% of all cancers, and often contributes to tumour initiation and progression. The c-Myc protein is challenging to target therapeutically in cancer patients since it plays an important regulatory role in normal cells, so it is unfavourable to simply ‘block it’[3]. Moreover, the c-Myc protein lacks an enzymatic active site which can be readily inhibited by small-molecule drugs. Therefore, current research is focused on the proteins regulating the gene (e.g. Capping Enzyme and CRD-BP). These proteins may be more accessible to target and could lead to new methods of treating cancer. Targeted therapies may have an advantage over traditional chemotherapies, which cause a lot of damage to healthy cells. In a project, I investigated the various ‘stages’ of c-Myc expression such as c-Myc protein and mRNA transcription. It has been shown that the mRNA capping enzyme regulates the expression of c-Myc[4], but it is currently unknown whether this regulation is direct or indirect, i.e. mediated by other proteins that mRNA Capping Enzyme regulates.

Hypothesis

It has been found that when Capping Enzyme is knocked down, c-Myc protein levels are subsequently decreased. But how does this happen? These were the hypotheses:

• Capping Enzyme regulates c-Myc mRNA stability, protein stability or both.
• Capping Enzyme regulates other proteins/genes that can in turn be acting on c-Myc (e.g. CRD-BP). CRD-BP (Coding Region Determinant – Binding Protein) is a protein bound to the CRD on the c-Myc mRNA at the 3’ end, and regulates c-Myc mRNA stability (by either binding to the mRNA or not). As Capping Enzyme has the potential to regulate a number of genes, CRD-BP may be one of them, and thus c-Myc may be regulated this way. (See Figure 2)

Figure 2: Potential c-Myc regulation through CRD-BP

Methods

To investigate the mechanism by which c-Myc is regulated two experiments were performed.

Experiment 1

A Capping Enzyme knockdown was performed to investigate its effect on c-Myc mRNA levels. Capping Enzyme, c-Myc and a control gene were knocked down (this means the gene will stop being expressed) in HeLa cells using siRNA transfection methods with the help of an RNAiMax lipid-based transfection reagent, and incubated for 72 hours before performing RT-qPCR and analysing the genes of interest: c-Myc, Capping Enzyme, CRD-BP and GAPDH (a house-keeping gene used as a control). siRNAs are non-coding RNAs which bind to mRNA molecules (with fully or partly complementary sequences) and reduce their stability, thus controlling the levels of these mRNAs present in the cells[5]. Transfection is the process of introducing foreign genetic material into a cell. In transfections, liposomes are used (lipid bilayers that form particles around the siRNA). The lipid structure is similar to that of a cell membrane, and fuses with the cell membrane, releasing its contents. The siRNAs then interfere with the specific target sequence and ‘block’ the gene’s mRNA. This stops the gene’s expression, whose effect can be observed on other genes or components in the cell. RT-qPCR is a technique used to amplify nucleic acids such as RNA to observe their quantity relative to a control in the cell†. The mRNA expression levels obtained were normalised to GAPDH. Three technical replicates were performed and the average expression is shown in Figure 3. The table was further processed into Figure 4, where the relative expression of genes is shown when the genes of interest were knocked down. The error bars represent the variation between technical replicates as determined by the standard error of the mean (SEM), n=3.

Experiment 2

In this experiment, the effect of Capping Enzyme knockdown on c-Myc protein stability was investigated. A Capping Enzyme knockdown, alongside a control, was performed on HeLa cells using an siRNA transfection method, and the cells were incubated for 72 hours before being treated with a protease inhibitor (this stops all protein degradation in a cell, including that of c-Myc) and its control for two hours. The protein concentration of each sample was measured using a Bradford Protein Assay (used to quantify protein concentrations in cells), and SDS-PAGE (a method which separates proteins based on their size and allows their relative quantification and detection) was conducted to obtain the relative concentrations of c-Myc protein and Actin, a control protein, in the cells investigated. The intensity of the bands was quantified (Figure 5) and the results were further processed into Figure 6.

Figure 3: Table of results from Experiment 1

† Example: suppose the control was 100%, and an abundance of 60% for a protein was obtained – this means that the levels of its expression decreased in the cell.
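For readers unfamiliar with how “expression relative to a control, normalised to GAPDH” is computed, the sketch below shows one common route (the 2^-ΔΔCt method) applied to made-up Ct values. The article does not state which quantification method was used, so treat the pipeline and the numbers as assumptions for illustration only.

```python
# Illustrative sketch: one common way to turn raw RT-qPCR Ct values into relative
# expression normalised to GAPDH. The method (2^-ΔΔCt) and the Ct values below are
# assumptions for demonstration, not the article's actual pipeline or data.
import statistics

def relative_expression(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    """Fold change of a gene vs. the control knockdown, normalised to GAPDH."""
    delta_ct = ct_gene - ct_gapdh                 # normalise to the housekeeping gene
    delta_ct_ctrl = ct_gene_ctrl - ct_gapdh_ctrl  # same for the control sample
    return 2 ** -(delta_ct - delta_ct_ctrl)       # 2^-ΔΔCt

# Three hypothetical technical replicates for c-Myc after Capping Enzyme knockdown
replicates = [relative_expression(g, h, gc, hc)
              for g, h, gc, hc in [(24.1, 18.0, 23.0, 18.1),
                                   (24.3, 18.1, 23.1, 18.0),
                                   (24.0, 17.9, 22.9, 18.0)]]
mean = statistics.mean(replicates)
sem = statistics.stdev(replicates) / len(replicates) ** 0.5  # the error bars in Figure 4
print(f"relative c-Myc expression: {mean:.2f} ± {sem:.2f} (SEM, n=3)")
```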





Figure 4 (Top Left): The genes of interest are shown along the x-axis, and their expression is shown relative to a control (y-axis). The colours represent the siRNA knock-downs. E.g., if Capping Enzyme was knocked down, the c-Myc gene expression decreased (compare the red bar to the blue control bar above the C-MYC gene)

Figure 5 (Bottom): Table of Results from Experiment 2

Figure 6 (Top Right): c-Myc protein levels

Analysis

Experiment 1

In the Capping Enzyme knockdown (red bars), it is observed that Capping Enzyme controls c-Myc expression (i.e. mRNA stability): when Capping Enzyme is knocked down, the expression of c-Myc decreases, which confirms previous studies[1]. It is also seen that CRD-BP expression levels are decreased when Capping Enzyme is knocked down; this suggests a possible mechanism by which Capping Enzyme controls c-Myc expression. This has not been previously observed and may be a novel way in which the c-Myc oncogene is regulated.

Experiment 2

When Capping Enzyme is knocked down and cells are treated with the protease inhibitor, c-Myc protein levels still decreased. From previous studies we know that when Capping Enzyme is knocked down, c-Myc protein levels decrease[4]. If Capping Enzyme were regulating c-Myc protein stability/degradation, c-Myc protein levels would not have been affected by the Capping Enzyme knockdown in the presence of the inhibitor, as this treatment stops all protein degradation. However, since c-Myc protein levels do decrease, this is an indication that Capping Enzyme does not regulate c-Myc protein stability/degradation. Thus another mechanism is at work (and this could potentially be through the regulation of CRD-BP by Capping Enzyme, affecting c-Myc mRNA stability indirectly).

Conclusion and Outlook

Through various molecular biology and biochemistry techniques, I have determined that:
1. Capping Enzyme is responsible for c-Myc expression regulation (i.e. mRNA stability)
2. Capping Enzyme does not regulate c-Myc protein stability.
I have also suggested that Capping Enzyme might indirectly regulate c-Myc protein levels through the mRNA binding protein, CRD-BP. These results have contributed to our knowledge of Capping Enzyme regulation of c-Myc and its stability. Performing biological replicates of these experiments would confirm these results. They could potentially lead to novel targeted therapies for cancer. This field of work can be extended along several directions: the mechanism between CRD-BP and c-Myc could be investigated, as could whether targeting CRD-BP can kill cancer cells which are driven by the c-Myc gene.





Glossary

• CRD – Coding Region Determinant, the region to which the CRD-BP (Coding Region Determinant – Binding Protein) binds
• Gene – sequence of DNA that codes for a protein
• Nuclease – an enzyme in cells that degrades nucleic acids
• Oncogene – a cancer-causing gene
• Proliferation – cell growth and division
• Protease – an enzyme which breaks down protein in the cell
• Transcription – the process by which a complementary mRNA strand is made from the DNA in the nucleus
• Translation – the process by which the mRNA gets translated into protein in the ribosomes located in the cytoplasm

Acknowledgements

I would like to thank the Nuffield Foundation and University of Dundee for giving me an opportunity to carry out my research project. I would also like to thank my scientific supervisor Dr Victoria Cowling, my two outstanding supervisors: Olga Suska and Olivia Lombardi for giving up their time and supporting me throughout my project (and for supplying the placement with lots of laughter and Wotsit celebrations), and all the other members of the lab (Fran, Aneesa, Alison, Jo and Dhaval) for making this project inspirational and amazing!

References


1. Dunn, Sianadh, and Victoria H. Cowling. “Myc and mRNA capping.” Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms 1849, no. 5 (2015): 501-05. doi:10.1016/j.bbagrm.2014.03.007.
2. “C-myc gene.” April 02, 2015. Accessed February 3, 2017. https://www.youtube.com/watch?v=O3mIutVwXtM.
3. “Targeting Myc.” Cancer Research UK. June 23, 2017. https://www.cancerresearchuk.org/funding-for-researchers/how-we-deliver-research/grand-challenge-award/challenge6.
4. Lombardi, Olivia, Dhaval Varshney, Nicola M. Phillips, and Victoria H. Cowling. “C-Myc deregulation induces mRNA capping enzyme dependency.” Oncotarget, 2016. doi:10.18632/oncotarget.12701.
5. “Regulation after transcription.” Khan Academy. Accessed February 3, 2017. https://www.khanacademy.org/science/biology/gene-regulation/gene-regulation-in-eukaryotes/a/regulation-after-transcription.


BIOGRAPHY

Maria Pisliakova, 18, UK

From a young age, Maria has been exposed to numerous scientific environments in different countries, which provided her with the inquisitive nature she shows today. She participated in the Nuffield Research Scheme in 2016, and the 2017 National Science and Engineering Finals. In her spare time, Maria enjoys reading and spending time with her sister.


RESEARCH / INFLUENCE OF PLANT CLUSTERS MOUNTED NEXT TO WATERS

Possible Influence of Plant Clusters Mounted Next to Waters on the Water Quality Katharina Thome (14) demonstrates that plants stored next to small urban waters can affect the water quality.

Abstract

The eutrophication of natural waters is a common problem. Eutrophication describes the accumulation of nutrients (especially phosphorus and nitrogen compounds) which stimulates the growth of organisms (mainly phytoplankton). This results in a lack of oxygen and ultimately in the death of living beings in the water. This study investigates the effect of stored plants next to natural waters on the eutrophication tendency. The results have shown that it is advisable to move the cut plants from the edge of the water quickly.

Keywords: eutrophication, biochemical oxygen demand, BOD5 test, urban water bodies, stored plants, plant juice

Introduction

Cut plants are often left on the banks of urban water bodies. We suspect that the outward-oozing plant liquid may affect the quality of the water, which can increase the eutrophication tendency.[1][2][3][4][5][6]

Method

The plant juice running out from a pile of plants was studied with the help of the five-day biochemical oxygen demand (BOD5) test.[3] The BOD5 test determines the level of oxygen consumption within five days. The less oxygen consumed, the higher the water quality (or the less likely it is to be burdened with organic substances). A high level of oxygen consumption is more likely to result in eutrophication and the death of the entire body of water. The benefit of the BOD5 test is that it is an indirect measure of the burden of water with biodegradable organic substances. Decomposers which break down dead animals and plants at the bottom of a body of water consume a certain amount of oxygen, hence there is a correlation between organic matter build-up and the oxygen consumed.[3] The more organic substances the decomposers break down, the more oxygen they will consume.[3][4]

Figure 1: Lake of Elfrath in Krefeld
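For reference, a BOD5 figure is normally obtained by measuring dissolved oxygen in a diluted sample before and after five days of incubation. The sketch below illustrates that standard calculation with made-up numbers; it is not the authors’ exact protocol (whose results are reported per gram of plant juice).

```python
# Sketch of the standard BOD5 calculation (not the authors' exact protocol):
# oxygen consumed over five days, corrected for the dilution of the sample.
def bod5(do_initial_mg_l: float, do_final_mg_l: float, sample_fraction: float) -> float:
    """Five-day biochemical oxygen demand in mg/L.
    sample_fraction = sample volume / total volume of the diluted test bottle."""
    return (do_initial_mg_l - do_final_mg_l) / sample_fraction

# e.g. a 1:50 dilution whose dissolved oxygen drops from 8.8 to 4.6 mg/L over five days
print(bod5(8.8, 4.6, 1 / 50))  # 210 mg/L
```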

Results and Discussion

As Figure 3 shows, the plant juice oozing out from each pile of plants (namely, piles of leaves, grass and algae) has high BOD5 values. For comparison, a BOD5 value of 10 mg/l is the worst grade a body of water can be assigned.[6] From this, we conclude that the juice running out from the pile of plants contains large amounts of organic materials. During degradation, lots of oxygen is consumed by microorganisms (decomposers). Therefore, the liquid from these plants could affect a neighbouring water body negatively and increase the risk of eutrophication. Furthermore, we have found that the BOD5 values of plant juice decrease with an increasing storage duration (Figure 4).

Figure 2: Pond in the Sollbrüggenpark




Figure 3: BOD5 values of various clusters of plants

Figure 4: BOD5 values’ dependency on storage duration

Figure 5: Actual influence of the pile of plants on the value of BOD5 in the water

We have been dealing with the question of whether the juice of plants will have a negative influence on a body of water. For this reason, we used fresh algae clusters which were piled up at the Lake of Elfrath as well as a pile of grass next to the Sollbrüggenpark pond.[5] We measured and compared the following BOD5 values (Figure 5):

• BOD5 values of plant juice running out from a pile of plants (blue column);
• BOD5 values of the water next to the pile of plants, where the plant juice had run into the water (orange column); and
• BOD5 values of the water five metres away from the entry point, where the water quality would not be affected by the juice of plants (yellow column).

As Figure 5 shows, the algal mounds at the Lake of Elfrath have seemingly no effect on the value of BOD5 in the body of water. Both of the values measured in two locations in the water (at the edge of the pile, where the plant juice runs into the water, and a few meters away from the plant-liquid entry point) are approximately equal. However, the grass pile in Sollbrüggenpark seems to have an influence on the value of BOD5 in the water: the value at the edge of the pile is higher than at the measuring point several metres away. One possible reason for this difference is the distinct nature of each water body. The Sollbrüggenpark is a small and flat pond which is neither moved nor mixed by the wind. The Lake of Elfrath is a large water body. The water of the lake is more or less constantly in motion, swirled by the wind. We probably noticed no effect in the Lake of Elfrath because our sample was constantly replenished by the fresh water currents.

Conclusions


Overall, we have found that a pile of vegetation right next to a water body could adversely affect its water quality and eutrophication tendency. However, a practical influence was only noted in a small pond (in Sollbrüggenpark) and not in a bigger lake (the Lake of Elfrath). Even if the expected impact is relatively small, it seems advisable not to store plant remains in close proximity to water bodies. The pile of plants should either be stored several meters away from the shore until its removal, or it should be removed immediately.

References

1. Kalff, Jacob. Limnology. New Jersey: Prentice Hall, 2001.
2. Martin, James L. Hydro-Environmental Analysis. Boca Raton: CRC Press, 2013.
3. Schönborn, Wilfried. Lehrbuch der Limnologie [Limnology]. Stuttgart: Schweizerbart, 2003.
4. Schwoerbel, Jürgen and Brendelberger, Heinz. Einführung in die Limnologie [Introduction to Limnology]. Heidelberg: Spektrum, 2005.
5. Thienemann, Alfred. Die begriffliche Unterscheidung zwischen See, Weiher, Teich [The conceptual distinction between lake, pool and pond]. Rheinische Heimatpflege, Jg. 12 (½), Düsseldorf, 1940.
6. Wetzel, R.G. Limnology – Lake and River Ecosystems. London: Academic Press, 2001.

BIOGRAPHY

Katharina Thome, 14, Germany

Katharina (Kathi) is a fourteen-year-old girl from Germany. She is attending the ninth form of a grammar school. Kathi loves science and music. After school she would like to become a researcher working on nature and wildlife.



REVIEW / TO FRACK OR NOT TO FRACK, THAT IS THE QUESTION

To Frack or Not to Frack, That Is The Question

With the demand for energy growing, the battle of wills between environmentalists and big oil has never been more fierce. Shakespeare’s society relied on wood as fuel, but as our society becomes increasingly urbanised, energy demand continues to increase; it is expected that in 2035, demand will have risen by a staggering 37%[1]. Currently, 84% of global energy needs are supplied by energy from hydrocarbons, with oil accountable for 35%. Though crude oil has proven to be a valuable energy resource (infrastructure in place, energy efficiency), the depletion of conventional reservoirs means we must look for a new source. Nuclear energy offers an energy per unit mass that is ten times greater than that of oil, whilst alternative energies offer an exponentially lower carbon output. Unfortunately, each of these comes with its own complications too: highly radioactive waste and inefficient energy supply respectively. A popular solution to the energy crisis is to turn to natural gas as our primary source of energy.

Natural gas is compatible with existing infrastructure and has a carbon output which is three times smaller than that of crude oil. Crucially, the recent discoveries of large volumes of Shale Gas reserves across the globe offer an available source. The mainstream media has divided public opinion on Shale Gas; whilst many know it as the golden goose by which the US will reach a sustainable energy future, it is also associated with “environmentally hazardous” fracking techniques[2]. Correcting this negative public opinion is one challenge an energy company would face. Furthermore, the dramatic drop in oil price has produced a tight financial climate where, largely, funding for new projects has been withdrawn or suspended. However, many would argue that Shale Gas fracking will be the next big thing in the energy industry. According to BP’s 2035 Energy Outlook, demand for natural gas will rapidly increase as large parts of Asia industrialise. Projections show that more than half of this demand could be met by Shale Gas production, suggesting it will hold an important position in the future energy market. Natural gas is cleaner in terms of CO2 emissions than other fossil fuels, and also has a higher Energy Return on Energy Investment (EROEI) than renewable energy, so there may also be strong political backing for the development of such projects[3].

As we become more dependent on natural gas as a means of energy production, super-majors like BP will be able to invest in new Shale Gas production technologies that address the associated environmental, economic and social concerns. Today, tight gas is produced by fracking, a process where water mixed with sand and chemicals is pumped underground at a high pressure to “fracture” the rock, allowing the release of trapped natural gas. An obvious challenge an energy company will face with fracking is the large volume of expensive potable water (2-8 million gallons per frack) used in the process; in many agricultural settings operating companies will outbid farmers for water, creating public aversion to the development of Shale Gas projects[4]. A recent BP project in Pennsylvania has, however, pioneered the use of foam fracking technologies, presenting a solution: nitrogen and water are integrated in the fracking mix (N2 content varies between 53-90% depending on well depth), resulting in a significant decrease in the volume of water required[5].

The contamination of water supplies is also a serious concern. Fracking fluids containing carcinogenic benzene and acrylamide can be released into drinking water, which can cause serious health issues, including cancer, infertility and birth defects. Fracking has also been linked to an increased level of radium reaching the surface; the 10-40% of recoverable water shows a three-hundred-fold increase in radiation levels[6]. The disposal of this water must be regulated in a safe, environmentally friendly manner. One solution, which has gained approval in the US, is using the water to make the cement which will be put back into the ground during the well-drilling process. The abundance of methane escaping from underground traps is another issue, contaminating tap water and making a contribution to the greenhouse effect that can outweigh the benefit of natural gas’s reduced CO2 emissions. However, investment from major oil and gas companies like BP has led to the development of new cements for well casing, and this has been extremely effective in managing this problem. These cements use organic resin technology, which increases the adhesive properties of the cement, and also contain compounds known as retarders which slow down the hardening process. For wells exceeding depths of 1500m, this is extremely advantageous as it allows sufficient time for the cement to fill up gaps before it solidifies, blocking off escape pathways for the trapped methane[7].

In some areas, major oil and gas producers have taken an alternative approach to dealing with these challenges: to remove fracking from the equation altogether. A combination of directional drilling and acidizing, which involves dissolving the carbonate cement in reservoir rock to re-establish natural fissures, has yielded excellent results for tight gas production in the US. The deliquefaction of tight gas reservoirs (removal of water) has also proven to be successful, though at present this solution is not one that is economically viable. So, are we to frack or not to frack? History has shown that fossil fuels have earned their place as kingpins in the energy market, because of the universal agreement that the energy generated comes at the lowest opportunity cost. New innovations like water-free fracking, which utilises a propane-based gelled fluid instead, or infrared technology, which enables methane gas leaks to be detected and plugged up, offer us the means to enhance this process further still. Shale Gas fracking is the next step in ensuring that there is an efficient and secure future for global energy.

References

1. BP. “BP Energy Outlook.” Accessed January 19, 2017. http://www.bp.com/en/global/corporate/energy-economics/energy-outlook.html.
2. Drill or Drop? “Shale gas: golden goose or expensive short-term hit?” Accessed January 19, 2017. https://drillordrop.com/2015/02/10/shale-gas-golden-goose-or-expensive-short-term-hit/
3. Carbon Brief. “Energy return on investment – which fuels win?” Accessed January 19, 2017. https://www.carbonbrief.org/energy-return-on-investment-which-fuels-win
4. National Geographic Creative. “Water Use For Fracking Has Skyrocketed, USGS Data Show.” Accessed January 19, 2017. http://news.nationalgeographic.com/energy/2015/03/150325-water-use-for-fracking-over-time/
5. Shale Gas International. “Is Using Nitrogen For Water-Free Fracking The Way Forward?” Accessed January 19, 2017. http://www.shalegas.international/2014/09/02/is-using-nitrogen-for-water-free-fracking-the-way-forward/
6. Frack Off. “Fracking Impacts – Radioactive Contamination.” Accessed January 19, 2017. http://frack-off.org.uk/fracking-impacts/radioactive-contamination/
7. Applied Mechanics and Materials. Fu, Jun Hui, Guang Cai Wen, Fu Jin Lin, Hai Tao Sun, Ri Fu Li, and Wen Bin Wu. “Hydraulic Fracturing Experiments at 1500 m Depth in a Deep Mine: Highlights from the kISMET Project.” Accessed January 19, 2017.
8. https://pangea.stanford.edu/ERE/db/GeoConf/papers/SGW/2017/Oldenburg.pdf




BIOGRAPHY

Danyal Abbas, 18, UK

Danyal Abbas is a student at the Manchester Grammar School. A Gold CREST Award winner and Nuffield Foundation Ambassador, he has demonstrated a passion for STEM, ranging from building beetle-bots in his garage to researching Age-related Macular Degeneration at the University of Manchester. He is working towards a technical career in the engineering industry.


RESEARCH / DETECTION OF HEAVY METALS IN HONEY SAMPLES WITH ICP-MS

Detection of Heavy Metals in Honey Samples using Inductively Coupled Plasma Mass Spectrometry

Ishana Aggarwal (18) explores the presence of trace heavy metals in local honey samples in New Delhi.

Abstract

Honey is a composite mixture of various carbohydrates, enzymes, flavonoids and organic acids. It is used for a wide variety of purposes and is a known antimicrobial agent. Heavy metals may be present in trace quantities in honey; their detection is important for the quality control of honey and also serves as an indicator of environmental pollution. For this study, eight samples of honey were collected from New Delhi. Using inductively coupled plasma mass spectrometry (ICP-MS), the honey samples were tested for the presence of seven elements: Copper, Zinc, Arsenic, Cadmium, Mercury, Tin, and Lead.

Introduction

According to the Codex Alimentarius, honey is defined as “the natural sweet substance produced by honey bees from the nectar of plants or from secretions of living parts of plants or excretions of plant sucking insects on the living parts of plants, which the bees collect, transform by combining with specific substances of their own, deposit, dehydrate, store and leave in the honey comb to ripen and mature.”[1]

Honey is a composite mixture of carbohydrates such as fructose, glucose, sucrose, and maltose; enzymes such as invertase and amylase; vitamins; minerals; flavonoids; and organic acids.[2] Due to its highly unique and complex chemical nature, honey finds a host of applications as a sweetener and antimicrobial agent. It is also used extensively for the treatment of burns, wounds, and skin ulcers.[3]

To fulfill all these roles, it is important for honey to be free from contamination. One source of contamination, examined by this paper, is the presence of heavy metals such as lead, cadmium, arsenic, and mercury. Detection of heavy metals in honey serves a necessary purpose in quality control and in the monitoring of bee environments. Heavy metals, if present in significant amounts, adversely impact human health and signal contamination of bee environments.

The hairy bodies of bees gather heavy metals from the atmosphere and bring them back to the hive with pollen. In addition, heavy metals may be absorbed by the bees together with the nectar of the flowers or through water and honeydew. Polluted soil may lend heavy metals to the flower nectar, as these may be absorbed by the roots of plants from contaminated soils. Consequently, the presence of heavy metals in honey can be seen as a valuable indicator of environmental contamination.[4]

The use of trace elements such as copper, cadmium, lead, mercury, and zinc is prevalent in the human economy. Due to expansive industrial growth, heavy metals have become common environmental pollutants. In honey, particularly, these metals originate from external sources such as industrial smelter pollution, industrial emissions and unsuitable procedures during the different stages of honey production. In addition, the origin of metals in honey may be agrochemicals such as organic mercury, cadmium-containing fertilizers and arsenic-based pesticides.[5] Elements such as zinc, iron, and copper are vital micronutrients for humans and are needed in adequate amounts to maintain normal growth and development. Their deficiency is linked to impairment in cognitive performance, lowered work capacity, lowered immunity to infections, and pregnancy complications. However, elements such as lead and arsenic have little to no known requirement in the human diet. In fact, exposure to these elements is toxic, as these trace elements damage human metabolism and have known cytotoxic and carcinogenic effects.[6] The maximum permissible limit of heavy metals in food in India is prescribed by the Prevention of Food Adulteration (PFA) Act of 1955. Rule 57 states the limits of contaminants under the category “Foods Not Specified”, which includes honey.

Table 1: PFA Heavy Metal Limits
Element | Maximum Permissible Concentration (ppm)
Lead | 2.5
Copper | 30
Arsenic | 1.1
Tin | 250
Zinc | 50
Cadmium | 1.5
Mercury | 1.0
Methyl Mercury | 0.2

Method

Sample Collection

Eight samples of honey were collected from the local markets of New Delhi. The samples were collected in airtight glass containers and stored at room temperature in a lightless place until analysis.

Sample Preparation

1.0 g of each sample of honey was digested in a microwave digester with 5.0 ml of concentrated 65% nitric acid and 1 ml of 30% hydrogen peroxide. The instrumental parameters and settings for the microwave digester were 120°C and 700 psi for the first ten minutes and 130°C and 800 psi for the next fifteen minutes. The sample was then filtered using a filter paper with a pore size of 0.45 microns and diluted up to 20 ml with deionized water.

ICP-MS

An Agilent 7500ce ICP-MS with an Octopole Reaction System and standard sample introduction system (nickel cones, glass concentric nebulizer, a quartz Peltier-cooled spray chamber and quartz torch) was used for elemental analysis. The following operating conditions were used:

Table 2: Table of Operating Conditions
RF Generator (W) | 1550
Plasma Gas Flow Rate (L/min) | 15
Auxiliary Gas Flow Rate (L/min) | 0.9
Carrier Gas Flow Rate (L/min) | 1.0
Sample Introduction Flow Rate (L/min) | 1.1
Nebulizer Pump (rps) | 0.1

Elemental analysis was carried out by inductively coupled plasma mass spectrometry (ICP-MS) after microwave-assisted acid digestion. All glassware used for the analysis was cleaned with 10% HNO3 solution and rinsed with ultrapure water. The concentrations of seven elements (Cu, Zn, As, Cd, Hg, Sn, Pb) were determined in the honey samples. All samples were measured in triplicate by the ICP-MS.
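As a worked illustration of how a reading from the diluted digest maps back to a concentration in the original honey (1.0 g made up to 20 ml, as described above), the sketch below does the unit conversion. The digest reading used is a made-up example value, not a measurement from this study.

```python
# Sketch of the back-calculation from digest concentration to honey concentration,
# using the preparation described above (1.0 g honey diluted to 20 ml).
# The instrument reading below (µg/L in the digest) is a made-up example value.
def conc_in_honey_ppm(digest_ug_per_l: float, final_volume_ml: float = 20.0,
                      sample_mass_g: float = 1.0) -> float:
    """Convert an ICP-MS digest reading (µg/L) to mg/kg (ppm) in the original honey."""
    micrograms = digest_ug_per_l * final_volume_ml / 1000.0  # µg of metal in the digest
    return micrograms / 1000.0 / (sample_mass_g / 1000.0)    # mg per kg of honey

print(conc_in_honey_ppm(125.0))  # 125 µg/L in a 20 ml digest of 1 g honey -> 2.5 ppm
```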


Results

See Table 3: Heavy Metal Concentrations (in ppm) in Analyzed Samples.

Discussion

In the present study, all eight samples showed the presence of lead. Lead was found in quantities ranging from 0.532 ppm to 4.237 ppm. Two samples were contaminated with high concentrations of lead, which exceeded the maximum permissible limit of 2.50 ppm as prescribed by the PFA Rules. Lead, a highly toxic presence in foodstuffs, has been linked to high blood pressure, heart disease, kidney disease, and reduced fertility. Overexposure to lead in infants can also lead to severe damage to brain development. High concentrations of lead in honey might be because of pollution from vehicular traffic, heavy industries, and other anthropogenic activities.[7] Mercury was detected, albeit in small quantities, in six out of the eight samples tested and ranged from 0.075 ppm to 1.124 ppm. In one of the samples, the concentration of mercury was 1.124 ppm, which was beyond the permissible limit of 1.0 ppm. Excessive exposure to mercury is deleterious to human health. The brain remains the target organ for mercury, yet it can impair any organ and lead to malfunctioning of nerves, kidneys and muscles.[8] In one of the samples, the concentration of arsenic detected was 1.392 ppm. This was above the permissible limit of 1.1 ppm as set by the PFA. Since arsenic is a known carcinogen, strict attention must be given to regulating its presence in honey. The principal causes of the presence of arsenic in honey include non-ferrous metallurgy and agrochemicals such as fertilizers and arsenic-based pesticides. Arsenic is also present in the soil, water, and air and hence may be absorbed by plants from these sources, consequently contaminating honey. The presence of arsenic beyond the permissible limits in honey samples is an indicator of micro-pollution and hence the use of arsenic-based fertilizers should be checked to limit environmental pollution.[9]

Micronutrients zinc and copper were detected in all of the eight samples. Zinc and copper are essential for several functions of the body, such as regulating the metabolism. Moreover, copper acts as an antioxidant, whereas zinc is required for the proper functioning of the immune system. The presence of copper and zinc in honey was beneath the maximum permissible limits in all samples, indicating that there is scarce possibility of zinc or copper poisoning due to consumption of honey. Cadmium was detected in seven out of eight samples and its concentration ranged from 0.022 ppm to 0.481 ppm. Tin was detected in all samples. Like cadmium, tin contamination did not exceed the permissible limits in any of the samples. Cadmium and tin are released into the environment through numerous industrial processes and enter the food chain through contaminated soil and water. Hence, cadmium and tin concentrations in different places depend on many factors, explaining the variability in the concentrations of tin and cadmium tested in the honey samples.
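The comparison against the PFA limits described above is simple to automate. The sketch below uses the limits from Table 1 and the extreme values quoted in this discussion; it is not a reproduction of the full Table 3 data set.

```python
# Sketch of the limit check described above: measured concentrations (ppm) compared
# against the PFA limits in Table 1. The sample values are the extremes quoted in the
# discussion, not the complete results table.
PFA_LIMITS_PPM = {"Lead": 2.5, "Copper": 30, "Arsenic": 1.1, "Tin": 250,
                  "Zinc": 50, "Cadmium": 1.5, "Mercury": 1.0}

sample = {"Lead": 4.237, "Mercury": 1.124, "Arsenic": 1.392, "Cadmium": 0.481}

for element, measured in sample.items():
    status = "EXCEEDS limit" if measured > PFA_LIMITS_PPM[element] else "within limit"
    print(f"{element}: {measured} ppm ({status}, limit {PFA_LIMITS_PPM[element]} ppm)")
```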

Conclusion

Eight samples of honey were collected from New Delhi and were tested for the presence of seven elements – Copper, Zinc, Arsenic, Cadmium, Mercury, Tin, and Lead – using ICP-MS. Two samples contained quantities of lead that exceeded the maximum permissible limit set by the Prevention of Food Adulteration Act of India. Arsenic was found exceeding the maximum limit in one sample and mercury in another. Copper, zinc, cadmium, and tin were present well below the limits. The presence of heavy metals in honey is an indicator of environmental pollution. Sufficient care must be taken to regulate the amount of heavy materials entering the environment as exposure to these elements can adversely affect human health

References

1. Kalff, Jacob, Limnology. New Jersey: Prentice Hall, 2001. 2. Martin, James, L. Hydro-Enviromental Analysis. Boca Raton: CRC Press, 2013 3. Schönborn, Wilfried, Lehrbuch der Limnologie [Limnology]. Stuttgart: Schweizerbart, 2003. 4. Schwoerbel, Jürgen and Brendelberger. Heinz, Einführung in die Limnologie [Limnology]. Heidelberg: Spektrum, 2005 5. Thienemann, Alfred, Die begriffliche Unterscheidung zwischen See, Weiher, Teich. – Rheinische Heimatpflege, Jg. 12 (½), Düsseldorf 1940 6. Wetzel, R.G. Limnology – Lake and River Ecosystems. London: Academic Press, 2001


BIOGRAPHY


Ishana Aggarwal, 18, India

Born and brought up in New Delhi, Ishana Aggarwal is a high school senior studying science at Modern School, Barakhamba Road, New Delhi. In addition to chemistry, her interests include literature and music.




ARTIFICIAL HARMONICS ON THE VIOLIN / RESEARCH

Artificial Harmonics on the Violin

Tom Liu (17) investigates the cause and timbre of artificial harmonics on the violin.

Abstract

This report presents the results of research into artificial harmonics on the violin. In the first experiment, the ratio of string lengths between fingers was measured to determine their effect on the pitch of the artificial harmonic. The results of the first experiment indicated that the fourth finger was always placed a quarter of the way up the string, removing all harmonics but the fourth and its multiples. In the second experiment, the harmonic content of the same note played with five different techniques (one was played as an artificial harmonic) was analyzed to find out why artificial harmonics have a ghostly timbre. The artificial harmonics had a harmonic content dominated by the fundamental and second harmonic, thus resulting in a purer tone which sounds less bright and rich than normal.

Introduction

The aim of these experiments was to find out why artificial harmonics sound two octaves higher than the note they are based on and to find the reasons for their unique timbre.

How Violins Produce Sound

Violins produce sound when the bow is drawn across the string. Rosin (crystallised pine sap) is rubbed on the bow hair to increase friction by creating a rougher surface. This is an example of dry friction, which can be further split into static friction (between 2 surfaces not moving in relation to each other) and kinetic friction (between 2 moving surfaces).[1] Typically, static friction is greater than kinetic friction, as it usually takes a larger force to set an object moving than to keep it moving at a constant velocity.[2]

F = μR

where F is the force of friction, R is the normal contact force and μ is the coefficient of either static or kinetic friction, depending on whether the 2 surfaces are moving relative to each other.[1] The application of rosin to the bow hair increases μ and also the difference between the coefficients of static and kinetic friction by making the bow hair rougher. As a result, the coefficient of static friction is much greater than that of kinetic friction. The string is pulled along with the bow during the “stick” phase due to this high static friction, as the string and bow move in the same direction with similar speeds. This creates a wave that travels from the finger position (1⇒2, or the “stick” phase in Figure 1a), returning to the contact point where the string’s tension pulls it back the other way easily in the “slide” or “slip” phase, due to low kinetic friction as the bow and string move in opposite directions. During this phase, the string moves opposite to the bowing direction and the wave reflects at the bridge. When it reaches the contact point once more, the string is moving in the same direction and at about the same speed as the bow. Therefore, there is static friction again and this cycle of sticks and slips repeats. This is analogous to the string being plucked every time the kink in the string is moving in the same direction as the bow.[3]
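The stick-slip rule can be sketched as a toy decision function: static friction applies while bow and string move together, kinetic friction while they slide past each other. The snippet below uses arbitrary illustrative coefficients, not measured bow-hair values.

```python
# Toy stick-slip rule: static friction while bow and string move together,
# kinetic friction while they slide. Coefficients are illustrative only.
MU_STATIC = 0.8    # rosined bow hair, sticking (assumed value)
MU_KINETIC = 0.3   # rosined bow hair, sliding (assumed value)

def friction_force(bow_speed, string_speed, normal_force, tol=1e-3):
    """Return (phase, F) where F = mu * R for the current phase of the cycle."""
    if abs(bow_speed - string_speed) < tol:   # moving together -> "stick"
        return "stick", MU_STATIC * normal_force
    return "slip", MU_KINETIC * normal_force  # sliding past each other -> "slip"

print(friction_force(0.5, 0.5, 1.0))   # ('stick', 0.8)
print(friction_force(0.5, -0.2, 1.0))  # ('slip', 0.3)
```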


As well as the fundamental with 1 kink, the second, third, fourth etc. harmonics are also produced, with two, three, four etc. kinks respectively[4] (see Figure 1b). As the waves of these harmonics travel and reflect up and down the string, they interfere with themselves and each other when they occupy the same space on the string, resulting in superposition of the various harmonics to form one complex standing wave. On a violin, this complex wave is usually saw-toothed in shape due to the relative amplitudes of the various harmonics (Figure 2) being about 1, ½, ⅓, ¼ etc.[4] As the “stick” phase occurs whenever the string and bow move in the same direction and the “slip” occurs when their motions oppose, the period/frequency of the catch-release cycle is the same as that of the vibration of the string.

However, this oscillation of the string by itself is not sufficient to produce any audible sound: too little air is moved. About 40% of the string’s tension is directed downwards over the bridge.[7] It increases in the direction of bow movement during the “stick” and in the opposite direction during the “slip”. The force on the bridge oscillates with the same frequency and waveform as the waves in the string (Figure 3), with a frequency ranging from 196 Hz (open G string - the lowest note) up to around the mid-2000s Hz (E7, the practical limit for orchestral parts, is 2637 Hz).[5] This wave is transmitted to the violin body, whose vibrations move enough air to create audible sound.[4]

Harmonics

For a regular note, the harmonics (second, third, fourth etc.) are produced as well as the fundamental (see Figure 1b for the catch-release cycle of the second harmonic). Each consecutive harmonic has an additional node and antinode[6], and therefore an additional half-wavelength in the string.[7]


Harmonic | No. Waves in String | No. Nodes | No. Antinodes | Wavelength
1        | 1/2                 | 2         | 1             | 2L
2        | 1                   | 3         | 2             | L
3        | 3/2                 | 4         | 3             | (2/3)L
4        | 2                   | 5         | 4             | (1/2)L
5        | 5/2                 | 6         | 5             | (2/5)L
6        | 3                   | 7         | 6             | (1/3)L

Wavelength = 2L/n for the nth harmonic where L is the length of the string. We can determine the frequencies of the harmonics of A5 which has a fundamental frequency of 880 Hz.

v = fλ

where v is the speed at which the wave travels, f is the frequency and λ is the wavelength.[8] For the fundamental of A5, λ = 2L, so:

v = fλ = 880 × 2L

v is dependent only on the properties of the medium[8] (in this case the violin string) and not on the properties of the wave itself, so the frequency of the second harmonic (for which λ = L) can be determined as follows:

f = v/λ = (880 × 2L) / L = 880 × 2 = 1760 Hz

Therefore, the second harmonic has twice the frequency of the fundamental. For the nth harmonic of a note with fundamental frequency F:

f = v/λ = 2FL / (2L/n) = Fn

Therefore, the nth harmonic has n times the frequency of the fundamental.
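A quick numerical check of this relation, added here as an illustration, lists the first few harmonic frequencies of A5 (fundamental 880 Hz) together with their wavelengths 2L/n, keeping the string length L as a symbolic unit.

```python
# Harmonic frequencies and wavelengths for A5 (fundamental 880 Hz) on a string of length L.
FUNDAMENTAL_HZ = 880.0

for n in range(1, 7):
    frequency = FUNDAMENTAL_HZ * n      # f_n = F * n
    wavelength_in_L = 2.0 / n           # lambda_n = 2L / n (in units of L)
    print(f"harmonic {n}: f = {frequency:.0f} Hz, wavelength = {wavelength_in_L:.3f} L")
```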

Each note played creates a series of standing waves. The fundamental has twice the amplitude of the second harmonic and thrice that of the third.[4] Therefore the sound you hear is a blend of these harmonics and the fundamental (Figure 2), which can be obtained by adding the wave equations[9] of the string due to the principle of superposition.[10]

Types of Harmonics

When a violinist plays a natural harmonic, they lightly touch the string ½, ⅓, ¼ etc. of the way along the string. This creates a node at that position and isolates a specific harmonic: only harmonics with a node at that position form, and the other harmonics are silenced. This is different to how an artificial harmonic is produced. The violinist stops the desired note with their first finger and touches the string a perfect fourth (the fourth note up the scale) above. Figure 5 shows how the fingers are numbered by violinists. This produces a note two octaves higher than the stopped note itself. This must be the fourth harmonic, as a note two octaves higher must have four times its fundamental frequency.

The fourth finger artificially creates a node at its position, forcing only the standing waves of harmonics which have that point as a node to form and therefore be heard.

Timbre

Timbre describes the characteristics of sound that allow the human ear to distinguish between sounds of equal pitch (dependent on frequency) and loudness (dependent on amplitude). Timbre is determined by the harmonic content and the attack-decay-sustain-release (ADSR) envelope[11] (see Figure 6) of the sound, as well as other dynamic characteristics such as vibrato.[12] For sustained tones, only the sustain portion is relevant, so harmonic content (the relative intensity of the different harmonics present in the sound) is the most important contributing factor to timbre.[13] A sustained tone is a repeating continuous function and, as such, can be decomposed by Fourier analysis into a sum of sine waves with frequencies that are integer multiples of the fundamental frequency.[14] Fourier synthesis then allows the graph of the sustained note to be obtained by adding the various sine waves previously obtained through Fourier analysis.

The attack is the initial action that causes the instrument to produce the tone (in the case of the violin, it is the action of drawing the bow across the string). The sound rises to its maximum amplitude and then decreases over time in the decay. The human ear can differentiate between different attacks and decays, though the difference in timbre due to attack and decay is less noticeable during long, sustained notes, even though the harmonics produced during the attack are very important. Any sound shorter than 4 milliseconds is perceived as an atonal click, and the ear needs at least 60 milliseconds to identify the timbre of a note.[12] Vibrato occurs when the violinist rolls their finger forwards and backwards on the stopped note, which causes periodic changes in pitch as the length of the stopped string repeatedly lengthens and shortens.

At the time of writing, there was no specific research into the timbre of artificial harmonics, though there have been studies regarding how violins produce sound and the timbre of the violin in general.

Why Do Artificial Harmonics Work?

Apparatus

• Violin & bow to produce artificial harmonics
• 30 cm ruler to measure string lengths to the nearest mm
• Rosin to increase the coefficient of friction between bow and string
• Tuner to determine the pitch of the stopped note and the harmonic

Method

This experiment aims to find out the ratio of the string length between the first and fourth fingers to that between the fourth finger and the bridge for consecutive artificial harmonics up the A string (see Figure 7).




Figure 1a (Above): The slip-stick cycle for the fundamental

Figure 2 (Above): The wave of each of the harmonics (top) as well as the combined complex wave (bottom)

Figure 3 (Above): The slip-stick (catch-release) cycle in relation to the direction of bow movement

Figure 4 (Above): The standing waves of harmonics on a string

Figure 5 (Above): The fingers on the left hand are numbered by violinists


Figure 1b (Above): The slip-stick cycle for the second harmonic


Figure 7 (Right): Violin strings labelled with their respective notes

Figure 6 (Above): Attack-decay-sustain-release (ADSR) cycle. The attack time is how quickly the sound reaches full volume once initiated. The decay time is how quickly the sound drops to a steady sustain level after reaching its full volume. The sustain level is the constant volume after decay. The release time is how quickly the sound fades when released.

1. Mark points on the sides of the first and fourth fingers so that each measurement is taken between the same 2 points.
2. Produce the first artificial harmonic on the A string and compare the pitches and frequencies of the stopped note and the harmonic itself.
3. Measure the ratio of the distances between the marked points on the fingers (D1) and between the bridge and the fourth finger (D2) with the ruler.
4. Repeat steps 2 & 3 for a total of 3 times and take mean values of D1 (to obtain D1av) and of D2 (to obtain D2av), as well as of the frequencies of the notes both stopped and heard.
5. Repeat steps 2-4 for as many harmonics up the string as practical.

To calculate the mean fraction of the way up the stopped string at which the fourth finger is placed (Fav), divide D1av by (D1av + D2av):

Fav = D1av / (D1av + D2av)

Results are presented in Table 1.

Analysis

Mean ratio = ⅛(ΣFav) ≈ 0.25

Thus the fourth finger touches the stopped string at one quarter of its remaining length. This forces a node at a position such that only the fourth, 8th, 12th etc. harmonics form. This is equivalent to the harmonic spectrum of a note with four times the frequency. Every time the frequency doubles, the note is higher by an octave, and hence a note which is 2 octaves higher is heard.

Uncertainties

D1: greatest absolute uncertainty = 0.5 × range = 0.5 × 0.2 = 0.1 cm. Therefore maximum percentage uncertainty = (0.1/5.4) × 100 ≈ 1.9%.
D2: greatest absolute uncertainty = 0.5 × range = 0.5 × 0.2 = 0.1 cm. Therefore maximum percentage uncertainty = (0.1/15.8) × 100 ≈ 0.6%.
Fav: 1.9 + 0.6 + 0.6 = 3.1% = ±0.00775 ≈ ±0.01 (2 decimal places).

It was difficult to determine exactly where the fingers touched the string. The markings on the ends of the sides of the first and fourth fingers helped to measure the distance between the same two points each time, though they may have introduced a systematic error into the measurements. In order to reduce parallax error, the measurements were taken after the notes had been played. I endeavoured to keep my fingers still after playing and before measuring the distances.

Table 1: Ratio of string lengths



I also kept my fingers still while playing and hence did not use any vibrato. A possible improvement to reduce experimental inaccuracies would be to have one person play the notes and another simultaneously measure the distance between the fingers.
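A minimal sketch of the ratio calculation is shown below; the distances 5.4 cm and 15.8 cm are the ones used in the worst-case uncertainty estimates above, and the simple additive combination of percentage uncertainties is one possible choice rather than the exact method used in the article.

```python
# Fraction of the stopped string at which the fourth finger sits: Fav = D1 / (D1 + D2).
# D1 = first-to-fourth-finger distance, D2 = fourth-finger-to-bridge distance.
d1_cm, d2_cm = 5.4, 15.8        # distances quoted in the uncertainty estimates above
abs_uncertainty_cm = 0.1        # half the range of repeated readings

fav = d1_cm / (d1_cm + d2_cm)
percent_uncertainty = (abs_uncertainty_cm / d1_cm + abs_uncertainty_cm / d2_cm) * 100

print(f"Fav = {fav:.3f} +/- {percent_uncertainty:.1f}%")   # about 0.255 +/- 2.5%
```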

Why Do Artificial Harmonics Produce a ‘Ghostly’ Sound?

Apparatus

• Violin & bow for playing notes
• Visual Audio application to record played notes and perform fast Fourier transforms (FFT)
• Excel to record and modify data
• Teraplot application for Fourier synthesis of waveforms of the notes

Method

Different Methods of Playing (See Figure 9)

• Type I (Normal on E): stop the string fully on A5 on the E string (yellow A5)
• Type II (Normal on A): stop the string fully on A5 on the A string (blue A5)
• Type III (Normal on G): stop the string fully on A5 on the G string (pink A5)
• Type IV (Natural harmonic on A): play A5 as a natural harmonic on the A string (lightly touch blue A5)
• Type V (Artificial harmonic on G): play A5 as an artificial (touch fourth) harmonic on the G string (stop green A3 fully and touch lightly the green D3 with the fourth finger)

Recording

1. Download and open the Visual Audio app from the Google Play store.
2. In the app, set a delay timer of 5 seconds before the app starts to record and a timing interval of 2 seconds. The app records at a data sampling rate of 44,100 Hz (44,100 samples per second) using the microphone and converts it into a sound pressure level (SPL) in decibels (dB).
3. Warm up the Normal E during the delay period by playing a few initial strokes of the note, then sustaining the note at about 80 dB into and past the end of the 2 second recording period.
4. Repeat step 3 five times in total for each method of playing the note (Normal E, Normal A, Normal G, Natural A and Artificial G).

Fourier Analysis

Determining the amplitudes of the harmonics:

1. The Visual Audio app performs Fourier transforms on the sound waves recorded and produces graphs showing their harmonic content (see Figure 8).
2. In the FFT graphs, the peaks show the decibel values of the harmonics: the peak furthest to the left is the fundamental. The decibel values of the first ten harmonics for each of the 25 graphs were entered into Excel.
3. As the average loudness of each of the 25 recorded notes was not constant, the decibel values of the harmonics were scaled as follows:



4. The decibel values were converted to amplitudes for the different harmonics:

5. The amplitudes for the five repeats were averaged for each harmonic of each of the five methods of playing (to give table 2).
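The scaling and conversion formulae referred to in steps 3 and 4 appear only as figures in the original article; the sketch below assumes the standard relation between sound pressure level and relative amplitude, A ∝ 10^(dB/20), and normalises each recording to the 80 dB reference before averaging. The function names and example numbers are illustrative.

```python
# Convert FFT peak levels (dB) to relative amplitudes and average over repeats.
# Assumes the standard SPL relation A ~ 10**(dB/20); the article's own scaling
# formula is shown only in a figure and may differ in detail.
REFERENCE_DB = 80.0

def peaks_to_amplitudes(peaks_db, note_level_db):
    """Scale one recording to the 80 dB reference, then convert dB peaks to amplitudes."""
    scaled = [p + (REFERENCE_DB - note_level_db) for p in peaks_db]
    return [10 ** (p / 20.0) for p in scaled]

def average_amplitudes(recordings):
    """Average harmonic-by-harmonic over a list of amplitude lists (the repeats)."""
    return [sum(vals) / len(vals) for vals in zip(*recordings)]

# Illustrative use with two made-up repeats of three harmonics:
rep1 = peaks_to_amplitudes([78.0, 70.0, 62.0], note_level_db=79.0)
rep2 = peaks_to_amplitudes([77.5, 69.0, 63.0], note_level_db=78.5)
print(average_amplitudes([rep1, rep2]))
```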

Fourier Synthesis

In order to see how the difference in the relative heights of the harmonics and therefore the harmonic content affected the sound quality, the graphs of the complex sound waves for each of the 5 methods of playing could be obtained by adding the waves of each of the 10 harmonics in the Teraplot app.

1. Plot each of the sine graphs of the 10 harmonics with the equation shown below:

y = Asin(2πfx)

where y is the amplitude of the sound wave at a distance x away from the source, A is the maximum amplitude, and f is the frequency. This produced graphs 1a, 2a, 3a, 4a and 5a for the 5 methods of playing.

2. Add the 10 equations to obtain the complex sound wave for each method of playing:

This produced graphs 1b, 2b, 3b, 4b and 5b, showing the complex waves for the 5 methods of playing. Results are presented in Tables 2 and 3.
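The same synthesis can be reproduced outside Teraplot; the sketch below sums the first ten harmonics using y = A sin(2πfx), with made-up amplitudes standing in for the Table 2 values.

```python
import math

# Fourier synthesis: sum the first ten harmonics y_n = A_n * sin(2*pi*f_n*x).
# Amplitudes here are illustrative; in the article they come from Table 2.
FUNDAMENTAL_HZ = 880.0
amplitudes = [1.0, 0.6, 0.2, 0.1, 0.08, 0.05, 0.04, 0.03, 0.02, 0.01]

def complex_wave(x):
    return sum(a * math.sin(2 * math.pi * FUNDAMENTAL_HZ * (n + 1) * x)
               for n, a in enumerate(amplitudes))

# Sample one period of the complex wave at 100 points.
period = 1 / FUNDAMENTAL_HZ
samples = [complex_wave(i * period / 100) for i in range(100)]
print(min(samples), max(samples))
```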

Advantages of Fourier Analysis and Subsequent Synthesis

Plotting the waves of the first ten harmonics individually and then adding them further reduced the effect of background noise on the shape of the complex wave. The higher harmonics (11th, 12th etc.) were also removed; however, these have very small amplitudes, so they would not have had much of an impact on the overall waveform or harmonic content. Fourier analysis also allowed me to compare the amplitudes of the harmonics. I chose to use Teraplot as a plotting tool as it was free and could plot ten curves simultaneously.

Why a Sustained Note?

I chose to use sustained notes because that is the way artificial harmonics are usually employed in violin playing. Artificial harmonics are impractical to play in fast-moving passages, as positioning both fingers correctly requires a great deal of coordination and time.



Figure 9 (Left): Violin Fingering Positions

Figure 8 (Above): FFT Graph

Table 2: Amplitude of harmonics

Table 3: Harmonic Content





(From Top to Bottom) Graph 1a (L): Individual waves for first 10 Type I harmonics; Graph 1b (R): Type I complex wave; Graph 2a (L): Individual waves for first 10 Type II harmonics; Graph 2b (R): Type II complex wave; Graph 3a (L): Individual waves for first 10 Type III harmonics; Graph 3b (R): Type III complex wave; Graph 4a (L): Individual waves for first 10 Type IV harmonics; Graph 4b (R): Type IV complex wave.




Graph 5a (Left): Individual waves for first 10 Type V harmonics; Graph 5b (Right): Type V complex wave

Graph 6: Ideal complex wave of first 2 harmonics

Graph 7 (Above and Left): Pie charts of harmonic content. Type IV & V (especially V – the artificial harmonic) are heavily dominated by the fundamental and second harmonic. Type I, II & III (especially I – played on the E string) have a greater proportion of higher harmonics.

Analysis

As no vibrato was used and the recordings were taken in the middle of a long smooth stroke, harmonic content had the greatest effect on the timbre.

Harmonic Content

Table 3 shows the relative amplitudes of the first ten harmonics for each of the methods of playing A5. In comparison to the rest, the notes played as harmonics (Type IV and Type V) have greatly reduced amplitudes beyond the second harmonic. As such, their complex waves are smoother and more similar in shape to a graph of only the first two harmonics (Graph 6). This result can be shown by pie charts of the percentage harmonic contents from Table 3 (Graph 7).

In both of the harmonic notes, the first and second harmonics (red and blue sections, respectively) make up a very large (c. 75%) proportion of the harmonic content of the sound. This results in a complex wave which is very similar to the complex wave containing only the first two harmonics. Therefore the timbre of the harmonics is similar to that of a note with only the first two harmonics. As the artificial harmonic has an even greater proportion of the fundamental (53% vs 50%) and second harmonic (30% vs 20%) in its harmonic content, it sounds even more like a tone consisting of just the first two harmonics. Notes consisting mainly of only one or two harmonics have a very pure sound, as opposed to the brighter and richer sound of notes with a greater spread of harmonic content. This can be shown by pie charts of the harmonic content of the notes which were played normally: Types I, II & III.



The normally played notes have a much higher percentage of their harmonic content distributed to the third, fourth and fifth harmonics (green, yellow and purple, respectively). They also have a smaller proportion of the fundamental and second harmonics. This is perhaps most noticeable for Type I, where the first and second harmonics are greatly reduced (though still prominent) and both the third and fifth harmonics are greater in proportion than the second. The result is the ‘bright’ nature of notes played on the E string, as there is a greater proportion of higher harmonics. In comparison to the notes played normally, the artificial harmonics have a much purer sound and are consequently ghost-like in timbre.

Uncertainties

The Fourier transform application provided data correct to the nearest 0.1 dB. Therefore the uncertainty for the majority of calculations was 0.1 / 80 × 100 ≈ 0.13%, so the percentages in the pie charts are correct to the nearest hundredth of a percent for the smallest sections to the nearest percent for the largest ones. To improve the reliability of my results, I repeated the experiment 5 times for each note and took the averages. I also removed anomalies and discarded and re-recorded any recordings where background noise was present or where the note was played incorrectly (for example, not sustained for the whole two seconds). In order to reduce the impact of background noise on my results, I recorded the notes in a quiet room with the phone’s microphone positioned close to the violin on a music stand. By marking out a position on the floor, I ensured that the violin was the same distance away from the phone for each recording. Additionally, as the recording app showed the average dB values in real time, I was able to sustain notes at about 80 dB. However, I only performed this experiment on my own violin with a set of Dominant strings, so this experiment has no data regarding other violins and different types of strings (gut strings or metal strings, for instance).

Conclusion

An artificial harmonic plays a note two octaves higher than the stopped note because the fourth finger creates a node at a quarter of the length of the stopped string. Only the fourth harmonic and its multiples are produced, as the others are silenced. The fourth harmonic has a frequency which is 4 times greater than that of the stopped note. As notes sound an octave higher every time the frequency doubles, the artificial harmonic therefore sounds two octaves higher. These artificial harmonics sound ghostly compared to notes played normally on the violin because they have a very pure harmonic content comprised mainly of the fundamental (c. 50%) and the second harmonic (c. 30%).

An extension of this project could be to analyse the harmonic content of different artificial harmonics. The results may differ across or up strings. To improve the reliability of the experiment, it could be repeated on different violins and an average taken, as different violins also have slightly different timbres. Alternatively, one could perform an experiment to determine why the perfect fourth interval is always a quarter of the remaining string length, by measuring how consecutive semitones up one string get closer together.

References

1. Adams, Steve, and Jonathan Allday. “Friction.” In Advanced Physics, 2nd Edition, 94. Oxford: Oxford University Press, 2000.
2. Sheppard, Sheri D., and Benson H. Tongue. Statics: Analysis and Design of Systems in Equilibrium, 618. Wiley, 2005.
3. Wolfe, Joe. Bows and Strings. 2005. http://newt.phys.unsw.edu.au/jw/Bows.html (accessed May 12, 2017).
4. American Physical Society. Fiddle Physics. 2013-2016. http://www.physicscentral.com/explore/action/fiddle.cfm (accessed May 1, 2017).
5. Piston, Walter. Orchestration, 45. 1955.
6. Adams, Steve, and Jonathan Allday. “Standing waves.” In Advanced Physics, 2nd Edition, 264-269. Oxford: Oxford University Press, 2000.
7. Zukovsky, Paul. “On Violin Harmonics.” In Perspectives of New Music. Princeton University Press, 1968.
8. Adams, Steve, and Jonathan Allday. “Waves.” In Advanced Physics, 2nd Edition, 236-237. Oxford: Oxford University Press, 2000.
9. Kneubuhl, Fritz Kurt. Oscillations and Waves, 365. Springer, 1997.
10. Adams, Steve, and Jonathan Allday. “Superposition.” In Advanced Physics, 2nd Edition, 248-249. Oxford: Oxford University Press, 2000.
11. Editors of Encyclopaedia Britannica. “Envelope (sound).” In Encyclopaedia Britannica. Encyclopaedia Britannica, Inc., 2016.
12. Nave, R. Timbre. 2007. http://hyperphysics.phy-astr.gsu.edu/hbase/Sound/timbre.html (accessed May 2017).
13. Baek, Sangyeol. “Artificial harmonics.” 2015.
14. Katznelson, Yitzhak. An Introduction to Harmonic Analysis. Cambridge: Cambridge University Press, 2004.

BIOGRAPHY

Tom Liu, 17, UK

Tom will start Year 13 in the coming year at St Paul’s School, London and is currently studying Maths, Further Maths, Physics and Chemistry at A-level. He has played the violin from an early age and is about to take his performance diploma (DipABRSM).



REVIEW / TIME IS OF THE ESSENCE

Time is of the Essence

Nirali Patel (17) reviews the concept of time with respect to Physics and Philosophy, looking at ideas from across the world.

Introduction

Time is of the essence. It flies, stalls, and like other things, runs out. But what exactly is it? Thinking about the existence and nature of time, one’s mind often wanders towards science fiction’s fantasy of time travel. But what exactly is the TARDIS travelling through? What precisely is it that we measure with our clocks? The answer to this most existential of questions has occupied, and been disputed by, some of the greatest minds in both Physics and Philosophy. This paper attempts to explore some of the concepts posited by both physicists and philosophers, from Plato to Einstein, observing the similarities and differences between them.

Relatively Absolute Time

The Newtonian concept of time was one of absolute certainty: ‘Absolute, true, and mathematical time, of itself, and from its own nature, flows equably without relation to anything external.’[1] Time was constantly moving forward at the same rate, unaffected by our actions. Newton believed that the nature of time was independent of our existence; time has and will always exist. These views, however, were soon to be challenged, and later overturned, by Maxwell’s observations and Einstein’s revolutionary theory of relativity. Before Danish astronomer Ole Christensen Roemer’s discoveries, it was believed that the speed of light was instantaneous. Observing the eclipses of the moons of Jupiter in 1676, Roemer was able to conclude that light had a very high but finite speed, which we now know to be approximately 3 × 10^8 m/s.

This finding was further exemplified in 1865 by the British physicist James Clerk Maxwell. Maxwell deduced mathematically that both electric and magnetic forces are found in the same field, thus showing that electricity and magnetism were two inseparable entities, giving rise to the electromagnetic force that we know today. Maxwell also predicted that there were ‘wavelike disturbances’ in the electromagnetic field and upon calculation found that the speed of these waves coincided with that of visible light. Maxwell’s theory thus implied that light waves travel at a fixed speed - a concept that diverged from Newton’s theory of the fictitious absolute standard of rest. Reichenbach argues that there is no way of actually measuring the speed of light and proving that it is a constant. He states that this proof is dependent on the definition of simultaneity, which is in turn dependent on the speed of light, reaching the conclusion that the proof itself is dependent on its outcome. Therefore, Reichenbach suggests that neither Einstein nor Maxwell proved that the speed of light is constant; rather, they assume it to be so by definition.



Thus Newton’s theory created another dilemma — if there is no universal agreement on the standard of rest, then how can there be universal agreement on the speed of an object? To explain this, we can use Hawking’s analogy of the ping-pong ball:

If one carried out experiments with moving bodies on the train, all Newton’s laws would still hold. For instance, playing Ping-Pong on the train, one would find that the ball obeyed Newton’s laws just like a ball on a table by the track. So there is no way to tell whether it is the train or the Earth that is moving.[2]

If you were playing ping pong on a train (moving at 100 kilometres per hour), you would expect your opponent to measure the speed of the ball at roughly 50 kilometres per hour. An observer on the platform, however, would see the ball travelling at a much higher speed, combining the speed of the ball in the train with the speed of the train itself, and so would measure the speed of the ball at roughly 150 kilometres per hour. Thus the speed of the ball is relative to both the player and the observer on the platform, and from their respective perspectives they are both equally correct about the speed of the ping-pong ball. Should the speed of the ball be measured relative to the platform or to the train?

Verse 9 of the ‘Gola Pad’ in the Aryabhattiya, written by Aryabhatt in the fifth century, also mentions a similar ‘thought experiment’, showing that perhaps he had conceived the notions of relativity and space-time centuries before Einstein stumbled across them. Verse 9 states:

As a man in a boat going forward sees a stationary object moving backward, just so at Lanka a man sees the stationary asterisms moving backward (westward) in a straight line.[3]

Aryabhatt is clearly describing the effects of relativity: the scenery surrounding a moving boat seems stationary from the perspective of a spectator on the river bank, but seems to be moving backwards from the perspective of the passenger. We can see that relativity is not a modern concept, but dates back thousands of years.

So what are the implications of Maxwell’s theory, and what has this got to do with time? Take the equation:

Speed = Distance / Time

It was originally believed that time was the constant in this equation when concerning light: if one were to observe, at the same time, the light rays of the sun at different points on the earth, then as the distance varies, the speed too must vary. Maxwell, however, proved that the speed of light is constant, and therefore it is both time and distance that vary. ‘Length and duration have to become flexible… [I]n Einstein’s relativity all time became local time and all space became local space.’[4]

In 1907, Albert Einstein famously proposed the concept of space-time. Space and time were not two distinct variables, but intrinsically linked and dependent on each other in the field of space-time, much like the forces of electricity and magnetism. The flexibility of time and space can easily be explained by the notable ‘twin paradox’. Imagine a pair of twins born at the same time, who are then separated at birth. One is kept on Earth while the other is put on a rocket to a star thirty light years away, travelling at 99.9% of the speed of light. For the twin on Earth, sixty years pass until her sister returns; however, only three years pass on the rocket. While the sister on Earth measures a distance of thirty light years between her and her sister, the sister in the rocket would measure only 1.8 light years. ‘The twins do not share the same time and they do not share the same space.’[5]
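The rough numbers in the twin paradox can be checked with the standard time-dilation factor γ = 1/√(1 − v²/c²); the short sketch below is added here for illustration and is not part of the original article. It gives a travel time of a little under three years for the rocket twin, in line with the figure quoted above.

```python
import math

# Time dilation for the twin travelling at 99.9% of the speed of light.
v_over_c = 0.999
gamma = 1 / math.sqrt(1 - v_over_c ** 2)   # Lorentz factor, about 22.4

earth_years = 60.0                          # round trip as measured on Earth
rocket_years = earth_years / gamma          # proper time on the rocket

print(f"gamma = {gamma:.1f}, rocket time = {rocket_years:.1f} years")  # roughly 2.7 years
```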


The important thing to take from this thought experiment is that both twins are right according to their respective perspectives; they have both made accurate measurements. This is Einstein’s fundamental principle regarding time: there is no ‘right answer’, as there is no absolute conception of ‘now’ in absolute time and space. Einstein turned the whole concept of time on its head. The implications of his theory, that there is no absolute frame of reference, changed the way physicists looked at the world, forcing them to renounce their Newtonian beliefs. If there is no concept of ‘now’, then the actions you perform tomorrow, and the days after that, already exist. The future becomes predetermined.

The Immeasurable Container

In Book XI of his Confessions, Saint Augustine famously exclaims: ‘What then is time? If no one asks me, I know; if I wish to explain it to one that asketh, I know not: yet I say boldly that I know.’ The question has puzzled generations of philosophers from Aristotle to Leibniz. It is an inexplicable concept into which the deeper we delve, the less we seem to know. Aristotle argues that time does not exist independently of the events that exist within it, that time in its essence is defined by the relativism of temporal actions. It can therefore be assumed that concepts such as the unanimous ‘freezing’[6] of events while time passes cannot exist, as time itself arises from the change of events. Arguing epistemologically, since we have no record of such freezes, one must assume that they have not and cannot occur. Others, such as Plato, Newton and Leibniz, however, believe time to be ‘like an empty container into which things and events may be placed’; independent of the events that occur within it, and therefore ‘time freezes’ can occur. Platonism suggests that there are objects that exist neither in space nor time. Take for example the number 7. It is non-physical and exists independently of space, time and our thoughts (i.e. it is not just an idea in our heads), and thus a Platonist here would argue that the number seven is an abstract object, but an object nevertheless. Numbers are objects that exist, but not in space or time.


This further accentuates his argument that time itself is independent of events, if objects too can exist independently of time. Saint Augustine poses the question, ‘In what space then do we measure time passing?’ Focusing on the concepts of the past, present and future, he presents his case that, because time flies with such speed from the future to the past, there is no space for the present. He goes on to say that we thus cannot measure time, for the past does not spatially exist, as the events have already occurred, and neither does the future, as the events are yet to come. As we cannot measure something that does not exist, we surely cannot measure time. Saint Augustine begins to break down the boundaries that govern what constitutes the past, the present and even the future. Einstein too advocated the discontinuity of the past, present and future. Due to relativity (going back to the twin paradox), one person’s future will already be another person’s past if we observe them from an objective point of view, as both twins are travelling at different speeds (one in the rocket and one on Earth). Thus there is no objective past, present or future; they all overlap relative to one’s perspective. He once famously said, ‘People like us, who believe in physics, know that the distinction between past, present, and future is only a stubbornly persistent illusion.’ It can thus be argued that there is no reason for such concepts, if they have been proved to be merely figments of our imagination. Actions once completed belong to the realm of the past, but ‘I behold it in the present, because it is still in my memory’.[7] The action therefore exists in both the past and the present. An action that is yet to occur belongs to the realm of the future, but by thinking about said action, it now also exists in the present as well as the future. Thus, Saint Augustine breaks down the borders of temporal action, seemingly agreeing with the Platonist view of time: that it is an unchanging, immeasurable container in which events may occur.

The Double-Headed Arrow

Ludwig Boltzmann, through his equation S = k log W, explained the second law of thermodynamics using entropy. The second law of thermodynamics states that in a cyclical process the entropy will increase or remain the same. Thus entropy gives us information about the evolution of a system over time. Boltzmann suggests that things usually start with low entropy and move towards a state of high entropy, as the probability of them reaching this state increases with time. This picture is supported by the observation of the ‘expanding universe’. Cosmologists have observed the universe expanding at an increasing rate (the present expansion rate is around 67 kilometres per second per megaparsec), from which entropy can suggest that the universe originated as a highly dense and smoothly ordered system. Since entropy teaches us about the nature of a given system over time, it gives us the direction of ‘time’s arrow’. ‘If snapshots of a system at two different times shows one state which is more disordered, then it could be implied that this state came later in time.’[8]

This implies that time is infinite, carrying on forever as we ascend into higher and higher entropy. The arrow of time, however, does not account for the concepts of past and future. Although the equations work both ways, in order to make sense of things and to link the equations directly to our experiences, cosmologists have unanimously decided to make the arrow of time point towards our concept of the future. The further into the past we go, the greater the degree of order, until we reach a single point, the “singularity”, speculated to be the Big Bang: a dense region of space-time, the most highly ordered state ever to have existed in the universe, from which time originated.

The Entangled Cup of Correlations

The phenomenon of quantum entanglement (‘spooky action at a distance’, as Einstein called it) lies at the heart of quantum mechanics and modern quantum computing, and Dr Seth Lloyd, a professor at MIT, believes it may just solve the aforementioned puzzle of ‘time’s arrow’. The theory of quantum entanglement states that a particle, untouched, is in a ‘pure state’. When two particles interact, however, their states change, and they ‘become entangled components of a more complicated probability distribution that describes both particles together’.[9] The states of the two particles remain forever entangled: they could travel light years apart, but their spins would always remain correlated. It is this correlation that Dr Lloyd uses to help explain the arrow of time. Take a cup of coffee, for example. As time passes, the coffee particles and the surrounding air particles interact and their correlation increases, until eventually they reach an equilibrium and the coffee cools to room temperature. Lloyd spent years observing the entanglement of particles and their correlations in binary form – a ‘1’ for anti-clockwise spin and a ‘0’ for clockwise spin, for example – and discovered that these particles, as they interacted, began to lose their own particular identities and became pieces in a giant quantum-correlation jigsaw, where eventually the correlations hold all the information and the individual particles hold none. It is at this point that the states of the particles stop changing and the coffee has cooled to room temperature. ‘Lloyd realised “The arrow of time is an arrow of increasing correlations.”’[10] Boltzmann and the giants of 19th-century thermodynamics believed the arrow of time followed the increase in entropy. Sandu Popescu and Lloyd, however, along with many other physicists, now believe that although from a closer perspective the entropy of the universe is evolving, as a whole the universe remains at equilibrium. It is instead quantum probability that drives the arrow of time. While the laws of thermodynamics could be interpreted to show that the cup of coffee can theoretically start to warm up by itself (as there is no fixed direction of time), quantum entanglement shows that the probability of this happening is vanishingly small. As things begin to correlate we know that time is moving in that direction, and as particles can never untangle, we know that time itself cannot reverse.



Philosophically speaking, quantum entanglement can explain the concept of the present, the elusive ‘now’, and explain why we only seem to remember the past and not the future. Our ability to remember only the past is another confounding manifestation of time’s arrow, and can be seen as a build-up of correlations between particles. As you read something off a piece of paper, the photons become correlated with your brain as they reach your eyes, and you remember it. It is from this moment, from the initial correlation, that your memory begins. As Lloyd put it, ‘The present can be defined by the process of becoming correlated with our surroundings.’

Although it helps to define the arrow of time and gives a clear view of what the present is, quantum entanglement still leaves much unexplained. While it gives time a direction through probability, it has nothing to say about concepts such as the Big Bang or the Big Crunch, or about why we perceive time to be continually flowing. As Popescu puts it, despite his years of research in this field, time is ‘one of the greatest unknowns in physics’.


Time in the Eternal Truths

Many of the philosophies originating in India touch upon the concept of time, a factor that affects the metaphysics of all other entities. The ubiquity of the temporal dimension of human experience can be expressed in the phrase: Na so’sti pratyayo loke yatra kālo na bhāsate – ‘There is no cognition in the world where time is not manifest.’ Upon diligent examination of the Vedas (known also as ‘the eternal truths’), it can be seen that Indian philosophies hold a ‘spectrum of views about time’.[11] There are six major orthodox schools of philosophy – Nyaya, Vaisheshika, Samkhya, Yoga, Mimamsa and Vedanta – and four major heterodox schools, i.e. Jain, Buddhist, Ajivika and Charvaka. A study of the notion of time in Indian philosophy is of particular interest in connection with the Nyaya-Vaisheshika schools, whose metaphysics and doctrines of creation and causality rely heavily on the theory of mahakala, or absolute time. They associate eight definitive characteristics with the concept of time. ‘For the Nyaya-Vaisesika philosophers, time is all-pervasive (vibhu). It is an eternal category of existence (nitya padartha); that is to say, it is without beginning (anadi) and without end (ananta), it is uncomposite (niramsa), does not presuppose any substratum (anasrita), it is an independent real (svatantra) and unchanging (niskriya).’[12] Here, time is described as an entity in itself. Unlike Boltzmann’s view of an arrow of time forever moving forward, the Nyaya-Vaisheshika schools perceive time to be static and above all the actions that occur within it, more in line with the Platonist view and Einstein’s theory of space-time.

For the Samkhya-Yoga schools, ‘time, as a category of existence, does not figure in their list of tattva’. Time in Samkhya is implicit and indirect; one must simply assume that it exists, underlying all actions, as the matter is not explicitly addressed. The Yoga school, on the other hand, propounded that the idea of a unitary objective time either as a collection of moments or as an objective series is a subjective representation, devoid of reality.[13]


Much like Einstein’s space-time, Yoga explains time to be a collection of moments, an infinite number of ‘nows’. The non-dualistic tradition of Advaita Vedanta explains the concept of reality as ekamevadvitiyam – the one without a second. It refutes the Nyaya-Vaisheshika concept of the reality of time[14], believing time itself to be an illusion. Reinforced by the negative phrase neti neti (not this, not this), Advaita Vedanta defines the ultimate reality (Brahman) as untainted by time.

The Jain philosophical approach to time is an atomic one. This philosophy, believing firmly in the reality of time, perceives it to be in the form of time atoms (kalanu) that are not only real and objective, but, like the Nyaya-Vaisheshika view, are also anadi and ananta, without beginning or end. The atoms of space and the atoms of time are believed to be distinguishable: although particles of space (i.e. matter) can coalesce to form larger particles, particles of time remain singular and cannot combine to form some sort of hybrid time. The more intriguing take on time by Indian philosophies is that of cyclical time. While Judeo-Christian views are those of linear time, Greco-Indian traditions take a more cyclical approach. As implied by the concept of a circle, a future event also becomes a past event and a past event also becomes a future event, thus destroying the whole concept of distinct divisions between the past, present and future. Some may argue that this view of time makes history meaningless, that the actions that have occurred will eventually repeat themselves and there is therefore nothing special about the deeds committed in the ‘past’. The Greek concept of cyclical time is interpreted through ‘Eniautos’ (Great Year), which occurs when the cosmos completes one cycle. The Indian approaches to this are slightly different, but they too all commonly hold their roots in cosmic cycles. For example, it is stated in the Vayu Puran that a ‘world cycle’ is a day of Brahma, and the cosmic dissolution (pralay) is his night. The world cycle is then divided into smaller units ranging from manvantaras to mahayugas to yugas. There are believed to be four yugas that continuously repeat themselves. These are: Satyuga, Dwaparyuga, Tretayuga and Kaliyuga. Each world cycle is said to last millions of human years. It is said that one lav[15] of Brahma is equal to six hundred and sixty-six years and eight months; 216,000 such lavs comprise one ghadi, and ‘thirty such ghadis make a day of Brahmā, which is the equivalent of our 4,320,000,000 years.’[16]


The Indian philosophies provide us with a different perspective on time. While western scientists such as Newton, Einstein and Boltzmann have typically viewed time as linear, it is notable that Indian philosophies provide us with the unique perspective of cyclical time. While quite unlike what modern scientists have speculated, it should be regarded as equally plausible. The problem arises in the sheer variety of theories: while Nyaya-Vaisheshika believes time to be an ‘eternal category of existence’, Advaita Vedanta perceives it as an illusion. The Jain philosophical approach regards time as composed of time atoms, while the Purans see it as arising from cosmic cycles. And this is merely the tip of the iceberg; there is still a vast array of interpretations regarding time interwoven into the fabric of Indian philosophy. Thus a study of these philosophies, although perhaps leaving us wiser than we initially were, does not provide us with a definitive answer.

Concluding Remarks

It appears that there is a wide variety of theories on the concept of time, each with equal gravitas. Before Einstein, Newton’s theory of absolute time was the most widely accepted, but now space-time seems to have taken the limelight. Both Boltzmann and Lloyd, backed with their own equations and proofs, seek to understand time’s arrow, one through entropy and the other through entanglement. While St Augustine and Plato assume time to be linear, the Purans take a unique approach and declare it to be cyclical. The enigma of time has perplexed the greatest physicists and philosophers for centuries, and each, in their own right, holds their own distinct (and sometimes contradictory) views. The field of physics is moving in a new direction, with theoretical physicists meticulously attempting to find the glorious ‘unified theory of everything’, the theory that will hold the key to the universe and explain so many of its enigmas. Many speculate that the key lies in the theory of time, and that once that is solved, everything else will fall into place. But where does one begin? As we can see from this paper, the sheer variety of theories suggests that the possibilities are limitless. There are so many perceptions of time that, despite advances in both the fields of physics and philosophy, the theories keep expanding. Which one holds the truth?

References and Footnotes

1. Newton, Isaac. Philosophiæ Naturalis Principia Mathematica. Translated by Andrew Motte. Edited by Florian Cajori. Berkeley, CA: University of California Press, 1934: 6-12.
2. Hawking, Stephen. A Brief History of Time. New York: Bantam Books, 1988: 20.
3. Clark, Walter Eugene. Aryabhatiya of Aryabhata. New York: University of Chicago Press, 1930: 64-66.
4. Frank, Adam. About Time: From Sun Dials to Quantum Clocks, How the Cosmos Shapes Our Lives - and How We Shape the Cosmos. London: Oneworld, 2012: 134.
5. Ibid.
6. Here ‘time freezes’ or ‘freezing’ of events refers to the situation where all events are completely frozen while time continues to move forward.
7. Augustine. The Confessions of Saint Augustine. Translated by E. B. Pusey. Project Gutenberg, 2002.
8. “Second Law of Thermodynamics.” HyperPhysics. Accessed June 28, 2017. http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/seclaw.html.
9. Wolchover, Natalie. “Time’s Arrow Traced to Quantum Source.” Quanta Magazine, April 16, 2014.
10. Ibid.
11. Balslev, Anindita Niyogi. “An Over-all View of the Problem of Time in Indian Philosophy.” Indologica Taurinensia 12 (1984): 39-48.
12. Ibid.
13. Ibid.
14. For example in Cituksha’s Tattvapradipika (Benares, 1974) and Sri Harsa’s Khandanakhandakhadya (Chowkhamba Sanskrit Series, 1970).
15. Approximately 1/150th of a second.
16. Khagol, Bhugol. June 2010.

BIOGRAPHY

Nirali Patel, 17, UK

Nirali is a keen physicist and aspires to study Physics and Philosophy at university, and hopefully obtain a PhD in the field too! She has just finished her AS Levels in Physics, Maths, Further Maths and Classical Civilisation. She is particularly passionate about Theoretical Physics and cannot wait to see which theories will unlock the future.



THE GRAND UNIFICATION / REVIEW

The Grand Unification

Adam Coxson (19) explores whether the four fundamental forces that govern our universe can be unified under a single theory.

Introduction

The first universal law was developed in the late 1600s. Newton’s law of gravitation, published in 1687, states that every object in the universe attracts every other object with a force proportional to the product of their masses and inversely proportional to the square of the distance between them. This made a great impression upon the intellectuals of the era, as it showed for the first time that a feature of the entire universe could be accurately modelled mathematically. In the 1900s the theory of gravity was expanded to an even greater universal scale, when Einstein established his theory of General Relativity. Since then, other theories have been developed for the fundamental forces of nature. This includes theories for electricity and magnetism that were formulated over the 18th and 19th centuries and unified in the latter half of that period by James Clerk Maxwell.
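As a concrete illustration of the inverse-square law just described (this worked example is an addition, not part of the original article), the sketch below evaluates F = Gm₁m₂/r² for the Earth-Moon pair using commonly quoted values.

```python
# Newton's law of gravitation, F = G * m1 * m2 / r**2, for the Earth-Moon system.
G = 6.674e-11          # gravitational constant, N m^2 kg^-2
m_earth = 5.97e24      # kg
m_moon = 7.35e22       # kg
r = 3.84e8             # mean Earth-Moon distance, m

force = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force ~ {force:.2e} N")   # about 2e20 N
```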

Simulated Large Hadron Collider CMS particle detector data depicting a Higgs boson produced by colliding protons decaying into hadrons and electrons. [Source: Wikipedia]

A few decades ago, electromagnetism was discovered to be unified with the weak nuclear force, at energy levels so high that they can only be achieved in particle accelerators. It is believed that the strong nuclear force can also be united with the electroweak theory, provided experiments on Earth can reach the extreme energy levels necessary. This combination of the strong nuclear, weak nuclear and electromagnetic forces is known as the grand unification. Some theories of everything go even further and bring gravity in as well. All these theories and their subsequent unifications indicate that perhaps there is a model that intricately describes and explains the whole universe with only a few mathematical laws.[1] Finding a complete form of this model has been, and still is, the driving force for many physicists, theoretical and experimental alike; the huge infrastructure and technology present at the CERN


laboratory near Geneva is a testament to the significance of achieving this goal. From historic experiments and their evidence, it is reasonable to conclude that there will be future discoveries. These discoveries will bring greater unification and ultimately lead to a theory of everything. How did the above scientific milestones develop? Will the theories of modern physics eventually lead to a grand unification and the even greater theory of everything?[2]

Electricity and Magnetism: One and the Same

Since Greek times there has been an awareness of two main materials that exhibit electrical or magnetic properties. Amber, a resin from a type of tree, was known for its ability to attract an assortment of light objects, such as fur or straw, placed close to it. The other material was lodestone, a naturally magnetised piece of iron ore (also called magnetite), well known for its power to attract iron. Other than this, there was not any significant advancement in the study of these materials and their effects until the middle of the 1500s, when a number of natural philosophers (the equivalent of modern-day scientists) started to document the interactions of the materials. The first ideas of conductors, insulators and charge were born from this period of experimentation. However, electricity and magnetism were still two very different fields of study and there was no indication of them being linked.[3]

Our modern-day theories of electromagnetism were developed from the late 1700s to the late 1800s. However, it wasn’t really until the end of the 1700s that the study of electricity and magnetism entered a new age, due to the invention of devices that could produce continuous electrical currents. Hans Christian Orsted, a professor in Copenhagen, was the first to realise there was a connection between electricity and magnetism, while preparing to give a university lecture. He noticed that a wire carrying current from a battery he was using deflected a nearby compass needle. Through this he had shown that if a compass needle was placed next to and aligned parallel to a wire, then when an electric current was running through the wire, the compass would align itself at a right angle to that wire. So evidently, an electric current flowing through a wire induces a magnetic field at right angles to the direction of the current in the wire. An electrical current is in most cases a flow of electrons, so moving electrical charge produces the effect of magnetism.[4] After further experiments, a clear relationship emerged: the strength of the magnetic field decreases inversely as the distance from the wire increases.


Hence, doubling the distance would halve the field strength, and so forth. From the work of professors like Orsted, people realised that moving electrical charges could induce magnetism. Some also wondered about the opposite: whether a moving magnet could cause an electrical current. This group included the British scientist Michael Faraday, and in 1831 he discovered, and subsequently demonstrated, that a changing magnetic field does indeed produce a flow of electric current in conductors. A magnet moving through a coil of wire causes the magnet’s field lines to be cut by the wire at right angles; over time this produces an electric field, which in turn causes a current in the wire. What is most impressive about Faraday is that he didn’t have a formal education in science. He grew up in a poor family, taught himself science from books he read in the shop where he worked, and later became a laboratory assistant to the chemist Sir Humphry Davy. As such, Faraday lacked any comprehension of advanced mathematics and continued to struggle with it later in life, which is one of his most striking attributes. This should have made it difficult for him to come up with a theoretical picture of the electromagnetic phenomena he observed during his experiments. Regardless, he did, and this is where the idea of force fields and field lines stems from. Faraday imagined thin tubes occupying the space between magnets and electrical charges that did the pushing and pulling observed in induction.
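The inverse-distance relationship measured for a current-carrying wire corresponds to the standard expression B = μ₀I/(2πr) for a long straight wire; this formula is not quoted in the article, and the sketch below is included only to show the field halving when the distance doubles.

```python
import math

# Magnetic field of a long straight wire, B = mu0 * I / (2 * pi * r):
# doubling the distance halves the field strength.
MU_0 = 4 * math.pi * 1e-7    # permeability of free space, T m / A

def field_tesla(current_amps, distance_m):
    return MU_0 * current_amps / (2 * math.pi * distance_m)

print(field_tesla(1.0, 0.01))   # 2e-5 T at 1 cm from a 1 A wire
print(field_tesla(1.0, 0.02))   # 1e-5 T at 2 cm -- half as strong
```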

Iron filings are magnetic and are forced to lie along the magnetic field lines emanating from the magnet. [Source: Physics Stack Exchange]

A typical demonstration of these ‘tubes’ involves a magnet and iron filings. Provided there is some help to overcome friction, the iron filings move, due to their magnetism, to align themselves along the magnetic field lines. Years later, in 1845, Faraday showed that light is also associated with electricity and magnetism: intense magnetic fields were shown to affect polarised light (an effect now named after him), hinting that light is an electromagnetic wave. This was the beginning of the unification of electromagnetism. It was far from refined, however, with 11 different theories existing for it, each flawed in some way. Then, throughout the 1860s, the Scottish physicist James Clerk Maxwell formulated Faraday's thinking about field lines into a mathematical model. It accurately captured the relationship between electricity, magnetism and light that had previously mystified other physicists. Both electric and magnetic forces were now defined by a single set of equations, implying that they are different forms of the same thing: the electromagnetic field. Thus Maxwell had unified magnetism and electricity into one force. Even more significantly, he explained how electric and magnetic fields could travel through space together in the form of an electromagnetic wave, whose speed was governed by constants that appeared in his equations. To Maxwell's surprise, the speed he calculated was equal to the speed of light, which at the time was known experimentally to an accuracy of about 1%. This implied that light itself is in fact an electromagnetic wave. The importance of Faraday's and Maxwell's work is often taken for granted in the everyday bustle of modern life. Maxwell's equations underlie the workings of everything from household appliances to computers, and they describe all the different wavelengths of light on the EM spectrum.[5]
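For illustration (the formula is not spelled out in the original article), the wave speed that drops out of Maxwell's equations is fixed by two measurable constants of free space, the permeability \mu_0 and the permittivity \varepsilon_0:

c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.0 \times 10^{8}\ \mathrm{m\,s^{-1}},

which matches the measured speed of light, just as Maxwell found.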

The Electroweak Force: Unification at Higher Energies

After its successful unification, electromagnetism became known as a single fundamental force. This left a total of four fundamental forces governing the entire universe: the electromagnetic, gravitational, strong nuclear and weak nuclear forces. The search for even greater unification, by identifying similarities between these four forces, continued, and in the 1960s a promising chance to unify the electromagnetic and weak nuclear forces emerged. The weak force is responsible for particle interactions in which particles change from one form to another through processes such as decay, that is, when high energy particles decay into multiple lower energy ones. The weak force acts over an extremely small range of about 10⁻¹⁸ metres, roughly 0.1% of the diameter of a proton. Its relative strength is also tiny: it is weaker than the strong force by a factor of about 10⁻⁶ (compared with electromagnetism, whose strength is about 1/137 that of the strong force). At the energy levels of particle interactions we would experience on Earth, the differences between electromagnetism and the weak force are very pronounced. The weak force can change particles into others, as when a neutron decays into a proton, an electron and an antineutrino, whereas the electromagnetic force cannot. Furthermore, the weak force has a very short range, as mentioned above, whereas the electromagnetic force acts over an infinite range, albeit dying off to negligible values at large distances. In the 1960s, theoretical physicists proposed that the weak and electromagnetic forces were actually two parts of a more complete theory. The two main theorists were the American Steven Weinberg and the Pakistani Abdus Salam, who came up with the idea independently of each other. It was also suggested that the strength of the weak force would increase as the energy of the particles involved in an interaction increased. At energy levels approximately 100 times a proton's rest energy, the weak force becomes similar to the electromagnetic one.



This energy is around 10,000 times bigger than the energy seen in beta decay interactions. The reason the electroweak unification depends on energy is that the weak force is short ranged. The electromagnetic force and gravity both act over infinite range; they decrease in strength, but only by following an inverse square law. The strong and weak nuclear forces are short ranged and consequently fall off sharply (exponentially) outside their respective ranges. Plans were put in place to test the predictions of the electroweak theories that had surfaced, but it was not until the 1980s that high enough energies were achieved. Using the Super Proton Synchrotron at CERN, a high energy beam of antiprotons was collided with a beam of protons travelling in the opposite direction. The mutual annihilation of the protons and antiprotons released so much energy that new particles were produced for the first time: the W and Z bosons.[6] These particles were direct evidence supporting the electroweak theory that had predicted them. The W and Z bosons are the force carrier particles of the weak nuclear force and have masses of roughly 86 and 97 times that of a proton, respectively. For some of the lighter particle interactions (such as the decay of a muon to an electron), a carrier particle as massive as the W or Z simply is not feasible: mass would have to be produced out of nowhere, since there was not enough to start with in the incident particle. Counterintuitively, this is possible according to quantum mechanics and Heisenberg's uncertainty principle: the extra mass can exist as long as it is only present for an extremely short time. This means that the bosons only have a short time to travel before decaying, which explains why the weak force has a short range. It turns out that the weak force is intrinsically about as strong as the electromagnetic force; it only appears ‘weak’ because it has such a short range. This obstacle of limited distance is overcome at high enough energy levels, and under those conditions the electromagnetic and weak forces are unified as two different aspects of the same thing.[7]
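As a rough back-of-the-envelope illustration of this argument (added here; it is not part of the original article), the uncertainty principle limits how long the 'borrowed' mass of a W boson can exist, and hence how far the weak force can reach:

\Delta t \sim \frac{\hbar}{m_W c^{2}}, \qquad \text{range} \sim c\,\Delta t = \frac{\hbar c}{m_W c^{2}} \approx \frac{197\ \mathrm{MeV\,fm}}{8 \times 10^{4}\ \mathrm{MeV}} \approx 2.5 \times 10^{-3}\ \mathrm{fm} \approx 2 \times 10^{-18}\ \mathrm{m},

in agreement with the range quoted above.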

A Grand Unification and Theory of Everything: Is it Possible?

As with the electroweak unification, it is thought that even higher energies may enable the strong force to be added into the theory. As yet this is not confirmed, but more is being understood as technology improves and particle accelerators reach higher and higher energy levels. In fact, it was recently discovered that, for a very short time, quarks can exist in a pentaquark structure. Quarks are the fundamental particles that make up baryons (which have a three-quark or three-antiquark structure, e.g. protons and neutrons) and mesons (made from a quark and an antiquark, e.g. kaons and pions). The strong force is the interaction responsible for this short range but high strength binding of quarks. It was previously believed that quarks could not be isolated as single lone quarks or in other forms; they were confined to their trios in baryons or the pairs that make up mesons.


The reason for this is that tremendous amounts of energy need to be put into a proton to separate its quarks. However, so much energy is present that it is sufficient for pair production of a quark and an antiquark, in accordance with Einstein's mass–energy equivalence. This simply means that energy can be converted into mass, which is why very high energy collisions can produce many particles. Consequently, quarks cannot be isolated: the high energy supplied simply becomes new particles and so does not contribute towards splitting the quark trios. Over the next few decades many more discoveries are likely to be made. Perhaps accelerators will soon operate at the energy levels required to provide evidence unifying the strong force with the electromagnetic and weak forces. The incorporation of the strong force into electroweak theory is the Grand Unification. Such a theory would be able to describe nearly everything that happens on the very small quantum scale. That would then leave the theories of everything with the much more challenging force to integrate: gravity.
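As a simple worked example of mass–energy equivalence (added for illustration; the numbers are standard textbook values rather than figures from the original article), creating a proton–antiproton pair in a collision requires at least twice the proton's rest energy:

E_{\min} = 2 m_p c^{2} \approx 2 \times 938\ \mathrm{MeV} \approx 1.9\ \mathrm{GeV}.

Any collision energy above this threshold can materialise as new particle pairs instead of going into separating the original quarks.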

[Source: ParticleAdventure.org]

Throughout the 1900s, physicists experimented and theorised to create many of the great theories we have now. These can all be categorised into two main areas. The first is Einstein's theories of general and special relativity, which describe the massive objects in our universe, such as stars and galaxies, and the force of gravity, which shapes nearly all of the cosmos. Then, at the subatomic scale, is quantum mechanics, which applies to atoms, their constituents and all the forces responsible for the interactions between them. These two theories are incompatible: quantum physics cannot be used to explain certain processes in the cosmos, such as some of the mysteries surrounding black holes, and likewise Einstein's description of gravity does not conform to the rules of quantum mechanics that govern particles. In the quantum realm, gravity acting on particles is so weak and negligible that it is not even considered. A theory that attempts to reconcile Einstein's theory of gravity with the grand unification of the other three fundamental forces is what is known as M-theory.[8]



Simple diagram portraying the four fundamental forces [Source: UCL]

M-theory is rather ambitious, and it is not a single theory: it is an umbrella term for many different theories that share particular principles and ideas. The three main ideas utilised by M-theory are extra dimensions, supersymmetry (SUSY) and superstrings or membranes. Extra dimensions were introduced in the 1920s, when the German physicist Theodor Kaluza merged Einstein's gravity with Maxwell's electromagnetism. Kaluza recast Einstein's gravity in five dimensions, giving the gravitational field extra components that could be interpreted as the electromagnetic field. Surprisingly, these were shown to match Maxwell's equations exactly, provided you accept a fifth dimension. This is somewhat confusing, as it is evident there are only four dimensions: three of space and one of time. In 1926 the Swede Oskar Klein came up with a solution by supposing that the fifth dimension is so small that it cannot be seen. A good analogy is an ant on a straight piece of string. The ant can walk up and down the string and can also go around its circumference, onto the underside and back to the top again. A human observer looking down from above would see only a straight, one-dimensional line, unaware of the second dimension the ant can traverse. Klein proposed that the fifth dimension is much like this hidden one in the analogy. From this the Kaluza-Klein theory was born, but it was not considered significant in its time and was only rejuvenated many years later by the arrival of supersymmetry (SUSY). SUSY is a take on the current Standard Model that predicts a whole new set of particles. The particles that make up matter are the hadrons, which consist of quarks, and the leptons, which are fundamental particles such as electrons and muons. The particles that transmit force are known as carrier particles; these are the bosons, such as the W and Z bosons and virtual photons. These particle types are very different from each other, so it was astonishing to theorists in the 1970s when it was shown that it was possible to form equations that were unchanged when the particle types were swapped: in SUSY, hadrons and leptons become the force carriers whereas bosons become the matter particles. This implies there is a new type of symmetry in nature, hence the name supersymmetry. Supersymmetry predicts that every particle has a supersymmetric partner particle.

These partner particles are thought to exist only at incredibly high energy levels and, as a result, particle accelerators have not yet found any direct evidence of them.[9] Despite this, SUSY is still very widely used in M-theory, because its mathematics gives a relation between quantum particles and spacetime, thus enabling gravity to be associated with it too. This amalgamation of SUSY and gravity is known as supergravity. It too has its own problems, namely the eleven dimensions of spacetime that it predicted. So a contrasting approach, superstring theory, was turned to for more definite answers. In superstring theory, the fundamental particles of matter are not treated as point masses; instead they are thought of as one-dimensional strings that exist in not eleven but ten spacetime dimensions. Just like an instrument's string, they can oscillate in various modes and at various frequencies corresponding to the particle they represent. To begin with it was extremely promising, because it avoided the problems of eleven-dimensional supergravity: the six extra dimensions could be curled up into hidden dimensions in the same way Klein suggested in 1926. But, like the other components of M-theory, theorists had their doubts, as superstring theory presented five different mathematical approaches that all worked equally well and thus all competed to be the accepted version. The current M-theory is a synthesis of all the effort put into the supergravity and superstring theories. It was formulated in 1995 by Edward Witten, a string theorist from the Institute for Advanced Study in Princeton, USA, who says that the M stands for membrane. Witten showed that the theories involved were not competing but were in fact different facets of M-theory; thus the five different string theories and eleven-dimensional supergravity were put under one distinct theory. It is important to remember that the mathematics of any single theory under the M-theory umbrella cannot explain all of the universal laws: it only succeeds in a particular area. If two theories can be used to explain the same phenomena at a certain point, then the theories overlap, and if their ranges overlap they can be said to agree and hence to be parts of the same theory. This overlapping is essentially what M-theory is: a network of the many theories of everything, connected where certain parts of the individual theories correspond.[10]

Conclusion

The Grand Unification of the strong force with the electroweak force is the next step on the way to the more complex theories of everything, M-theory in particular. These ultimately aim to identify the set of fundamental laws that governs the whole universe. Such laws must be able to explain every possible process, from those on the microscopic quantum scale to the largest of black holes. This has not yet been achieved, and there is speculation over whether an all-unifying theory could actually exist. The evidence gives the impression that this goal is indeed possible: many separate theories have been unified in the past. Maxwell's electromagnetism was the first significant advancement, and only a century later the electroweak theory was proposed and confirmed with evidence from particle accelerators.



The track record illustrates that, as technology improves and more discoveries are made, science will come closer and closer to a unified theory. This may come from refining the current theories or from new, revolutionary ideas that remove the flaws in prior ones. On the contrary, it may be impossible. No matter how mathematically elegant and true to reality a theory may be, it remains a theory: a model built from what we perceive with human senses and process with human reasoning. Everything around us is interpreted by our brain, which then builds our model of reality. What this suggests is that any model describing the entire universe will only ever be a near-perfect approximation at best.[11] Similarly, there is a limit to what can be measured by technology, no matter how advanced it becomes: technology requires humans to manufacture it, and it needs human operators who provide the input data and review the output results in one form or another.

[Source: Symmetry Magazine]

M-theory's purpose is eventually to provide that model, or theory, defining the universal laws by considering all the relevant theories and observational evidence from the past, present and future; this makes M-theory the main candidate for producing the theory of everything. The progress and relative success of M-theory has, however, resulted in a dilemma. Many years ago, Newton showed that mathematical equations could give surprisingly precise predictions for the interactions between objects. From this, many scientists supposed that with enough mathematical processing the entire past, present and future of the universe could be determined with the one ultimate theory. Then the hurdles of quantum chromodynamics, quantum uncertainty, curved spacetime, tiny vibrating strings and extra dimensions announced themselves.

The effort of generations of physicists has resulted in some 10⁵⁰⁰ different possible universes, each with its own set of laws, of which only one represents our actual universe.[12][13][14][15]

Perhaps a consequence of this is that the idea of finding a single theory to explain everything has to be abandoned. If not, then the problem still remains. M-theory allows for many different possibilities, so how can it be narrowed down to the model that best suits our universe?

References

All online resources last accessed: 14/01/2017
1. Hawking, Stephen, and Mlodinow, Leonard. The Grand Design. Bantam Press, 2010, page 111.
2. CERN, Unified Forces (https://home.cern/about/physics/unified-forces).
3. Hall, Graham. Maxwell's electromagnetic theory and special relativity (Royal Society, 2008), Section 2: Early days (http://rsta.royalsocietypublishing.org/content/366/1871/1849).
4. Taylor, John Clayton. Hidden Unity in Nature's Laws. Cambridge: Cambridge University Press, 2001, page 80.
5. Hawking, Stephen, and Mlodinow, Leonard. The Grand Design. Bantam Press, 2010, pages 114-117.
6. CERN, The Z Boson (https://home.cern/about/physics/z-boson).
7. Taylor, John Clayton. Hidden Unity in Nature's Laws. Cambridge: Cambridge University Press, 2001, pages 339-346.
8. Duff, Michael. "Theory of everything: The big questions in physics." New Scientist, June 1, 2011.
9. Duff, Michael. "Theory of everything: The road to unification." New Scientist, June 1, 2011.
10. Duff, Michael. "Theory of everything: Have we now got one?" New Scientist, June 1, 2011.
11. Hawking, Stephen, and Mlodinow, Leonard. The Grand Design. Bantam Press, 2010, pages 57-63.
12. Hawking, Stephen, and Mlodinow, Leonard. The Grand Design. Bantam Press, 2010, page 159-142.
13. Close, Frank. Particle Physics: A Very Short Introduction. Oxford: Oxford University Press, 2004.
14. Maxwell, James Clerk. "A dynamical theory of the electromagnetic field." 1865. doi:10.5479/sil.423156.39088007130693.
15. Francis, Matthew R. "A GUT feeling about physics." Symmetry Magazine, April 28, 2016.

Image Sources

Wikipedia (https://en.wikipedia.org/wiki/File:CMS Higgs event.jpg)
Physics Stack Exchange (http://physics.stackexchange.com/questions/41025/why-does-a-magnetic-field-generate-clearly-visible-separation)
UCL (https://www.hep.ucl.ac.uk/undergradprojects/3rdyear/EWuni/webpage/middleframe/electroweakunification.htm)
Symmetry Magazine (http://www.symmetrymagazine.org/article/a-gut-feeling-about-physics)

BIOGRAPHY

Adam Coxson, 19, UK

Adam is an A-level student from Birmingham, UK. He has always had a passion for science, specifically physics. His future aspirations involve a physics degree at university, after which he plans to continue to a career in particle physics and cosmology.




Potential Uses and Benefits of Hillwalking as Cardiovascular Exercise

Andrew Wang (17) establishes a causal link between slope gradient and heart rate during hillwalking and explores its uses in fitness trackers.

Abstract

The hills are often frequented by athletes of all kinds, as it is believed that walking up a steeper gradient makes your heart beat faster: a 24% gradient can increase your heart rate by 55% at a modest speed compared with walking on flat terrain.[1] The relationship makes sense because heart rate is a good measure of how hard your body is working during exercise.[2] This article presents the results of an experiment that seeks to verify and quantify the effects of the different factors involved in this relationship. In the experiment, the heart rates of hillwalkers were tested in varying conditions and it was concluded, with near certainty, that this cause-and-effect relationship exists, leading to a whole range of possible applications. The link could, for example, be exploited to solve the problem whereby mountain and trail athletes (e.g. runners, cyclists and hikers) want to know their heart rate with a higher level of accuracy and fidelity than can be achieved at present by other means. Such useful bodily statistics can thus be deduced by simply measuring the gradient of a hill, which is easily achievable with GPS technology.

Introduction

An increased heart rate points to aerobic exercise, and countless studies have linked this to good health (e.g. a lower risk of coronary heart disease or stroke).[3] For example, regular moderate-intensity aerobic exercise can result in weight loss.[4] Good cardiovascular health also points to these benefits.[5] Heart rate is therefore something that one often wants to measure. For example, during hillwalking an athlete may want to know accurately the number of calories burnt, or whether they have reached their target heart rate, where the heart works best. Current smartphone tracker applications, which essentially guess using generalised algorithms,[6] are inaccurate, and physical heart rate monitors are cumbersome. Instead of looking at technological ways of measuring this data, other potential factors may be considered – notably the gradient, or slope, of the path or road. The research hypothesis was therefore that at any point during a journey (here hillwalking is considered), the body's heart rate is positively affected by the gradient of the slope of the path at the point of travel – that is, there is a direct causal link. If the hypothesis is accepted, one can quantitatively measure the health benefits of hillwalking. Exploiting the direct link between heart rate and gradient can help a number of future developments. For example, a bioengineering-inspired approach could use the findings to create better algorithms in mobile fitness trackers, showing users more accurate information about their bodies, such as calories burnt or instantaneous heart rate.



In addition, civil engineers and contractors could determine the gradients of a particular exercise site to suit the desired heart rates of the different people walking up and down it, in order to optimise their health benefits. A future treadmill manufacturer could tailor its programme to create the most effective workout for any user by adjusting the gradient so as to reach the so-called target heart rate, where the heart works optimally without straining.

Literature Review

It is a widely stated online claim that a steeper gradient causes a higher heart rate,[7] although such literature lacks causal analysis. In addition, papers on the topic have tended to focus on those whose bodies would be the least fit, including older or overweight people,[8] as they are at the greatest risk of diseases and conditions such as coronary heart disease or diabetes; these research studies may not be relevant to most people.[9] Furthermore, these investigations tend to focus on biological effects on the heart rather than their implications, which are arguably more important for achieving the functionality of such studies. This study carries out a causal analysis and proposes methods of exploiting the results to practically maximise the relevant health benefits.

Method

This research was done on a group of six relatively physically fit late teenagers, a period in which the body generally approaches peak performance[10] and in which other environmental or hormonal factors such as puberty, pregnancy or smoking are less of an issue. This also makes the research relevant to the readers of this journal. In the experiment, each member took their own heart rate twenty-one times over the course of a 4-day hiking expedition, on mountain slopes of varying gradients. The data sets of all 6 members' heart rates, and the group mean heart rate, against gradient were analysed for a positive linear correlation, and further tests were then undertaken to investigate a causal link. Due to external restrictions, any slope (for example a hill road or track) was modelled as a triangle; each measurement point's grid reference was noted and the average gradient was taken from the contour density on the path, calculated as (ascent or descent ÷ distance) × 100, where the distance walked during measurement was found using online OS mapping software. The heart rate readings were taken without standing still, by counting the number of beats at the chest or wrist for 30 seconds and then multiplying by 2 to obtain a beats-per-minute (bpm) value. Each day, the readings were taken over the course of the entire day.
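As a quick worked example (the numbers here are invented for illustration and are not from the study's data): a measured ascent of 120 m over a walked distance of 800 m gives a gradient of (120 ÷ 800) × 100 = 15%.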




Other factors, such as weather and path surface descriptors, were also recorded at each location across a variety of conditions to ensure that the experiment was randomised. Nevertheless, these factors were still statistically tested. Analysis of the causal relationship used the following minimum “criteria”, adapted from a widely used system designed by the epidemiologist Austin Bradford Hill[11]:

• Strength: the coefficients of correlation and determination are high and significant;
• Consistency: the result is the same when measured by different people or in different places;
• Specificity: there is no other likely factor or explanation;
• Temporality: the effect occurs after the cause;
• Gradient: an increase in the cause leads to an increase in the effect, and vice versa;
• Plausibility: there is a plausible mechanism between cause and effect;
• Coherence: the relationship is compatible with relevant theory;
• Analogy: relationships between similar causes and effects exist.

These criteria are discussed in the analysis section of the article. Since all measurements were solely human- or computer-based and not instrument-derived, any inevitable error would be small, meaning that the measurements are considered accurate. The experiment is very repeatable, and the analysis was based on the mean of the 6 individuals to ensure reliability.

Analysis†

Here, each of Hill's causation criteria is assessed based on the analyses of the collated data.

Strength
The correlation of mean heart rates to gradients is strongly statistically significant (P<0.0005 when r=0.86 from Table 1); this is shown in the clear linear relationship on the graph (Figure 1). Similarly, the coefficient of determination is high (R²=0.745) and there are no excessive outliers, ensuring the correlation's strength. From Figure 1, the gradients and heart rates can be assumed to follow the bivariate normal distribution.

Consistency
The same result was consistently shown across all 6 individuals of the group (where P<0.0005 for all members, except for one where P<0.001).

Gradient
It also follows that there is an increase in heart rate when the gradient increases, and vice versa.

Specificity
Any possible third causal factors were also tested. Heart rate data in different surface conditions (smooth, bumpy) and weather conditions (sunny, rainy, cloudy) were compared, and 2-sample independent unpaired t-tests were undertaken to check for a significant difference between the group means.



Table 1 (Above): Correlation analysis of heart rate group means vs. gradient of slopes. Raw data is shaded, and test statistics are coloured yellow. The residual is how far the heart rate predicted by the best fit line is from the actual value.
Table 2 (Left): Summary of heart rate residual means data grouped by weather / surface condition. Std. Dev. represents the sample standard deviation of the group.
Table 3 (Left): t-tests undertaken based on Table 2.

Figure 1 (Above): Graphical representation of means vs. gradients (the best fit line and its equation are shown).




Figure 2: Graph from sample calibration exercise.

The alternate hypothesis for each test was that d ≠ 0, where d is the difference between the means. As indicated by Table 3, none of the test statistics was significant (P>0.05); hence there is insufficient evidence to accept the alternate hypothesis or to suggest a difference in heart rates under different environmental conditions. A similar association test on the time of day was done, and the outcome was also negative (P>0.05 when rs=0.248 from Table 1). This suggests that there are no other likely factors contributing to the relationship; hence, the direct causal link is supported.

Temporality
Naturally, temporality is maintained, as the existence of a slope precedes the measurer.

Plausibility, Coherence and Analogy
The heart rate-gradient link is understandable and plausible from a physiological point of view, and is coherent with the findings of scientists and athletes alike (as discussed above). In addition, the effect is observable and analogous with any sample of people travelling up and down hills.
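The two statistical checks described above can be reproduced with a few lines of code. The sketch below uses Python with NumPy and SciPy and invented sample data (the study's real numbers live in Tables 1-3), so it illustrates the procedure rather than the published results.

import numpy as np
from scipy import stats

# Invented per-reading data: slope gradient (as a fraction), group-mean heart
# rate (bpm) and the weather recorded at each measurement point.
gradient   = np.array([0.02, 0.05, 0.08, 0.10, 0.14, 0.18, 0.20, 0.24])
heart_rate = np.array([106, 111, 117, 120, 128, 133, 136, 141])
weather    = np.array(["sunny", "rainy", "sunny", "cloudy", "rainy", "sunny", "cloudy", "rainy"])

# Strength: Pearson correlation between gradient and heart rate, with its p-value.
r, p = stats.pearsonr(gradient, heart_rate)
print(f"r = {r:.2f}, R^2 = {r**2:.2f}, P = {p:.4g}")

# Specificity: compare best-fit-line residuals between two weather groups with an
# independent two-sample t-test, as done for Tables 2 and 3.
slope, intercept = np.polyfit(gradient, heart_rate, 1)
residuals = heart_rate - (slope * gradient + intercept)
t, p_weather = stats.ttest_ind(residuals[weather == "sunny"], residuals[weather == "rainy"])
print(f"t = {t:.2f}, P = {p_weather:.3f}")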

Discussion

Since all of the Hill criteria for establishing a causal link between two variables were satisfied, as demonstrated in the previous section, the relationship is very likely (although not certainly) a direct cause-and-effect link rather than one that involves a third factor (i.e. weather, surface, time of day). The results therefore support the experiment's research hypothesis: that the body's heart rate is positively and directly affected by the gradient of the slope of the path at any point during hillwalking. One can hence conclude that mountain and hill exercising is beneficial for the body, as this heart rate increase will lead to better cardiovascular health (as mentioned previously), and that there are many practical uses of the result.

Application of Results

Having established the quantitative causal link between heart rate and steepness of slope, it is now possible to exploit the relationship to achieve some health-related practical benefits.


For instance, a hiker would like to be able to measure his heart rate during walking so that he is aware of how hard his heart is working, and whether it is working optimally. First, a calibration exercise is done, where he goes out and obtains a few measurements of his pulse on different gradients (see Figure 2). Since the relationship is linear, it can be expressed as the equation of the best fit (least squares regression) line:

y = Mx + C

where y is someone’s heart rate at slope gradient x, and M and C are ‘calibration constants’ unique to each person.

Using these data, a best fit line can be plotted and he can obtain his own calibration constants M and C using simple statistical techniques.[12] In this example, M = 147.03 and C = 104.59.

Now, on future walks, whenever the hiker wishes to measure his heart rate simply (or perhaps automatically), he can easily do so using the aforementioned formula. This can be done automatically in a smartphone application, where a GPS service measures the slope gradient. This way, his heart rate can be continuously logged throughout the exercise, which is visibly more faithful and personalised than a tracker application that guesses these statistics.
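A minimal sketch of this calibrate-then-predict idea is given below in Python. The gradient and heart rate values are hypothetical stand-ins for a real calibration walk, and the GPS-derived gradient is simply passed in as a number; this illustrates the procedure described above rather than any actual app.

import numpy as np

# Hypothetical calibration data: slope gradients (as fractions, 0.10 = 10%)
# and the heart rates (bpm) measured on them during a calibration walk.
gradients   = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.24])
heart_rates = np.array([104, 113, 119, 127, 134, 140])

# Least-squares fit of heart_rate = M * gradient + C gives the personal
# calibration constants M and C.
M, C = np.polyfit(gradients, heart_rates, 1)
print(f"Calibration constants: M = {M:.2f}, C = {C:.2f}")

def estimate_heart_rate(gradient):
    """Estimate heart rate (bpm) from a slope gradient, e.g. one measured by GPS."""
    return M * gradient + C

# On a later walk, a GPS-derived gradient of 12% gives an estimated heart rate:
print(f"Estimated heart rate on a 12% slope: {estimate_heart_rate(0.12):.0f} bpm")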

Conclusion

Heart rate measurements of six teenage hillwalkers on different hill gradients were analysed to see if there was a direct causal relationship between heart rate and the gradient of the slope. Based on an assessment of the data against the criteria of a widely used system for establishing causation, it was found that the correlation was strong, and the effects of other likely factors (weather, path surface, and time of day) were insignificant. Thus, it was concluded that an increase in the gradient of the slope almost certainly causes an increase in heart rate, and vice versa. This not only strongly supports the conjecture, but provides a quantitative way of representing the relationship between heart rate and the gradient of a slope. Walking (and, by extension, exercising in general) on slopes increases a person's heart rate, leading to cardiovascular health benefits.


This result has many practical uses, for example to individually maximise workout intensity or to give a user more accurate, personal bodily statistics, using linear formulae incorporating unique calibration parameters. This also paves the way to autonomous body data collection, useful for the future of medical diagnosis and biomedical technology.

References

1. Kelliher, Steven. 2017. Heart Rate When Climbing a Hill. Accessed August 2, 2017. http://woman.thenest.com/heart-rate-climbing-hill-21574.html.
2. American Council on Exercise. 2017. Monitoring Exercise Intensity Using Heart Rate. Accessed August 3, 2017. https://www.acefitness.org/acefit/healthy_living_fit_facts_content.aspx?itemid=38.
3. Bo-Ae Lee, Deuk-Ja Oh. 2016. "The effects of long-term aerobic exercise on cardiac structure, stroke volume of the left ventricle, and cardiac output." Journal of Exercise Rehabilitation 12 (1): 37-41. doi:10.12965/jer.150261.
4. myDr. 2010. Aerobic exercise: the health benefits. January 11. Accessed August 16, 2017. http://www.mydr.com.au/sports-fitness/aerobic-exercise-the-health-benefits.
5. Myers, Jonathan. 2003. "Exercise and Cardiovascular Health." Circulation 107 (1). doi:10.1161/01.CIR.0000048890.59383.8D.
6. Robert Havasy, Tim Hale. 2013. The Method Behind the Magic: How Trackers Work. Accessed August 28, 2017. http://www.wellocracy.com/2013/10/method-behind-magic-trackers-work.
7. Vulcan, Nicole. 2017. Heart Rate When Climbing a Hill. Accessed August 17, 2017. http://livehealthy.chron.com/heart-rate-climbing-hill-6980.html.
8. William J. Banz, Margaret A. Maher, Warren G. Thompson, David R. Bassett, Wayne Moore, Muhammad Ashraf, Daniel J. Keefer, Michael B. Zemel. 2003. "Effects of resistance versus aerobic training on coronary artery disease risk factors." Experimental Biology and Medicine 228 (4): 434-40. doi:10.1177/153537020322800414.
9. NHLBI. 2016. Who Is at Risk for Coronary Heart Disease? June 22. Accessed August 16, 2017. https://www.nhlbi.nih.gov/health/health-topics/topics/cad/atrisk.
10. Geoffroy Berthelot, Stéphane Len, Philippe Hellard, Muriel Tafflet, Marion Guillaume, Jean-Claude Vollmer, Bruno Gager, Laurent Quinquis, Andy Marc, Jean-François Toussaint. 2012. "Exponential growth combined with exponential decline explains lifetime performance evolution in individual and human species." AGE 34 (4): 1001-1009. doi:10.1007/s11357-011-9274-9.
11. Hill, Austin Bradford. 1965. "The Environment and Disease: Association or Causation?" Proceedings of the Royal Society of Medicine 58 (5): 295-300.
12. Rumsey, Deborah J. 2011. Statistics For Dummies. 2nd ed. For Dummies.

BIOGRAPHY

Andrew Wang, 17, UK

Andrew Wang is in year 13, studying Maths, Further Maths, Physics and French at The Manchester Grammar School. Andrew enjoys eating poached eggs and catching epic sunrises on morning runs, and is currently training for the upcoming Manchester Half Marathon.





Testing Theories of the Origin of Language on Indonesian

Daniel Friedrich (18) tests historical theories of the origin of human language on the Indonesian language.

Abstract

Many models of how the sound of words is affected by their meaning have emerged in the history of linguistics. The first theories searched for a connection between meaning and simple sound resemblance or our emotional reactions. Since these theories were not founded on much empirical data, they were a sitting duck for criticism from the analytical linguist Ferdinand de Saussure at the dawn of the 20th century, and for many years any kind of connection between sounds and their meaning was dismissed as arbitrary. In recent years, this view has been found to be too narrow, as it does not consider more complex sound patterns. This work deals with the history of thinking about the origin of language and its implications. First, I will discuss the most famous hypotheses of the origin of language that have floated to the surface throughout the history of the debate, beginning with early theories such as the ding-dong, pooh-pooh and bow-wow hypotheses. I will then highlight their shortcomings, which were pointed out by Saussure's structuralism and Chomsky's theory of universal grammar. Each of these eras of linguistics will be put in the context of the hypothesis of sound symbolism, according to which there is a natural connection between the meaning of a word and its physical form. Finally, I will use data drawn mainly from internet surveys to test the potential role of symbolism in Indonesian words, and to ask whether symbolism could operate via sound similarity, via an association in neural signals, or via neuro-motor imitation of the word's concept. In doing this, I will try to find out what role the cultural and language background of the respondents plays and whether the responses could be affected by an actual symbolic relationship.

Historical Theories on the Origin of Language

The oldest serious debate about the origin of language can be found in Plato's dialogue Cratylus, a conversation between Socrates, Hermogenes and Cratylus. In it, the case for conventionalism is put forward: a word expresses whatever it does purely by convention. Just as with the names of people or places, Hermogenes claims, all words arise by associating concepts with new, arbitrary sounds.[1] On the other hand, according to Cratylus, there are patterns in words that reveal their deeper meanings. Cratylus says that there are names that fit their meaning well and names that fit it worse, and therefore there must be names that are perfectly adapted to what they express; these are supposed to be the original words, the Platonic pure forms that were given to people through the gods.[2] This dogma of the transcendental origin of language persisted among European thinkers for the next 2,300 years. Substantial progress came only in the 19th century, when people started to recognize the diversity of the world's languages. Among philologists, several theories on the origin of language began to spread, named by Max Müller after their prototypical word. According to the bow-wow hypothesis, human communication began with mimicking the sounds of nature.


This simple howling and whistling, the hypothesis says, developed into settled forms (interjections), which subsequently gave rise to the complex language system we know today. The pooh-pooh hypothesis stated that at the roots of human words are natural emotional reactions such as laughter, cries of pain or sighs.[3] Max Müller himself initially defended the ding-dong hypothesis, according to which the meaning of a word is expressed in its overall harmony. This is illustrated by Figure 1, where we can see that in words like buzz, sound or jingle we use more voiced sounds than in words like snap, screech and thwack.[4] At the beginning of the 20th century, those theories were swept away by Ferdinand de Saussure's theory of structuralism. Saussure noticed that spoken language is only one part of the complex system of associations we call language. When we put together a message, all we do is combine words, symbols or gestures that carry a certain association in a given culture. But to pass on information effectively, we have to follow very strict rules, which Saussure described with complex equations. For symbolism, this means that words in a language are defined negatively: their meaning is based on what they are not, and words are only used if they have a unique meaning in the language.[5] According to Saussure, it is not possible to find any key connection between words and their meanings, because they are arbitrary; onomatopoeia and interjections are just exceptions, as they describe exclusively sounds.[6]



Figure 1 (Left): According to the ding-dong hypothesis, the meaning of a word is coded in its overall harmony. The image shows the contrast between words describing sounds using voiced consonants and those using mostly voiceless consonants.

Figure 3 (Below): Horses make very different sounds in English, Japanese and Punjabi, but in all cases we can see a quick change in the place of articulation of the consonants used, copying the actual sound; this indicates that sound symbolism depends on the specific sound system.
Figure 2 (Left): The wug test. When we ask children to create the plural form of a previously unknown word, we can observe that children actually understand the grammar rules of their language.

Figure 4 (Above, Middle): The kiki-bouba effect. When we ask respondents from any culture to assign each shape one of two names, they tend to describe the sharp shape (left) as "kiki" and the round shape (right) as "bouba" rather than the other way round.

In the 1960s, Noam Chomsky developed the theory of Universal Grammar. On the basis of simple experiments with children, similar to that shown in Figure 2, Chomsky noticed that Saussure's complex equations could be largely simplified. Chomsky proposed that there exists a simple, inherent neural system that allows us to comprehend the basic concepts of language. Thanks to this system, children quickly start to recognize the important information in sound: Chinese children learn to focus on the melody, while English children learn to focus on individual phonemes. In practice this means that a 2.5-year-old child is already "tuned" to the sound space of its language.[6] Sound symbolism should therefore be based not on specific phonemes but rather on their contrasts.[7] As illustrated in Figure 3, there are very different names for the sound of a horse walking in Japanese, English and Punjabi; however, in all cases we hear the sound symbolism in the quickly changing place of articulation. Contemporary theories of sound symbolism are therefore more concerned with the principles by which we perceive words than with the emergence of specific phonemes.[8] The most famous evidence that some patterns exist is the kiki-bouba effect.

If we ask respondents which of the shapes depicted in Figure 4 should be called kiki and which bouba, 95% of them, regardless of culture, respond that the sharp shape should be called kiki.[9] As we will see, this so-called shape symbolism emerges from the contrast of two opposites in many languages. Today we know that human languages have been shaped according to which word forms are remembered best by children, which best fit our neural structures, and which are most likely to pass on information correctly.[8]

Studying the Origin of Language on Indonesian

The point of my study was to test the basic theories of the origin of human language. Indonesian was chosen as a model language because, as an old lingua franca of Austronesia, it may resemble the Austronesian proto-language. Furthermore, since it is an analytic language, many of the original word forms are preserved. In principle, the experiment was very simple: 16 basic Indonesian words were chosen and 243 Czech respondents were asked to guess their meaning. If the respondents were significantly more successful than if they had guessed randomly, it would indicate that there are certain patterns in our perception of the sounds of words that are similar for both languages.



Three main aspects of sound symbolism were tested: the ability of the respondents to guess the meanings of the words in general (the existence of sound symbolism), the ability to connect the right emotions with the words (the pooh-pooh theory), and the ability to distinguish a word from others with a similar meaning, as would follow from structural linguistics. For testing the emotional connection, a set of 10 emotions based on Ekman's 7 basic emotions was chosen, and it was observed whether the emotions claimed by the Czech speakers matched those assigned by Indonesian speakers (87 respondents). The respondents' ability to follow Indonesian grammar rules, and to assign the meanings of the words to both abstract and concrete images (which had been matched to the chosen words by native Indonesian speakers beforehand), was also tested.
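How "significantly more successful than random" can be judged is illustrated by the short Python sketch below. The counts are invented purely for illustration (they are not the study's data): with 4 answer options, chance performance is 25%, and a binomial test asks how likely an observed success rate would be under pure guessing.

from scipy import stats

# Invented example: suppose 95 of 243 respondents (about 39%, i.e. roughly 14
# percentage points above the 25% chance level) picked the correct translation.
correct, n, chance = 95, 243, 0.25

# One-sided binomial test: probability of seeing at least this many correct
# answers if every respondent were guessing at random.
p_value = stats.binom.sf(correct - 1, n, chance)
print(f"P(at least {correct}/{n} correct by chance) = {p_value:.2e}")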

Results

When testing the ability to connect images with the words (Figure 5), the intention was to avoid errors in the translation of the Indonesian concepts. The reason for this was that, for example, the image of a waving fabric would be closer to the Indonesian word for "smooth" (halus) than the English equivalent. However, the data made it clear that this hypothesis does not hold, not because of a lack of sound symbolism, but because the respondents were more successful at assigning meaning to the more concrete images. Nevertheless, the respondents were on average 6 percentage points above randomness, which still indicates a certain role for sound symbolism (Figure 6).

Sound symbolism played a much bigger role when the respondents guessed the meanings from a set of 4 English equivalents: overall, the respondents were 14 percentage points more successful than if their guesses had been random (Figure 7). The data also support the hypothesis that a language's sound system is crucial for pattern recognition, because there were distinct trends in the wrongly assigned words as well, depending on the language of the respondents. In sum, the Czech-speaking respondents were better than the English-speaking respondents by 10 percentage points; it can be presumed that this is because the Czech sound system is closer to the Indonesian one than the English sound system is.

Data concerning the emotional link between the words and their meanings show no significant correlation, even for the emotionally strongest words. Although the overall chance of correct determination was positive (+1%), the translation data suggest that even the correctly determined words do not carry an emotional connotation. For example, the word kasar (rough) was rightly judged not to be connected with joy (-11%) and to be connected with anger (+4%). However, this apparent pattern is most likely there because the respondents "heard" the connotation of roughness itself: the meaning of kasar was the third best guessed, while if we take into account all the emotions the Czech respondents assigned to kasar, the correlation is in fact negative. The most interesting results, however, came from testing structuralism. The chance of guessing the meaning of a word from a set of words with similar meanings was 6% above randomness, which might seem to show that it is easier to guess the meaning from a random set of words; but the difference is not large enough to conclude this, because the choice of the "similar" words had to be subjective, and the respondents who were choosing the equivalent from a set of others among the 16 words could have used the other words to gain more information. The data even showed the kiki-bouba effect in the contrast of the words kasar and halus (rough and smooth) (Figure 8). The correct side of the rough-stiff-soft-smooth spectrum was assigned correctly by 79% of the respondents, which at least confirms the effect of the surrounding words on shape symbolism. What is more, English respondents who were allowed to change their decisions were 10% better at assigning the meaning, which also supports the kiki-bouba effect.

Conclusions and Discussion

Judging from the data acquired, it can be inferred that:

• There is a significant above-random chance (+14%) of a person who does not know a language guessing the meaning of words in that language (sound symbolism).
• The hypothesis of a direct emotional effect does not apply (the pooh-pooh theory).
• Sound symbolism is distinct when distinguishing two opposites, and most distinct when we are allowed to see both of them at the same time.

With regard to the first point, when we compare the results of this study to other studies on sound symbolism, we find that they do not significantly differ from previous findings.[9][10] Indonesian, as an analytic and very open language, has also previously been shown to be a relatively rich source of sound symbolism.[11] The second point is based on the statistical discrepancy between the idea of an emotional connection and the actual data from this study; not many studies have addressed this topic, and the data from this study do not show any patterns to discuss. The third point is a widely known fact mentioned in almost all studies of the kiki-bouba effect.

Figure 5: Design of the experiment.

Figure 6: The ability to assign the correct sounds to images (percentage of correct guesses above the expected percentage if sounds were chosen at random).

Figure 7: The ability to assign the correct translation from a choice of 4 other words from the set (percentage of correct guesses above the expected percentage if sounds were chosen at random).

Figure 8: The ability to pick the correct translation for 'kasar' (rough) and 'halus' (smooth) (percentage of correct guesses above the expected percentage if sounds were chosen at random).

References

1. Plato. Kratylos. Accessed July 27, 2017: https://www.ulozto.cz/xLJzLqNf/platon-kratylos-pdf
2. Anderson, Earl. A Grammar of Iconism. Fairleigh Dickinson University Press, 1998.
3. Müller, Max. Lectures on The Science of Language. Oxford, 1861.
4. Abelin, Åsa. Studies in Sound Symbolism. Oxford, 1999. Accessed March 3, 2017: http://www.ling.gu.se/~abelin/ny%20inlaga.pdf
5. Fry, Paul. Introduction to Theory of Literature (ENGL 300). Yale University, 2009. Accessed March 17, 2017: http://oyc.yale.edu/english/engl300/lecture-8
6. Saussure, Ferdinand. Course in General Linguistics. Chicago: Open Court Publishing Company, 1986.
7. Chomsky, Noam. Language and Mind. Cambridge University Press, 2006. Accessed March 15, 2017: http://www.ugr.es/~fmanjon/Language%20and%20Mind.pdf
8. Fitch, Tecumseh. The Evolution of Language. Cambridge University Press, 2010. Accessed March 15, 2017: http://www.genlingnw.ru/board/attachments/fitch_evolution.pdf
9. Ahlner, Felix. Cross-modal iconicity: A cognitive semiotic approach to sound symbolism. Sign Systems Studies, 2010.
10. Körtvélyessy, Lívia. A Cross-Linguistic Research into Phonetic Iconicity. Lexis [Online], 6 | 2011, online since 27 March 2011. Accessed 7 September 2017: http://lexis.revues.org/409
11. Yoshida, H. (2012). A Cross-Linguistic Study of Sound-Symbolism in Children's Verb Learning. Journal of Cognition and Development: Official Journal of the Cognitive Development Society, 13(2), 232-265. http://doi.org/10.1080/15248372.2011.573515

Image Sources

Figure 4: Wikipedia, User:Bendž

BIOGRAPHY

Daniel Friedrich, 18, Czech Republic

Daniel Amos Friedrich is a high-school student from Prague, Czechia. This article is a summary of his project for a national science contest, in which he placed 11th in the category "theory of culture". Last year he published comics about the principles of evolution at his school, and he has written several articles for OSEL.cz. He is interested in cognitive science and evolution.




The Imminence of the Nosocomial Pathogen Acinetobacter Baumannii

Devansh Kurup (17), Taha Hassan (17) and Owais Fazal (18) explain the background of Acinetobacter baumannii and its upcoming therapies.

Abstract

Though not discussed heavily in major media outlets outside of medical journals, the nosocomial (hospital-acquired) bacterium Acinetobacter baumannii (AB) has been gaining resistance to commonly prescribed antimicrobials. Known as multidrug-resistant (MDR) bacteria, such pathogens are the source of a multitude of infections in already-ill hospital patients, as antibiotics are used to no avail. This resistance can be briefly explained by the bacterium's use of several transmembrane proteins to expel antibiotic threats, while rapid genetic mutation occurs at the same time. While not an immediate threat to the global health community, Acinetobacter baumannii and other MDR pathogens necessitate new and effective solutions that are able to overcome future multidrug resistance. Upcoming antibiotic therapies are still being extensively researched, but they are reaching several degrees of treatment success, along with new techniques including monotherapies and synergistic drug use. It is imperative that further research into effective and, most importantly, long-term treatments for MDR pathogens be conducted in the near future to avoid a global health crisis that could have lasting consequences for the healthcare industry and human health.

A Historical Perspective

Acinetobacter baumannii is an aquatic, aerobic, pleomorphic, Gram-negative bacillus that prefers to colonize aquatic environments; it can often be found in intravenous or irrigating liquids, and it can establish infections in the sputum, the respiratory and urinary tracts, or in wounds.[1] Surprisingly, such infections are most common in intensive care units as opposed to medical or surgical wards.[2]


The history of this species stems from the discovery of the genus Acinetobacter in 1911, when the Dutch microbiologist Beijerinck isolated an organism called Micrococcus calcoaceticus from a calcium-acetate-containing sample of soil; the scientists Brisou and Prévot later separated the non-motile microorganisms from the motile ones to establish the genus Acinetobacter.[3] The species A. baumannii was established later, in 1986, as it was the most common strain in clinical samples.[2]


In April 2013, in a remote part of western China, 36 workers suffered ammonia poisoning during an ammonia leak incident, with five of them developing pulmonary infection and chemical pneumonitis; these five were isolated, and 2 of the 5 patients later died.[4] The research article written by Junyan Qu and colleagues explained how the samples from these five patients revealed an outbreak of AB, possibly due to trauma, hyperimmunity, or co-infection with other bacteria; this was the first outbreak of this particular species of the genus Acinetobacter, but it was fortunately contained by the hospital staff.[4] Another alarming concern arose after the Institute for Hygiene and Infectious Diseases of Animals in Giessen, Germany, received 137 veterinary samples from universities pertaining to 137 hospitalized animals; the discovery of AB in these samples revealed that this bacterium can be a nosocomial pathogen not only in humans but also in animals, raising concerns about cross-infection between animals and humans.[5] These recent incidents, in addition to many unmentioned ones, have raised concerns about Acinetobacter baumannii and its capacity to affect patients in medical settings. Beyond its threat in hospitals and ICUs, AB is an invasive and rapidly adapting pathogen that can evade many antibiotics, using mechanisms (several of which are discussed later in this article) to grow and spread.

A scanning electron micrograph of a highly magnified cluster of Gram-negative, non-motile A. baumannii bacteria.

A Clinical Perspective

Classification and Background

Acinetobacter baumannii is an aerobic, non-fermenting, gram-negative coccobacillus, emerging as a prominent nosocomial pathogen with an inherent resilience to commonly prescribed antimicrobials and often acting as an opportunistic pathogen.[6] In simpler terms, AB causes hospital-acquired infections by thriving in patients with weakened immune systems, who are readily found in surgical rooms and ICUs, entering through secretions, surgical incisions, or wounds. Acinetobacter can inhabit part of the bacterial flora of the skin, particularly in moist regions such as the axillae, groin, and toe webs, and up to 43% of healthy adults can have colonization of skin and mucous membranes.[7] In fact, colonization is higher among hospital personnel and patients due to skin ruptures and wounds, as these are part of the nature of healthcare.

Moreover, AB typically colonizes aquatic environments, leading it to deal serious damage to high-fluid organs in the respiratory and urinary tracts, and it is often isolated from respiratory support equipment.[8] With regard to these characteristics of A. baumannii, "gram-negative coccobacillus" refers to the cellular composition of the bacterium; specifically, this bacterium contains a thin peptidoglycan cell wall sandwiched between two bacterial membranes. A coccobacillus is intermediate between a coccus (spherical) and a bacillus (rod-shaped). A. baumannii distinguishes itself as aerobic and non-fermenting, meaning that it survives by utilizing oxygen as the main means of producing energy (in the form of ATP) through aerobic respiration; it is not, however, able to survive in an anaerobic environment via fermentation. This species falls under the class Gammaproteobacteria and is prevalent in soil and water as a mineralizer (which explains its preference for high-fluid organs). In addition, the genus currently comprises 34 species, of which A. baumannii holds the greatest clinical significance.[9] Infections caused by AB include bloodstream infections, urinary tract infections, meningitis, ventilator-associated pneumonias, and wound infections, many of which can be directly or indirectly linked to the colonization of AB in intravenous solutions, ventricular drainage tubes, central venous catheters, and several other factors that work in tandem with the bacterium's aquatic preferences.[6][8] In addition, the nature of Acinetobacter baumannii makes it one of the six most important multidrug-resistant microorganisms in hospitals worldwide, as the majority of its infections are caused by two main population clones with worldwide distribution.[10] These characteristics demonstrate how prevalent MDR bacteria can be and how paramount discovering an effective and unique solution should be.

Epidemiology

For several decades, antimicrobials have been used to treat a wide variety of bacterial infections. However, such drugs have been overused, causing a rise in the phenomenon known as multidrug resistance. Acinetobacter baumannii is famously known as one of the leading nosocomial pathogens that exhibit such resistance to antimicrobials. On a general scale, several studies have demonstrated that the crude mortality rates in patients with AB varied between 30 and 76%, and the factors that contributed to an undesirable prognosis include immunosuppression, severity of underlying illness, inappropriate antimicrobial therapy, septicemia, prior antibiotic exposure, and, most notably of all, drug resistance.[6] Efforts to relieve these mortality rates have led to intense research into the nosocomial epidemiology of A. baumannii. In a 1998 study conducted by Dr. Daniel Villers, designed to assess the epidemiological risk of AB, the researchers concluded that such nosocomial infection was a complex coexistence of endemic and epidemic infection, where the endemic infections were favored by the selection pressure of intravenous fluoroquinolones (or any antimicrobial drug for that matter) and the epidemic was primarily caused by the constant use of a single operating room in the tested hospital.[11]



Inspecting this issue through a global lens, Italy is one of the European countries with an increasing spread of antimicrobial-resistant microorganisms, often resistant to multiple drugs, and with high antibiotic consumption in the hospital setting. Several factors, such as antimicrobial consumption, colonization by resistant bacteria, resistance mechanisms that vary by species, and infection control strategies, including screening policies, may play a role in the prevalence of antimicrobial-resistant pathogens in the hospital setting.[12] In addition to Southern Italy, a study conducted in North China over a 65-month period measured the molecular epidemiology of a specific gene in A. baumannii and concluded that, because of the multidrug resistance it conferred, the gene rendered the bacteria highly infectious in the hospital setting.[13] Furthermore, one feature of AB is its ability to cause outbreaks, as seen in several cases. One such case occurred in France from July 2003 to May 2004, in which 290 cases of infection and/or colonization occurred in 55 health care facilities located in 55 different departments.[14] From a group of 217 patients for whom clinical data were available, 73 (33%) suffered from infection and 144 (67%) were colonized. Among the infected patients, there were 19 (26%) deaths directly attributed to AB infection.[14] Similarly, in a study conducted from April to November of 2014, a certain burns unit saw 77 patients acquiring MDR-AB at a rate of 30% after 28 days, and the median hospital stay was increased by 12 days due to the infection.[15] In all of the MDR-AB isolates from the patients, a specific gene (blaOXA-23) was recognized as contributing to the bacteria's environmental resistance. Moreover, in a 1999 study of a group of 192 healthy hospital volunteers, 40% of the volunteers carried a strain of Acinetobacter on at least one body site.[16] In the majority of these studies, antibiotic consumption as prescribed by physicians, while in good faith and intention, was excessive enough to encourage rapid colonization by multidrug-resistant Acinetobacter baumannii. In addition, the infectious nature of AB has been noted partially because of its habitat in soil and water, which in turn introduces the bacteria to the local food supply, as food is known to be a source of gram-negative rods such as Escherichia coli. According to a study conducted by John Berlau and colleagues on the frequency and distribution of the Acinetobacter genospecies, 17% of vegetables sustained Acinetobacter in small numbers, and A. baumannii was one of the species more frequently found.[14] The A. baumannii–A. calcoaceticus complex (the latter being another opportunistic nosocomial pathogen of the genus Acinetobacter) accounted for 56% of all strains isolated from fruits and vegetables and was found in apple, melon, beans, cabbage, cauliflower, and several other commonly consumed kinds of produce.[14] According to Berlau et al., hospital food could be a potential incubator for A. baumannii. The more alarming issue with this result stems from research done on digestive-tract colonization in hospitalized patients, with colonization rates reaching as high as 41% in ICUs.[14] In order to relieve the patient of any further debilitating infection, commonly prescribed antibiotics, such as carbapenems, are utilized to no effect. The resistance rates of AB to last-resort antimicrobials such as carbapenems and colistin are on the rise, and healthcare facilities act as a reservoir for resistant AB.[9] These trends should inform not only current treatment options for serious infections, but also approaches to infection control.


Infection Control

The antimicrobial resistance of A. baumannii has been well researched and documented, leaving hospital physicians and healthcare facilities to implement desperate attempts at infection control, including the prescription of largely ineffective antimicrobial drugs such as fluoroquinolones and carbapenems. To combat the spread of multidrug-resistant pathogens, the World Health Organization (WHO) implemented the multimodal hand hygiene improvement strategy in 2010. This included a set of management goals (a healthcare facility policy review, a dedicated budget for hand-hygiene agents, and a survey of the staff's tolerability of alcohol-based handrub), training and education (regular training for all workers, research and collaboration, antibiotic stewardship, and e-learning), evaluation and feedback, and several systemic changes to encourage and/or enforce proper hand hygiene procedures.[17] Studies have concluded that this strategy can be effective in reducing general nosocomial infections if adopted by all staff members. The importance of infection control is demonstrated by the severity of A. baumannii in critically ill patients: patients infected with MDR strains of non-Acinetobacter organisms suffer organ dysfunction and longer ICU stays less frequently, and face decreased mortality rates, compared to patients infected with a strain of Acinetobacter (when severity of illness is controlled for).[18] Another method of infection control is constant surveillance, as seen in a 2013 study conducted at a Seoul national university hospital, where carbapenem-resistant Acinetobacter baumannii (CRAB) was subject to active surveillance culture in an ICU. When active surveillance culture of CRAB and contact precautions for patients with positive results were applied in a medical ICU, the rate of new CRAB bacteremia was lowered and the time between ICU admission and new CRAB bacteremia was lengthened.[10] This study also utilized effective infection control policies and practices. Another form of infection control can be observed through the local food supply. Widely distributed in soil and water, A. baumannii grows at various temperatures and pH levels and uses a vast variety of substrates for its growth. In nature, Acinetobacter is most commonly found in soil and water, but it has also been isolated from animals and humans. Methods to control such sources of infection, such as sterilizing or intensively cleaning produce (for hospital food), or an acute awareness of the environments AB favors, can greatly reduce the number of cases of infection. However, as this is famously an issue in hospitals, the burden of prevention ultimately falls on the healthcare facility itself.

Multidrug Resistance

General Mechanisms

The rapid development of multidrug resistance amongst strains of Acinetobacter baumannii can be attributed to the inherent physiological mechanisms present in this species of the genus Acinetobacter. Aside from mutations and intrinsic genetic resistance to a variety of antibiotics, AB possesses multiple efflux systems that are characteristic of many rapidly developing gram-negative bacteria.


In an article written by Bruno Périchon, a researcher from the Pasteur Institute, and his peers, it is thoroughly explained how efflux pump systems possess multiple transmembrane proteins that can not only expel imminent threats to the strain of bacteria but can also facilitate genetic mutations or new mechanisms to combat antibacterial invaders.[19] So far, their research has uncovered three of the five bacterial efflux pump families in this species: RND, MFS, and MATE. They cited how each family is organized around different components, such as the periplasmic adaptor protein, and the different drug resistances that it can develop. Aixin Yan and his fellow researchers at the University of Hong Kong illustrate the physiology and function of the RND pumps: the pumps generally possess three protomers, proteins that initiate chemical processes converting monomers into complex macromolecules through polymerization.[20] The first protomer displays a crystal structure and is widely known to exist in a tight conformation (T), as opposed to the second protomer, which exists in an open conformation (O) and functions to expel substrates from the bacteria. Yan then delves into the mechanisms that allow the RND pump to excrete imminent threats from an Acinetobacter baumannii isolate: the first protomer's helix, which is attached to its structure, inclines into the second protomer and blocks the exit of substrates from the substrate pocket. Yan also introduces a third protomer, which exists in a loose conformation (L) and possesses a second binding site designated for any additional substrates entering the RND pump. Yan's entry then concludes by expounding on how the three protomers function in a rotational formation: the substrates enter through the top funnel to reach the empty pocket, and the efflux pump is coupled with proton transport across the cell membrane to drive the process. Amongst the RND pumps, the first and most prominent one researched by Périchon and his team was the AdeABC system, which consists of the adeABC operon encoding the AdeA membrane fusion protein (MFP), the multidrug transporter AdeB, and the AdeC outer membrane factor.[1] The cohort compared natural isolates of AB to the mutated ones in the experiment and realized that the operon is not expressed in natural strains of AB; resistance is therefore thought to be derived from the overexpression of this pump. The operon's expression was revealed to be regulated by the AdeR-AdeS protein system, and the pump serves to extrude aminoglycosides, β-lactams, fluoroquinolones, tetracyclines, tigecycline, macrolides, chloramphenicol, and trimethoprim. In the team's experiment, the signal for the transcription of the operon was received by the AdeS protein; however, the role of AdeR and the binding site of AdeS have not been discovered thus far. The researchers deduced the protein system's relation to the adeABC operon from the observation that mutations in the protein system result in the expression of the AdeABC system. Périchon's team further postulated that a mutation in AdeS induces an inconsistency in the dephosphorylation of AdeR, resulting in an active system. The two other, lesser-known RND pumps still play an important role in equipping isolates of A. baumannii with multidrug resistance. Yan's experimentation shows that the AdeIJK system is correlated with resistance to β-lactams such as ticarcillin, cephalosporins, and aztreonam.

Another RND efflux pump system, AdeFGH, is encoded by the operon adeFGH and shows multidrug resistance when overexpressed; it confers resistance to fluoroquinolones, chloramphenicol, trimethoprim, and clindamycin, and decreased susceptibility to tetracyclines, tigecycline, and sulfamethoxazole, without affecting β-lactams and aminoglycosides.[20] The structures of non-RND pumps, like the MFS and SMR efflux pumps, were generalized by research completed by Périchon and his team covering the EmrD transport protein of E. coli. The crystal structures were shown to be composed of twelve protein helices, with four helices facing the exterior of the cell membrane and the remaining helices composing the interior cavity of the transport protein. The internal cavity held hydrophobic residues to allow hydrophobic molecules to pass through the cell membrane. The system followed an alternating access model in which antibacterial invaders can enter through either the periplasm or the cytoplasm. Acinetobacter baumannii holds three main non-RND pumps. CraA (for chloramphenicol resistance Acinetobacter) was found by Yan and his peers to be homologous to the MdfA efflux pump of Escherichia coli, and it only transports and confers resistance to the antibiotic chloramphenicol. Yan's team also uncovered AmvA, an MFS transport system that possesses a 14-helix transmembrane structure to expel dyes, disinfectants, and detergents; the only antibiotic found to be affected was erythromycin. The third main non-RND pump discovered by the University of Hong Kong's research team was AbeM, a MATE efflux pump that ejects aminoglycosides, fluoroquinolones, chloramphenicol, trimethoprim, ethidium bromide, and dyes. Other proteins distinct to gram-negative species of bacteria, or to A. baumannii itself, also confer multidrug resistance. In a comprehensive study conducted by Ian T. Paulsen and his colleagues, including researchers from the School of Biomedical Sciences at the University of Leeds, AceI proteins were shown to directly interact with chlorhexidine and regulate its expulsion through energy-dependent efflux mechanisms.[21] The study also exhibited how BTP domain proteins assisted in developing resistance to biocides and fluorescent dyes. Anna Sara Levin and her fellow researchers at the Department of Infectious Diseases of the University of São Paulo conducted experiments showing that, in conjunction with these proteins, A. baumannii strains that carry the blaOXA-51-like, blaOXA-23-like, blaOXA-143-like, or blaIMP genes display resistance to the antibiotics meropenem, rifampicin, and fosfomycin.[22] Additionally, experimentation monitored by Andrew Carter and his fellow scientists at the MRC Laboratory of Molecular Biology in Cambridge, UK, found that genes for aminoglycoside-modifying enzymes in class 1 integrons (gene cassettes that, although immobile by themselves, can be mobilized to other integrons) are rife in multidrug-resistant AB strains.[23] As for antibacterial aminoglycosides, Carter's experimentation found that rRNA methylation will prevent aminoglycosides from successfully binding to their targets.



Another common set of genes the scientists found comprises the tet(A) to tet(E) determinants, which can encode multidrug efflux pumps or code for ribosomal protection alongside tet(K), tet(O), and tet(M). When the cohort mutated the gyrA and parC genes, the results showed that quinolones were prevented from binding to their targets.

Specific Drugs

Despite the various general mechanisms already present amongst A. baumannii strains, there are still many antibacterial agents that can prove effective. However, many isolates of AB manage to carry out posttranslational modifications with their enzymes in order to counteract antibacterial invaders. The MRC Laboratory described polymyxins as positively charged antimicrobial peptides that target the anionic lipopolysaccharide molecules in the outer cell membranes of gram-negative bacteria, leading to disassociation between the outer and inner cell membranes as they begin to directly interact, eventually leading to cell death.[23] The Department of Microbiology at Monash University detailed how resistance to polymyxins is developed in strains of A. baumannii through modifications to lipid A, which diminish the charge-based interaction between polymyxins and lipopolysaccharide molecules.[23] Carter and his team noted that even though some strains can deploy resistance mechanisms to nullify this type of interaction in lipid A, an endotoxic component of the outer lipid membrane that combats antibacterial agents in many gram-negative bacteria, resistance to these types of antimicrobial peptides is extremely rare. Monash's microbiology department also detailed the interaction between A. baumannii isolates and colistin, which differs from polymyxin B by a mere amino acid. This peptide exhibited a mechanism similar to that of other polymyxins, by which it binds to and then permeabilizes the outer membrane, disassociating the outer cell membrane and leading to the inhibition of cell membrane function. This leaves the gram-negative bacterium vulnerable, as an essential part of the survival of many gram-negative bacteria is forming a highly selective barrier against potential antibiotics. In response to colistin, AB isolates have developed various ways to evade the effects of this peptide. Experimentation by Crystal L. Jones and her colleagues at the Walter Reed Army Institute of Research produced results showing that the addition of phosphoethanolamine and galactosamine through posttranslational modification by the bacteria's enzymes provides a mechanism that helps this portion of the cell membrane combat colistin and other drugs.[24] Jones's team confirmed that the mechanism for the insertion of phosphoethanolamine and galactosamine was indeed posttranslational modification and not a genetic mutation, as there was no evidence of nucleotide polymorphisms (a kind of genetic sequence variation). Another modification was connected to the synthesis of lipid A. The gene lpxA is hypothesized by Carter and his research team to code for the acetylglucosamine acyltransferase which initiates the first step in the synthesis of lipid A; mutations in this gene can prevent the synthesis of lipid A, thus producing colistin resistance, as no charge-based interactions can occur with the positively charged peptide.


However, as discussed earlier, the lack of lipid A will reduce the fitness of colistin-resistant strains. Birgit Schellhorn and fellow scientists at the University of Basel found Acinetobacter baumannii to be sensitive to sodium tellurite, although the expression of the gene Tpm in many bacteria in response to sodium tellurite detoxifies the chemical through methylation, producing dimethyl telluride.[25] On the other hand, the antibiotic tigecycline was found by the team to easily evade the bacteria's defenses by inhibiting the 30S ribosomal subunit. This subunit is responsible for proofreading aminoacyl transfer RNAs and disposing of those that do not match the codon of the mRNA, and it translocates the tRNA with the associated mRNA by one codon, ensuring accuracy in the genetic code.[26] Therefore, the lack of a functional 30S subunit can lead to errors in the genetic message of AB strains, which may result in dysfunctional structures, leaving the isolates more susceptible to antimicrobial agents. Despite the evasiveness of tigecycline, strains of AB found ways to dispose of this antibacterial agent in Schellhorn's team's experiment, such as through the AdeABC system mentioned previously. After thorough experimentation, Schellhorn and her cohort concluded that there is another, alternative tigecycline resistance mechanism, as 60% of isolates with the adeR gene deleted still successfully combated the antibiotic. They attributed this to the deletion of nucleotide 311, which leads to a premature stop codon and renders this variant of the Trm protein ineffective while conferring overwhelming tigecycline resistance; strains with functional Trm showed susceptibility to tigecycline, as opposed to the mutant variants, which showed none.

The structures of meropenem (left) and colistin (right)

Novel Antibiotic Therapies

Several antibiotic therapies have been proven to be effective against multidrug-resistant A. baumannii, including but not limited to meropenem, colistin, polymyxin B, amikacin, rifampin, minocycline, and tigecycline.


Monotherapy as well as combination therapies have been utilized in previous years, with varying degrees of successful treatment. As of yet, no form of therapy has been established as the most potent method of combatting the hospital-acquired infection.[27][28] Despite this lack of consensus, one of the newer antibiotic therapies developed to treat Acinetobacter baumannii, as well as Pseudomonas aeruginosa, Escherichia coli, and Klebsiella pneumoniae, involves the use of tobramycin.[29] Tobramycin is an aminoglycoside that is often utilized for the treatment of aerobic gram-negative bacteria, and it has shown promising results in terms of empirical antibiotic treatment for AB. Furthermore, another promising aminoglycoside recently tested against AB is apramycin, a drug approved for veterinary use.[30] When tested against amikacin, gentamicin, and tobramycin, apramycin displayed the lowest MIC values. Since the MIC value is essentially the lowest concentration of drug that inhibits growth of a given pathogen, this indicates that apramycin is another antimicrobial agent that must be considered a viable option for treating infections caused by A. baumannii.[31] More recently, an article published in the International Journal of Antimicrobial Agents has claimed that the novel antibiotic eravacycline is the most potent tetracycline-class drug available to combat Acinetobacter baumannii. Researchers performed antimicrobial testing by broth microdilution of 286 non-duplicate, carbapenem-resistant AB isolates against eravacycline, amikacin, colistin, doxycycline, imipenem, levofloxacin, meropenem, minocycline, sulbactam, tigecycline, and tobramycin.[32] At the conclusion of the experiment, the researchers determined that eravacycline exhibited the greatest potency against AB, indicating that the antimicrobial may serve as another key addition to the limited collection of drugs available to treat the harmful pathogen. Although much of the research regarding potential treatment options for A. baumannii revolves around the use of a single antibiotic, the reality is that several options relying on synergistic relationships between various drugs and compounds exist as well. In a relatively recent article published by the American Society for Microbiology, researchers attempted to determine whether classical β-lactamase inhibitors (BLIs) could increase the efficacy of the peptide antibiotic colistin against AB.[33] By combining the BLI tazobactam with colistin in order to treat mice infected with A. baumannii pneumonia, researchers observed synergy that improved kill curves for 4 of the 5 strains of AB tested. Since BLIs have minimal antimicrobial activity on their own, their use in combination with peptide antibiotics such as colistin warrants further study. Other experiments that warrant further study include Carter's, in which the combination of fosfomycin and amikacin showed 85.7% synergism on AB isolates possessing the blaOXA-143 gene, with the bacterial cells exchanging lipids with the antimicrobials and experiencing subsequent membrane disturbance, osmotic instability, and eventual cell death.

In the same experiment, colistin sulfate or sodium colistin methanesulfonate administered orally and reaching the lungs through hematogenous transmission was observed to be extremely effective, but inefficient if administered directly through airway inoculation. Patients administering sodium colistin methanesulfonate through nebulization are advised to be extremely cautious with their treatment and must inhale CMS directly after the synergistic combination has been mixed in an aqueous solution; otherwise, the CMS will be converted to colistin by hydrolysis, leaving the patient's lungs exposed to bioactive, noxious colistin.[36] Yet the most surprising synergistic combination emerged with the conjugation of daptomycin and siderophores. Daptomycin, although not fully understood yet, has been found to land and bind on the bacterial membranes of gram-positive bacteria and disrupt their normal functions by depolarizing the ionic membrane, as well as to inhibit DNA replication and the production of RNA and proteins in the cells of AB.[35] In an experiment conducted by Patricia A. Miller and her research team at the University of Notre Dame, the conjugation of siderophores (which are iron-chelating compounds) with antibiotics much larger than themselves, such as daptomycin, allows the newly formed chemical compound to bypass the permeability problem it faces at the cell membrane and bind to the necessary targets, which are abundant on the cell membranes of gram-positive bacteria but not as common on the bacterial cell membranes of gram-negative bacteria.[35] Because the conjugation of the iron-chelating compound with a large antibiotic requires certain recognition sites and transport proteins for the newly formed chemical compound, mutant isolates will experience no effect from the conjugated compound; however, the survival rate of such mutants was significantly lower in Miller's experiment, meaning that these mutants possess largely reduced fitness anyway.[35] Although isolates of Acinetobacter baumannii have been shown to rapidly mutate and adjust to novel antibiotic therapies, the genetic or posttranslational modifications they make have come with a loss in fitness and virulence. The research team at the Walter Reed Army Institute of Research observed that colistin-resistant strains faced difficulty competing against non-resistant strains: at 48 hours, CR isolates were being outcompeted by their counterparts. However, the CR isolates then displayed recovery and adapted to their environment by increasing their fitness. The researchers hypothesized that this phenomenon was due to CR isolates lacking catalases, which are essential in protecting bacteria from unstable oxygen species by breaking down hydrogen peroxide in solution.[27] It can be deduced that the period between 0 and 48 hours could be used to exploit the gram-negative bacillus by covering the infected area with hydrogen peroxide and then utilizing an assortment of antibiotics and/or antimicrobials to resolve the infection.



A similar conclusion was reached in a study conducted by Eun-Jeong Yoon and her research team at the Pasteur Institute, where overproduction of the AdeABC efflux pump system resulted in increased resistance to several antibiotics; but, as observed before, the excess production of this efflux pump system most likely came at a biological cost to the isolates that possess it. Based on their results, the research team postulated that the relatively low growth rate amongst these isolates stemmed from excess consumption of energy due to overactivity of these pumps, or from possible excretion of molecules beneficial to the bacterial isolate.[34] It is therefore important to continue funding research into possible losses of fitness or weaknesses that can be exploited in these types of isolates. Ironically, enzymes produced by the bacteriophages that infect Acinetobacter baumannii can be used to fight infections caused by the gram-negative bacterium. Bacteriophage lysins, which are produced during the lytic cycle, cleave a multitude of bonds in peptidoglycan, which is found in the cell walls of bacteria.[36] Experimentation has been started and is continuously overseen by Mya Thandar and her research team to synthesize chemical compounds possessing molecules that allow lysins to bind to the anionic bacterial cell membrane. So far, the research team has experimented with conjugating the lysins with iron-chelating compounds so that the lysins can bypass the issue of cell membrane permeability. The cohort also genetically engineered a highly cationic C-terminus by supplementing the P307 amino acid sequence with eight additional amino acids (SQSRESQC), yielding P307SQ-8C; two additional modifications were made to change the amino acid sequence to P307SQ-8A. The experiment indicated that without the eight additional amino acids and the modifications, the antimicrobial peptides exhibited a reduced spectrum of activity. The hydrolyzing enzymes were active at a higher pH but not at lower salinity; the group explained this behavior by the bacterial membrane losing ions at higher pH, which then allowed the positively charged peptides to interact with the anionic membrane and gain entry into the bacterial cells. The highlight of the experiment was how the lysins killed bacteria in biofilms on the surface of catheters, which limits the spread of the opportunistic pathogen. Further experimentation was consistent with this, as the addition of the new amino acid sequence significantly decreased the bacterial burden in a murine skin infection.
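The combination results above are reported as percentage synergism; as general background (a standard in vitro convention, not taken from the studies cited here), synergy between two drugs is often quantified with a fractional inhibitory concentration (FIC) index derived from checkerboard MIC data, with an index of 0.5 or less usually read as synergy. A minimal Python sketch, with all MIC values hypothetical:

```python
def fic_index(mic_a_alone, mic_b_alone, mic_a_combined, mic_b_combined):
    """Fractional inhibitory concentration (FIC) index from checkerboard MIC data."""
    return mic_a_combined / mic_a_alone + mic_b_combined / mic_b_alone

# Hypothetical MIC values (mg/L) for a single A. baumannii isolate -- illustrative only.
fici = fic_index(mic_a_alone=16, mic_b_alone=8, mic_a_combined=2, mic_b_combined=1)
print(f"FIC index = {fici:.2f}")  # an index of 0.5 or below is conventionally read as synergy
```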

Conclusion

It is imperative that the global health community design several systems of effective treatment in order to keep pace with multidrug-resistant bacteria, including Acinetobacter baumannii. While its mechanisms of resistance form a seemingly complicated and rapidly diversifying force, the pathogenic species of Acinetobacter possess the ability to rapidly mutate genes and become further resistant to common antimicrobials, partially due to the antimicrobial-saturated environment (especially in healthcare facilities). Current antibiotic use generates a positive feedback loop, providing AB with new sources of resistance through such mechanisms, and this overuse of drugs must come to a close soon. New discoveries, studies, and field tests of upcoming therapeutic drugs, especially of their synergistic uses, should hold the greatest significance in combatting the global rise of multidrug-resistant nosocomial pathogens.

References

1. Cunha, Burke A. "Acinetobacter." Medscape. March 15, 2016. Accessed July 17, 2017. http://emedicine.medscape.com/article/236891-overview.



2. Cisneros, Jose M., and Jesus Rodríguez-Baño. "Nosocomial Bacteremia Due to Acinetobacter baumannii: Epidemiology, Clinical Features and Treatment." Wiley Online Library. November 13, 2002. Accessed June 29, 2017. http://onlinelibrary.wiley.com/doi/10.1046/j.1469-0691.2002.00487.x/full.
3. Peleg, Anton Y., Harald Seifert, and David L. Patterson. "Acinetobacter baumannii: Emergence of a Successful Pathogen." NCBI. July 21, 2008. Accessed June 30, 2017. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2493088/.
4. Qu, Junyan, Yu Du, Rujia Yu, and Xiaoju Lu. "The First Outbreak Caused by Acinetobacter baumannii ST208 and ST195 in China." Hindawi. 2016. Accessed July 5, 2017. https://www.hindawi.com/journals/bmri/2016/9254907/.
5. Zordan, Sabrina, Ellen Prenger-Berninghoff, Reinhard Weiss, Tanny Van Der Reijden, Peterhans Van Den Broek, George Balijer, and Lenie Djikshoorn. NCBI. September 2011, 17. Accessed July 3, 2017. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3322069/.
6. Ballouz, Tala, Jad Aridi, Claude Afif, Jihad Irani, Chantal Lakis, Rakan Nasreddine, and Eid Azar. "Risk Factors, Clinical Presentation, and Outcome of Acinetobacter baumannii Bacteremia." Frontiers in Cellular and Infection Microbiology 7 (2017).
7. Manchanda, Vikas, Sinha Sanchaita, and N. P. Singh. "Multidrug resistant Acinetobacter." Journal of Global Infectious Diseases 2, no. 3 (2010): 291.
8. Cunha, Burke A. "Acinetobacter." Drugs & Diseases. January 06, 2017. Accessed August 12, 2017. http://emedicine.medscape.com/article/236891-overview.
9. Kauffman, Carol. "Acinetobacter Species." MDedge. August 2017. Accessed August 17, 2017. http://www.mdedge.com/emed-journal/dsm/1018/infectious-diseases/acinetobacter-species#sub1_1.
10. Kim, G., B. Oh, J. S. Song, P. G. Choe, W. B. Park, H. B. Kim, N. J. Kim, E. C. Kim, and M. D. Oh. "O047: The effect of active surveillance culture of carbapenem resistant Acinetobacter baumannii on the occurrence of carbapenem resistant Acinetobacter baumannii bacteremia in a single intensive care unit." Antimicrobial Resistance and Infection Control 2, no. 1 (2013): O47.
11. Ninin, Emmanuelle, Franchise Nicolas, and Herve Richet. "Nosocomial Acinetobacter baumannii infections: microbiological and clinical epidemiology." Ann Intern Med 129 (1998): 182-189.
12. Agodi, Antonella, Martina Barchitta, Annalisa Quattrocchi, Andrea Maugeri, Eugenia Aldisio, Anna Elisa Marchese, Anna Rita Mattaliano, and Athanassios Tsakris. "Antibiotic trends of Klebsiella pneumoniae and Acinetobacter baumannii resistance indicators in an intensive care unit of Southern Italy, 2008–2013." Antimicrobial Resistance and Infection Control 4, no. 1 (2015): 43.
13. Ning, Nian-zhi, Xiong Liu, Chun-mei Bao, Su-ming Chen, En-bo Cui, Jie Huang, Fang-hong Chen, Tao Li, Fen Qu, and Hui Wang. "Molecular epidemiology of blaOXA-23-producing carbapenem-resistant Acinetobacter baumannii in a single institution over a 65-month period in north China." BMC Infectious Diseases 17, no. 1 (2017): 14.


14. Fournier, Pierre Edouard, Hervé Richet, and Robert A. Weinstein. "The epidemiology and control of Acinetobacter baumannii in health care facilities." Clinical Infectious Diseases 42, no. 5 (2006): 692-699.
15. Munier, Anne-Lise, Lucie Biard, Clotilde Rousseau, Matthieu Legrand, Matthieu Lafaurie, Alexandra Lomont, Jean-Luc Donay et al. "Incidence, risk factors and outcome of multi-drug resistant Acinetobacter baumannii acquisition during an outbreak in a burns unit." Journal of Hospital Infection (2017).
16. Berlau, J., H. Aucken, H. Malnick, and T. Pitt. "Distribution of Acinetobacter species on skin of healthy humans." European Journal of Clinical Microbiology & Infectious Diseases 18, no. 3 (1999): 179-183.
17. Mustikawati, B. Indah, N. Syitharini, S. Widyaningtyastuti, and L. Gunawan. "Implementation of WHO multimodal hand hygiene (HH) improvement strategy to reduce healthcare-associated infections (HAI) and VAP (ventilator-associated pneumonia) caused by multi-drug resistant Acinetobacter baumanii (MDRAB) at Siloam Hospitals Surabaya (SHBS), Indonesia." Antimicrobial Resistance and Infection Control 4, no. 1 (2015): O18.
18. Madaan, A., V. Singh, P. Shastri, and C. Sharma. "Comparison of multidrug-resistant Acinetobacter and non-Acinetobacter infections in terms of outcome in critically ill patients." Critical Care 18, no. 1 (2014): P351.
19. Coyne, Sébastien, Patrice Courvalin, and Bruno Périchon. "Efflux-Mediated Antibiotic Resistance in Acinetobacter." Antimicrobial Agents and Chemotherapy. December 20, 2010. Accessed July 20, 2017. http://aac.asm.org/content/55/3/947.full.
20. Sun, Jingling, Ziqing Deng, and Aixin Yan. "Bacterial Multidrug Efflux Pumps: Mechanisms, Physiology and Pharmacological Exploitations." Science Direct. September 2015. Accessed July 17, 2017. http://www.sciencedirect.com/science/article/pii/S0006291X14009711.
21. Hassan, Karl A., Qi Liu, Peter J.F. Henderson, and Ian T. Paulsen. "Baumannii AceI Transporter Represent a New Family of Bacterial Multidrug Efflux Systems." MBio. February 15, 2015. Accessed August 1, 2017. http://mbio.asm.org/content/6/1/e01982-14.full#ref-1.
22. Leite, Gleice Cristina, Maura S. Oliveira, Laura Viera Perdigão-Neto, Cristiana Kamia Dias Rocha, Thais Guimarães, Camila Rizek, Anna Sara Levin, and Silvia F. Costa. "Antimicrobial Combinations against Pan-Resistant Acinetobacter baumannii Isolates with Different Resistance Mechanisms." PLoS ONE. March 21, 2016. Accessed July 17, 2017. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0151270.
23. Moffatt, Jennifer H., Marina Harper, Paul Harrison, John D.F. Hale, Evgeny Vinogradov, Torsten Seemann, Rebekah Henry, Bethany Crane, Frank St. Michael, Andrew D. Cox, Ben Adler, Roger L. Nation, Jian Li, and John D. Boyce. "Colistin Resistance in Acinetobacter baumannii Is Mediated by Complete Loss of Lipopolysaccharide Production." Antimicrobial Agents and Chemotherapy. September 10, 2010. Accessed July 10, 2017. http://aac.asm.org/content/54/12/4971.full.

24. Jones, Crystal L., Shweta S. Singh, Yonas Alamneh, Leila G. Casella, Robert K. Ernst, Emil P. Lesho, Paige E. Waterman, and Daniel V. Zurawski. "In Vivo Fitness Adaptations of Colistin-Resistant Acinetobacter baumannii Isolates to Oxidative Stress."
25. Trebosc, Vincent, Sarah Gartenmann, Kevin Royett, Pablo Manfredi, Marcus Totzl, Birgit Schellhorn, Michael Pierren, Marcel Tigges, Sergio Lociuro, Peter C. Sennhen, Marc Gitzinger, Dirk Bumann, and Christian Kemmer. "A Novel Genome-Editing Platform for Drug-Resistant Acinetobacter baumannii Reveals an AdeR-Unrelated Tigecycline Resistance Mechanism." Antimicrobial Agents and Chemotherapy. September 26, 2016. Accessed July 15, 2017. http://aac.asm.org/content/60/12/7263.full.
26. Carter, Andrew P., William M. Clemmons, Ditlev E. Broderson, Robert J. Morgan-Warren, Brian T. Wimberly, and V. Ramakrishna. "Functional Insights from the Structure of the 30S Ribosomal Subunit and its Interactions with Antibiotics." Nature: International Weekly Journal of Science. August 10, 2000. Accessed July 19, 2017. http://www.nature.com/nature/journal/v407/n6802/abs/407340a0.html.
27. Cunha, Burke A. "Pharmacokinetic Considerations Regarding Tigecycline for Multidrug-resistant (MDR) Klebsiella pneumoniae or MDR Acinetobacter baumannii Urosepsis." Journal of Clinical Microbiology 47, no. 5 (2009): 1613-1613.
28. Garnacho-Montero, J., C. Ortiz-Leyba, F. J. Jimenez-Jimenez, A. E. Barrero-Almodovar, J. L. Garcia-Garmendia, M. Bernabeu-Wittell, S. L. Gallego-Lara, and J. Madrazo-Osuna. "Treatment of Multidrug-Resistant Acinetobacter baumannii Ventilator-associated Pneumonia (VAP) with Intravenous Colistin: A Comparison with Imipenem-susceptible VAP." Clinical Infectious Diseases 36, no. 9 (2003): 1111-1118.
29. Schafer, Andrew I., and Lee Goldman. Goldman-Cecil Medicine. Elsevier Health Sciences, 2016.
30. Kang, Anthony D., Kenneth P. Smith, George M. Eliopoulos, Anders H. Berg, Christopher McCoy, and James E. Kirby. "In vitro Apramycin Activity Against Multidrug-resistant Acinetobacter baumannii and Pseudomonas aeruginosa." Diagnostic Microbiology and Infectious Disease 88, no. 2 (2017): 188-191.
31. Matros, Linda, and Terri Wheeler. "Microbiology Guide to Interpreting MIC (Minimum Inhibitory Concentration)." The Vet. February 2001. Accessed August 11, 2017. http://www.the-vet.net/DVMWiz/Vetlibrary/Lab-%20Microbiology%20Guide%20to%20Interpreting%20MIC.htm.
32. Seifert, Harald, Danuta Stefanik, Joyce A. Sutcliffe, and Paul G. Higgins. "In-vitro Activity of the Novel Fluorocycline Eravacycline Against Carbapenem Non-susceptible Acinetobacter baumannii." International Journal of Antimicrobial Agents (2017).
33. Sakoulas, George, Warren Rose, Andrew Berti, Joshua Olson, Jason Munguia, Poochit Nonejuie, Eleanna Sakoulas, Michael J. Rybak, Joseph Pogliano, and Victor Nizet. "Classical β-lactamase Inhibitors Potentiate the Activity of Daptomycin Against Methicillin-resistant Staphylococcus aureus and Colistin against Acinetobacter baumannii." Antimicrobial Agents and Chemotherapy 61, no. 2 (2017): e01745-16.



34. Yoon, Eun-Jeong, Vivianne Balloy, Laurence Fiette, Michel Chignard, Patrice Courvalin, and Catherine Grillot-Courvalin. "Contribution of the Ade Resistance-Nodulation-Cell Division-Type Efflux Pumps to Fitness and Pathogenesis of Acinetobacter baumannii." American Society for Microbiology. May 31, 2016. Accessed August 14, 2017. http://mbio.asm.org/content/7/3/e00697-16.full.
35. Ghosh, Manuka, Patricia A. Miller, Ute Möllmann, William D. Claypool, Valerie A. Schroeder, William R. Wolter, Mark Suckow, Honglin Yu, Shuang Li, Weiqiang Huang, Jaroslav Zajicek, and Marvin J. Miller. "Targeted Antibiotic Delivery: Selective Siderophore Conjugation with Daptomycin Confers Potent Activity against Multidrug Resistant Acinetobacter baumannii Both in Vitro and in Vivo." Journal of Medicinal Chemistry. March 13, 2017. Accessed August 15, 2017. http://pubs.acs.org/doi/full/10.1021/acs.jmedchem.7b00102.
36. Thandar, Mya, Rolf Lood, Benjamin Y. Winer, Douglas R. Deutsch, Chad W. Euler, and Vincent A. Fischetti. "Novel Engineered Peptides of a Phage Lysin as Effective Antimicrobials Against Multidrug-Resistant Acinetobacter baumannii." Antimicrobial Agents and Chemotherapy. February 8, 2016. Accessed August 16, 2017. http://aac.asm.org/content/60/5/2671.full.

Image Sources


SEM imagery from the CDC Public Health Image Library (photo credit: Janice Carr). https://phil.cdc.gov/PHIL_Images/20041209/ c9cbf359322b40e08fab8a6129c1be16/6498_lores.jpg

Devansh Kurup, 17, USA

Devansh J. Kurup is a high school senior at Independence High School in Frisco, Texas. He is exploring his scientific curiosity by co-authoring this review article on the nosocomial pathogen Acinetobacter baumannii. He is currently training to become a certified Pharmaceutical Technician in the near future. Moreover, he has logged over 60 hours serving in clinical facilities.

Owais Fazal, 18, USA

Owais Fazal is a Certified Nursing Assistant from Houston, Texas. He currently attends Rice University and is majoring in Social Policy Analysis and Global Health Technologies. He has researched several aspects of the field of public health, including epidemiology, biostatistics, serology, clinical manifestations, and preventive measures for the spread of infectious diseases.

Taha Hassan, 17, USA

Taha Hassan is a senior at Independence High School in Frisco, Texas. He is making his debut in the world of medical journals by co-authoring this article on the nosocomial pathogen Acinetobacter baumannii. He is currently working at the IACC Psychiatric Clinic at his local mosque in Plano, Texas, where he is an active nonprofit worker. He looks forward to becoming a world-class neurosurgeon and helping to explore the brain.





The Effect on Mechanical Performance of 3D Printed Polyethylene Terephthalate Glycol Structures Through Differing Infill Pattern

Paul Karavaikin (17) demonstrates the effect that the infill pattern of a 3D printed PETG object has on its tensile performance.

Abstract

In this investigation, the tensile properties of Polyethylene Terephthalate Glycol (PETG) 3D-printed dumbbells with differing internal infill structure were assessed to classify the necessary geometry for the optimisation of the functionality of a part. The test pieces were subjected to increasing uniaxial loads under tension to generate a stress-strain curve, and thereby data for the associated properties, for each specific infill pattern. Hence, a comparison between the strengths of the various patterns is drawn to identify brief criteria for the selection of the optimal infill structure in specific applications. It was found that the zigzag infill pattern could take the greatest load without yielding under the application of a uniaxial tension.

Introduction

3D printing is a technology that is becoming increasingly used in industry to manufacture functional parts via additive layer-by-layer deposition or curing of material[1]. This can be used to produce complex geometries but is limited in speed and mechanical performance. A compromise between speed and tensile properties is to include an infill structure in the interior of the part with a specific geometrical arrangement to improve performance in the desired function. The forthcoming section identifies such properties, discusses the methods for assessing the performance of the part in testing, and explores previous work regarding infill patterns.

Material Characterisation

Stress and Strain Relationship

Uniaxial tension testing offers an effective method for the characterisation of the properties of a material when subjected to a uniform load[2]. This allows for the determination of mechanical attributes, including the yield strength, ultimate tensile strength, fracture strength, and Young's modulus (also referred to as the modulus of elasticity)[3]; these respectively define the load after which a material transitions from elastic to plastic deformation, the maximal stress achieved during the tension test, the point of material and structural failure[4], and the factor of proportionality during elastic deformation under Hooke's Law[5].
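In symbols, Hooke's Law in the elastic region is σ = Eε, where E is the Young's modulus, σ the stress, and ε the strain.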

All of these factors are important considerations in the design of parts in engineering and commercial products to ensure the required functionality of that object[6]. A standard geometry is typically applied to the tensile specimen for the purpose of experimentally determining the mechanical properties of a material – a "dog bone" dumbbell shape consisting of a single piece of that material with two shoulders and a thinner section linking them, as per Figure 1 and Figure 2. There are various standardised dumbbell dimensions for measuring tensile performance, with ASTM International offering D638 specifically for the analysis of polymers[7] – D638 Type V is the smallest and most suitable size for the proposed research. Fixing the shoulders of the test piece, the tensile properties can be measured by applying an increasingly large uniaxial tensile load parallel to the length of the part, as per Figure 3. There are two key quantities to determine: stress and strain – both values are normalised with respect to the original dimensions of the test piece. Engineering strain gives the deformation of the material and is defined as:
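ε = ∆L / L0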

Where ∆L is the measured displacement and L0 is the original length.

Figure 3: Force diagram for tensile test on dumbbell (CAPINC, 2014)[9]

Figure 1: Top profile of D638 Type V dumbbell (Croop, 2014)[8]
Figure 2: Side profile of D638 Type V dumbbell (Croop, 2014)[8]



Engineering stress is defined as:
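σ = F / A0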

Where F is the applied load and A0 is the cross-sectional area of the sample normal to the load in the gauge section.

3D Printing Infills

It has been shown experimentally that the ultimate strength of a 3D printed part on average increases with the infill percentage of said piece for a fixed pattern[11]. However, production time, part mass, and material costs increase proportionally with the infill percentage, as per Figure 5, rendering it necessary for a compromise to be used.

By recording the strain under increasingly large loads, it is possible to plot a stress-strain curve that gives further information about the part:

Figure 5: Graph of variables in 3D printing against infill percentage (3D Matter, 2015)[12]

It then becomes necessary to determine the internal arrangement of the structure for the part to optimise performance.

Figure 4: Typical stress-strain curve for a polymer (Leanissen, n.d.)[10]

The Young's modulus for elastic deformation can be determined by taking the gradient of the line between the origin and the yield point, the point after which the part does not return to its original dimensions and becomes permanently deformed. The ultimate strength depicted in Figure 4 is the point at which the sample part fractures[6].

Limited research has been carried out on acrylonitrile butadiene styrene (ABS) parts, suggesting that patterns with more of the structure positioned along the same axis as the force are able to undergo a higher stress before fracture[12].

Methods and Procedures

The following apparatus was required for the conduct of the experiment:

Figure 6: Assortment of common infill patterns at various material infill percentages (Hodgson, Ranellucci, & Moe, n.d.)[13]


• Fused Deposition Modelling (FDM) 3D printer with PETG filament, in order to manufacture the ASTM D638 Type V dumbbells with differing infill structure to be tested.

Figure 7: Bar chart for ultimate strength of ABS dumbbells with differing infill structures (3D Matter, 2015)[12]


• Universal Testing Machine (UTM) comprised of the following: a load cell for applying a uniaxial force on the specimen; a force transducer to measure the applied force; an extensometer to measure the extension of the dumbbell; and mechanical fixtures to hold the specimen in the machine.
• Computer to record and store the data outputted from the UTM, and to generate the various infill patterns to be tested.

The specific UTM used for this experiment was set up as shown in Figure 8.

The model was then imported into the slicer software Cura, which is used to generate the desired print settings, including the specific infill pattern from a variety of options, as well as other properties such as layer resolution and infill percentage. The only variable changed between the different specimen pieces was the infill pattern, with all other factors kept constant to ensure that infill structure was indeed the factor responsible for any variation in results. Layer resolution and outer wall thickness were minimised to give a larger internal volume for the different infills, and the percentage infill was nominally assigned a value of 40%, as per Figure 11. 5 specimens of each infill pattern (grid, lines, triangle, zig-zag) were printed simultaneously for the purpose of repeating trials and obtaining multiple datasets. All dumbbells were produced on the same Ultimaker 2 Extended+ with 2.85 mm diameter RS Pro PETG filament.

Data Collection

Figure 8 – Experimental setup for tensile testing

Experimental Procedure

Dumbbell Manufacture

With the apparatus positioned as shown in Figure 8, a printed dumbbell could be placed in the machine vice and fastened securely with bolts. Once in place, connecting the force and extension outputs to a computer with PASCO PASPORT digital adapters allows for the calibration of the variables, setting the initial values to 0. The data from the experiment was then recorded directly to PASCO Capstone as voltages as the machine handle was gently turned, thus applying a load to the dumbbell and stretching it. This continued until either the dumbbell broke or the limit of extension within the machine was reached, due to the relatively large size of the specimens. After the breaking of a test piece, it was removed and replaced, followed by recalibration of the machine. The data was then reviewed and analysed to calculate the tensile properties of each tested specimen, after which averages and comparisons could be made.

A scale model of the specimen dumbbell was produced in SolidWorks using the previously discussed ASTM D638 Type V as the basis. The height of the test piece was maximised to 4.40 mm to increase the cross-sectional area and reduce the effect of small manufacturing imperfections on tensile performance. Two holes, 6 mm in diameter, were added to allow the dumbbell to be fixed in the machine with bolts. Thus, the external dimensions of the test specimens were all as follows:

Figure 10: Dumbbells during printing process

Results

Figure 9 – Geometry of ASTM D638 dumbbell specimen

The following results section is structured such that the tested infill patterns are assessed individually before any comparisons are drawn between them. The measurements for force and extension were both recorded as voltages, which, in order to be converted into newtons and millimetres respectively, had to be multiplied by the scale factor listed on the machine.



Figure 11 (Left): Print settings for specimens

Figure 12: Dumbbell in UTM before test

To produce engineering stress values, the forces had to be divided by the initial cross-sectional area of the dumbbell orthogonal to the applied load:
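σ = F / A0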

To produce engineering strain values, the extensions had to be divided by the gauge length of the dumbbell:
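ε = ∆L / L0 (where L0 is here the gauge length of the dumbbell)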

Figure 13: Dumbbell in UTM after test

It was decided that the error analysis would be completed using the range of the data rather than percentage uncertainties in the instruments and manufacturing process, as the point of breakage along the dumbbell has a much greater effect on the results; this can be due to a combination of a differing amount of polymer at the cross-section due to the infill structure and potential manufacturing imperfections not accounted for in the theoretical axis-resolutions of the 3D printer.

The yield strength in the context of the experiment occurs at the maximal load/stress experienced by a test specimen, which provides a valuable reference point for comparison both between repeat dumbbells of the same infill structure and between those of differing ones:
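σ_yield = F_max / A0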

After calculating the yield point, the corresponding strain value can be located within the data, which then allows for an approximation of the material’s Young’s Modulus (E) to be calculated:
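E = σ_yield / (1000 × ε_yield), giving E in GPa when σ_yield is expressed in MPa (consistent with the values reported below).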

Figure 14: Tested dumbbells used to generate average data for specific infill structures.

This value is conventionally given in gigapascals[14], hence the inclusion of a scale factor in the equation for the conversion.

Grid Infills

The internal structure of dumbbells with the grid infill pattern can be seen in Figure 15. Sample data for one such specimen is provided with a stress-strain curve in Figure 16. The maximum force was identified for each specimen tested and, hence, Table 1 was generated.

With regards to error analysis and the identification of anomalous results, the mean yield strength and mean strain at yield of the 5 samples of a particular infill pattern were calculated. Outliers were considered to be any value more than 1.5 standard deviations from the calculated mean; if any such values were present, the means and standard deviations were recalculated with these erroneous results excluded. This allows for a more accurate comparison of the different infills being tested. It was decided that the error analysis would be completed using the range of data rather than percentage uncertainties in the instruments and manufacturing process, as the point of breakage along the dumbbell has a much greater effect on the results; this can be due to a combination of a different amount of polymer at the cross-section (owing to the infill structure) and potential manufacturing imperfections not accounted for in the theoretical axis resolutions of the 3D printer.
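As an illustration only, the outlier-exclusion step described above can be expressed in a few lines of Python. This is a minimal sketch, not the authors' analysis script, and the sample values below are hypothetical rather than the measured data:

from statistics import mean, pstdev

def filtered_mean(values, k=1.5):
    # Drop any value more than k standard deviations from the mean, then re-average.
    mu, sigma = mean(values), pstdev(values)
    kept = [v for v in values if abs(v - mu) <= k * sigma]
    return mean(kept), kept

# Hypothetical yield strengths (MPa) for five repeat trials of one infill pattern
strengths = [50.2, 51.4, 50.8, 58.0, 51.1]
avg, kept = filtered_mean(strengths)
print(f"Mean of retained trials: {avg:.1f} MPa ({len(kept)} of {len(strengths)} kept)")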

The results of Trial 4 can be identified as anomalous because the calculated yield strength was more than 1.5 standard deviations above the mean of the five trials. That result could thereby be removed from the calculation of the mean, giving a mean yield strength of approximately 50.9 MPa to 3 significant figures. The corresponding mean strain was calculated to be approximately 0.139 to 3 significant figures. These results give the Young's Modulus to be in the region of 0.366 GPa.
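Taking the quoted grid-infill values as a quick arithmetic check, using the modulus expression assumed above:

E \approx \frac{50.9\ \mathrm{MPa}}{0.139} \times 10^{-3} \approx 0.366\ \mathrm{GPa}

which matches the figure reported.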





(From Left to Right) Figure 15: Dumbbells sliced for printing with grid infill pattern; Figure 17: Lines infill pattern; Figure 19: Triangle infill pattern; Figure 21: Zig-zag infill pattern

Figure 16: Stress-strain curve for grid infill from processed data

Figure 18: Stress-strain curve for lines infill from processed data

Figure 20: Stress-strain curve for triangle infill from processed data

Figure 22: Stress-strain curve for zig-zag infill from processed data

Table 1: Tensile data for grid infill pattern

Table 2: Tensile data for lines infill pattern




Table 3: Tensile data for triangle infill pattern

Table 4: Tensile data for zig-zag infill pattern

Lines Infills

The internal structure of dumbbells with the lines infill pattern can be seen in Figure 17; this varies from the grid structure by having alternating unidirectional beams rather than a flat bidirectional criss-cross. Sample data for one such specimen is provided with a stress-strain curve in Figure 18. None of the trials lay outside the acceptable range as previously defined, although only 4 trials were used due to a 3D printing malfunction that failed to accurately produce the fifth dumbbell. The results gave a mean yield strength of approximately 53.6 MPa to 3 significant figures. The corresponding mean strain was calculated to be approximately 0.200 to 3 significant figures. These results give the Young's Modulus to be in the region of 0.267 GPa.

Triangle Infills

The internal structure of dumbbells with the triangle infill pattern can be seen in Figure 19. Sample data for one such specimen is provided with a stress-strain curve in Figure 20.

Zig-Zag Infills

The internal structure of dumbbells with the zig-zag infill pattern can be seen in Figure 21; this varies from the lines structure by having each layer be one continuous line, with each line joined onto the next at either end. Sample data for one such specimen is provided with a stress-strain curve in Figure 22. The results of Trial 2 can be identified as anomalous because the calculated yield strength was more than 1.5 standard deviations below the mean of the five trials. That result could thereby be removed from the calculation of the mean, giving a mean yield strength of approximately 59.5 MPa to 3 significant figures. The corresponding mean strain was calculated to be approximately 0.230 to 3 significant figures. These results give the Young's Modulus to be in the region of 0.259 GPa.



Comparison of Infill Structures

With data available for the mean yield strengths of the various infill structures, it became possible to make a quantitative comparison between them. A histogram showing both the yield strength and the modulus of elasticity for each infill pattern was created. The error bars for the moduli were determined using the following formula to find the absolute uncertainties.
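The formula itself is not reproduced in this extraction. A standard propagation of uncertainty for the quotient E = \sigma_y / \varepsilon_y, consistent with the description of absolute uncertainties taken from the range of the repeat data, would be:

\Delta E = E \left( \frac{\Delta \sigma_y}{\sigma_y} + \frac{\Delta \varepsilon_y}{\varepsilon_y} \right)

where \Delta \sigma_y and \Delta \varepsilon_y are the spreads of the repeat trials; this is offered only as a plausible reading of the missing equation, not the authors' exact formula.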

This yielded the graph in Figure 23. The graph suggests that, for structural integrity, the zig-zag infill pattern performs best under a uniaxial load, maximising the force required to plastically deform the part; the triangular infill pattern appears to be weakest in this regard. For the grid and lines infills, there is a large overlap of the error margins, making it difficult to determine which of the two is more suitable for this application. However, the overall ranking of the patterns can be explained with relative ease by considering the continuity of the structures in a single layer. Each zig-zag layer consists of a single pathway travelled by the printer head during manufacture, thus creating an even surface throughout the part; in terms of smoothness, the lines and grid infills follow, with both patterns stopping at either end of each internal diagonal line to reposition the head; the triangle infill is roughest due to the large number of small triangles to be made independently. A rough region can act in the same manner as a crack, enhancing the propagation of stresses through the part, thus decreasing the force required to induce plastic deformation. Theoretically, the modulus of elasticity should be near-constant across the infill patterns, as this property is intrinsic to the material itself. However, the graph shows large uncertainties in this value, especially for the grid infill pattern. It is possible to again attribute this variation to minor manufacturing imperfections, as the cracks will lead to premature fracture of the test piece in the UTM, which can then give skewed gradients for the elastic region of the stress-strain curves.



Figure 23: Histogram comparing average yield strength and the measured modulus of elasticity for the tested infill patterns

Further Avenues of Research

Whilst the experiment did yield useful results, the variation seen suggests that further quality control and/or procedural measures could increase its accuracy and reliability. This could involve testing a larger number of repeat specimens or manufacturing them with a more reliable 3D printer. In fact, the original experimental aim was to complete the tensile testing in situ in a scanning electron microscope, which would allow the surface imperfections of a specific dumbbell to be seen and thereby accounted for in the results during the test; however, this method could not be achieved due to insufficient funding to acquire the necessary apparatus. A more thorough investigation could assess the tensile performance of the specimens along multiple axes. This could yield more useful results for real-world applications, allowing the performance of parts to be optimised in 2 or 3 dimensions.

Conclusion

The investigation successfully revealed that the infill pattern of a 3D printed PETG object has an effect on its tensile performance. A continuous structure is optimal for applications where permanent deformation or fracture of the part must be avoided, but simplicity in this geometry is required to maintain fast print speeds and sufficient support for the layers of the part above. Hence, it can be concluded that infill pattern is an important consideration in 3D printing functional parts.

References


Figure 24 – Variation in fracture location of tested dumbbells

1. Gross, B. C., et al. (2014). Evaluation of 3D Printing and Its Potential Impact on Biotechnology and the Chemical Sciences. American Chemical Society.
2. Khlystov, N., Lizardo, D., Matsushita, K., & Zheng, J. (2013). Uniaxial Tension and Compression Testing of Materials.
3. Udomphol, T. (n.d.). Laboratory 1: Tensile Testing.
4. NDT Resource Center. (n.d.). Tensile Properties. Retrieved from https://www.nde-ed.org/EducationResources/CommunityCollege/Materials/Mechanical/Tensile.htm
5. Roylance, D. (2008). Mechanical Properties of Materials. Cambridge, MA.
6. Hosford, W. F. (1992). Overview of Tensile Testing. In Tensile Testing (p. 36). ASM International.
7. ASTM International. (2001). Standard Test Method for Tensile Properties of Plastics. In Annual Book of ASTM Standards, Vol 08.01 (pp. 46-58). West Conshohocken: ASTM International.
8. Croop, B. (2014, July 07). ASTM D638 Type V. Retrieved from https://www.datapointlabs.com/images/Specimens/ASTM_D638_TypeV.pdf
9. CAPINC. (2014, February 12). Frequently Asked Questions on von Mises Stress Explained. Retrieved from https://www.capinc.com/2014/02/12/frequently-asked-questions-on-von-mises-stress-explained
10. Leanissen, M. (n.d.). Biological Engineering Biomaterials. Retrieved from http://www.soe.uoguelph.ca/webfiles/kgordon/Academic%20Courses/Bone_for_Biomaterials.htm
11. Alvarez, K., Lagos, R., & Aizpun, M. (2016, December). Investigating the influence of infill percentage on the mechanical properties of fused deposition modelled ABS parts. Ingeniería e Investigación, pp. 110-116.
12. 3D Matter. (2015). What is the influence of infill %, layer height and infill pattern on my 3D prints? Retrieved from http://my3dmatter.com/influence-infill-layer-height-pattern/
13. Hodgson, G., Ranellucci, A., & Moe, J. (n.d.). Infill Patterns and Density. Retrieved from http://manual.slic3r.org/expert-mode/infill
14. Elert, G. (n.d.). Elasticity. The Physics Hypertextbook. Retrieved from http://physics.info/elasticity/

BIOGRAPHY

Paul Karavaikin, 17, UK

Paul is passionate about all things STEM, believing it to be key to making a positive, lasting impact on society. He is aspiring to study engineering at university and can be found tinkering with everything from 3D printers to scanning electron microscopes.





Suspending a Dipole Radar Scanner From a Helicopter – Improving Methods of Evaluating Glacial Water Resources

Amy Mackie (15) reports on a study focusing on the feasibility of suspending a dipole radar from a helicopter.

Abstract

Glaciers cover about 10% of the Earth's land, and water from melted glaciers is important for local people to drink and to irrigate their land, as well as for hydroelectric power. Understanding the thickness of and change in glaciers is essential for understanding and managing risks related to glaciers (e.g. floods and avalanches, which can have a massive impact on local areas and people) and the amount of meltwater available from glaciers, which is necessary for reducing water stress in drought-prone areas. Change in glaciers is also a useful measure of climate change. Both climate change and the increased expectation of food and water shortage as a result of droughts are important ecological and social problems. Improved knowledge of the volume of glaciers can contribute to a better and quicker response to potential problems. As glaciers are in some of the least accessible areas of the world, scanning glaciers is not without problems, and scientists and engineers are currently considering the best methods of scanning and how scanners may be transported. The study reported here focuses on the feasibility of suspending a dipole radar from a helicopter.

Introduction

Scanning glaciers in mountainous areas is problematic because of the terrain, temperature and accessibility. The project reported here is part of a research study led by Dr Hamish Pritchard of the British Antarctic Survey (BAS), and involves a series of tests to establish how a dipole radar used to scan glaciers could be suspended from a helicopter, with a view to improving methods of measuring glaciers and identifying practical difficulties. This project, undertaken by Allan McRobie, focuses specifically on structural aspects related to suspending the dipole radar scanner and is part of a larger project involving structural engineers, aerodynamic engineers, BAS and material scientists. Involvement in the project looking at structural factors formed part of my work experience placement. The significance of the project is that it can contribute to more reliable, effective and efficient scanning of glaciers, increasing our knowledge of their structure, changes in structure, mass and composition and, consequently, the effects of climate change. This will increase our knowledge of, and ability to respond to, the advantages and risks of glacial melt.

Literature Review

Glaciers are formed (sometimes over thousands of years) when fallen snow does not melt away, but remains in one place and is compressed over years until it becomes ice. Glaciers are found predominantly in the Arctic and Antarctic, but cover around 10% of land on the Earth, mostly in mountainous areas, with the highest concentration of these in Asia[1]. They can be mountain glaciers, flowing down mountainsides, or continental glaciers that cover huge areas of land. Glaciers are sensitive to temperature, which determines whether snow lasts or snow and ice melt. They can advance downhill when they reach a certain mass, or decrease in size when melting is greater than snowfall[2]. Temperature affects how glaciers grow, shrink or move, and so glaciers provide a useful measure of climate change over time. Different parts of a glacier respond differently to climate changes, so it is important to understand their structure to know more about climate change.

Glacial melt, or meltwater, is the water produced from glaciers as the temperature rises. It provides drinking water for people living near mountains and irrigates the land. Glacial melt can also provide hydroelectric power. Pritchard[1] reports that Asian glaciers provide water to meet the needs of 136 million people, a supply which can be maintained through droughts. For those living near glaciers, there is the danger of glacial lake outburst floods (GLOFs), when glacial melt causes the amount of water in lakes at the top of the glacier to be too great for the debris that creates a natural dam (a moraine dam) holding the lake in place[3]. GLOFs can destroy villages. Avalanches also pose threats to people living near glaciers, and these are linked to changes in glaciation. Understanding the structure of glaciers and changes over time is important for managing risks of floods and knowing when water supply may be short.

Although global warming has caused increased melting of glaciers in many areas, not all glaciers are shrinking, and it is important to know why this is. Raina and Srivastava, cited by Reynolds[4], have reported information on 9,575 glaciers in India. The Karakoram and the Himalayan ranges are of particular interest. The Karakoram range crosses Pakistan, India and China and, other than the polar regions, is the most glaciated area. Debris seems to be protecting the Karakoram from the Sun's heat, and Karakoram glaciers are not shrinking to the same extent as Himalayan glaciers.

In measuring glaciers, Reynolds[4] indicates that size, shape, height, depth and volume are important, as well as other details such as snow and debris (which shield the snow and can dam lakes) and the effects of tributaries to a main glacier. Structure (thickness and change in thickness) is important as it gives information on climate change, meltwater and flow that affects surrounding areas. The change of glaciers is difficult to measure because of limits to technology, although these have been reduced over time (Reynolds[4]). Measurements are also difficult due to the terrain, which is mountainous and hard to access. For structural information, surveys have, until quite recently, been limited to photographs from helicopters.

In earlier surveys, Kennett, Laumann and Lund[5] describe the scanning of glaciers in temperate regions from the ground, using snow-scooters to move the equipment. To overcome the difficulties of steep slopes, as with Norway's glaciers, they considered scanning from the air but recognised the difficulties of managing antennae measuring 15 m in length in mountainous regions. They reported scanning from fixed-wing aircraft since 1981[6], but commented on the problems of these in navigating narrow gaps and flying through regions with mountains. Kennett et al.[5] describe the use of a helicopter with suspended antennae, consisting of dipoles, hung from the main hook 20 m below the body of the helicopter, with the receiver linking a digital oscilloscope to a portable computer. They found the signal to be both reliable and strong, scanning up to 350 m of ice in a temperate region. This seemed to be a reasonable strategy and one that has been built on in this study.

Reynolds[4] reports imaging using radio waves through ice sheets since the 1950s, based on the principle of echo-sounding, where radio waves from a transmitter are reflected back from structures, in this case below the ice, to a receiver, which then allows imaging. The ice itself is quite transparent to radio waves, but water and land absorb and reflect radio waves differently. Radio-echo sounding (RES) can be used for distances up to 5 km. Since the 1970s, Ground Penetrating Radar (GPR) has been used with frequencies in the range 50-700 MHz, and there has been a lot of development of this since the 1990s. RES tended to be used from the air because of the extent of the areas to be covered and their inaccessibility, and GPR from the ground or water. Reynolds[4] indicates that satellites are also used to assess changes in water volume via Synthetic Aperture Radar (SAR), which detects features of water differently to ice and snow and can measure surface height. He also suggests that using a helicopter allows GPR antennae to be attached under the fuselage.

Current developments in scanning glaciers focus on the use of GPR on helicopters because of the helicopter's manoeuvrability in mountain ranges, and because GPR antennae have made attaching radar to helicopters easier. McCarthy, Pritchard, Willis and King[7] have shown that GPR is reliable, effective and efficient in measuring glacial debris thickness in comparison with invasive methods such as digging pits or measuring the thickness of debris on ice cliffs. However, they note the difficulties in mountain areas.
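For readers unfamiliar with the echo-sounding principle mentioned above, the ice thickness d is recovered from the two-way travel time t of the reflected pulse and the radio-wave velocity v in ice (roughly 168 m/µs, slower than in free space). This relation is standard radar practice rather than something given in the sources cited here:

d = \frac{v\,t}{2}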

Method

A structure to hold a dipole radar underneath a helicopter, based on the concept of a bridge, was modelled and then tested at full size. The initial design consisted of two 20-metre, lightweight, telescopic, tubular poles at the base, which had a tapered shape, with the dipole radar going through the poles.




Figures 1 (Above) and 2 (Left): Photos taken during Experiment 4. Some lines have been extended to illustrate some details outside of the camera’s field of view.

The poles were made out of GFRP (Glass Fibre Reinforced Plastic) rather than CFRP (Carbon Fibre Reinforced Polymer) because GFRP is lightweight (and so can be carried by the helicopter), whereas CFRP is conductive and would have interfered with the radar's signal, rendering the radar ineffective. The poles needed to be long so that the dipole radar could be longer and transmit lower-frequency radio waves, which are absorbed less by the ice and therefore give a better received signal. The two poles were connected with a coupler, and an arrow shape was needed at the end of the poles in order to reduce rotational vibration. String or wire was attached at different points on the poles and met at a point 10 metres from the poles. The string/wire was attached to a carabiner clip, which was connected to a cable that could hang from the helicopter.

In order to test that the structure holding the dipole radar would be effective, four experiments were carried out to determine any changes that needed to be made to the initial design.

Experiment 1

A very small model of the structure was made using a quad-copter, two much smaller poles (attached with white tape) and fishing line.

Experiment 2

Improvements were made to the first experiment in order to reduce the pendulum motion of the structure, which involved attaching a pole to the quad-copter and attaching the structure to the pole with extra string.

Experiment 3

To try to reduce the amount of resources needed to make the structure, only one fishing line was used. This was a fifth-scale model using 4 m poles.

Experiment 4

The next experiment was the full-scale model. In place of the lightweight, telescopic, tubular poles, masts designed for use as vertical antenna supports were used. Before carrying out this experiment, a risk assessment was done and measurements were taken in order to carry out calculations to see whether the model would buckle. Unfortunately, the calculations showed that the model would most likely buckle. To prevent this, the height was increased from 10 metres to 14 metres in order to reduce the compressive forces that would contribute to the buckling of the poles. Instead of a helicopter, a cherry picker was used to lift the structure, and an aluminium coupler was used to connect the two poles.
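The article does not show the buckling calculation itself; the standard check for a slender strut that such an assessment would typically use is Euler's critical load, given here only as an illustrative assumption rather than the project's actual working:

P_{cr} = \frac{\pi^2 E I}{(KL)^2}

where E is the Young's modulus of the pole material, I the second moment of area of its cross-section, L its unsupported length and K an effective-length factor set by the end conditions; raising the suspension point reduces the axial compression in the poles relative to this limit.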

Results

Experiment 1

This experiment concluded that the structure was effective; however, the structure moved in a pendulum motion, which needed to be reduced.

Experiment 2

Attaching a pole to the helicopter would not be feasible, as there would only be one attachment point on the helicopter, so the other pole could not be attached to it. A bundle of three poles might instead be used to add extra weight, making the pendulum motion less easy to excite; this would also add stiffness to help inhibit any potential for buckling and reduce the amplitude of any vibrational modes.

Experiment 3

The two 4-metre poles self-buckled, which would not be appropriate for the structure for the dipole radar.

Experiment 4

The poles did not buckle and the structure was successful.

Conclusion and Discussion

Overall, the bridge-like structure to suspend a dipole radar in a pole casing was successful, suggesting that this may be a good way to attach a radar to a helicopter to scan glaciers. However, the pendulum motions may need to be reduced, for example by adding weight, before the dipole radar can be suspended from a helicopter in this way, and consideration will need to be given to the properties of the materials used at extremely cold temperatures.

The tests highlight possible ways forward in improving scanning of the structure of glaciers, with a positive impact on our understanding of global warming, climate change, meltwater and other aspects of the effects of glaciers on ourselves and our environment.


Image Credits

Image at the beginning of the article: Guilhem Vellut from Paris (Glacier) [CC BY-SA 2.0 (https://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons


References


1. Pritchard, H. D. Asia's glaciers are a regionally important buffer against drought. Nature, 545, 2017, 169-174.
2. Hays, J. Glaciers: Their Mechanics, Structure and Vocabulary. Facts and Details, last updated January 2012, accessed 27 July 2017. http://factsanddetails.com/world/cat51/sub323/item1314.html
3. IRIN, The Inside Story on Emergencies. Himalayan glaciers melting more rapidly. Last updated 20 July 2012, accessed 27 July 2017. http://www.irinnews.org/report/95917/climate-change-himalayan-glaciers-melting-more-rapidly
4. Reynolds, J. M. Ground penetrating radar surveys for detailed glaciological investigations in the polar and Himalayan regions. In: Ramesh, R., Sudhaker, M. and Chattopadhyay, S. (eds). Scientific and Geopolitical Interests in Arctic and Antarctic. Proceedings of the International Conference on Science and Geopolitics of Arctic and Antarctic (iSaGAA), March 2013, LIGHTS Research Foundation, 296:273-288.
5. Kennett, M., Laumann, T. and Lund, C. Helicopter-borne radio-echo sounding of Svartisen, Norway. Annals of Glaciology, 1993, 17, 23-26.
6. Watts, R. D. and Wright, D. L. Systems for measuring thickness of temperate and polar ice from the ground or from the air. Journal of Glaciology, 27 (97), 459-469. In Kennett, M., Laumann, T. and Lund, C., op. cit.
7. McCarthy, M., Pritchard, H., Willis, I. and King, E. Ground-penetrating radar measurements of debris thickness on Lirung Glacier, Nepal. Journal of Glaciology, 2017, 63 (239), 543-555.

BIOGRAPHY

Amy Mackie, 15, UK

Amy Mackie is a 15-year-old student in Year 11 at Bolton School. Amy hopes to study Engineering at university and is very grateful for the amazing opportunity to undertake this work experience placement with Allan McRobie at the University of Cambridge, Department of Engineering.




The Young Scientists Journal is an international peer-review science journal written, reviewed and produced by school students aged 12 to 20. Founded in 2006 by Christina Astin and Professor Ghazwan Butrous, it has connected students from over 50 countries and has been the vehicle of choice for many in getting their work published. It is the oldest and largest organisation of its kind in the known universe.

Ambassadors

Our ambassadors are experts who advise and evangelise the journal. Team Leader: Dr. Dawn Leslie, UK Email: dawn.leslie@ysjournal.com Team Members: Christina Astin, UK Anna Grigoryan, Armenia Thijs Kouwenhoven, China Djuke Veldhuis, Denmark Ahmed Naguib, Egypt Niek d’Hondt, Holland Andreia Alvarez Soares, Switzerland Ajay Sharman, UK Alom Shaha, UK Charlie Barclay, UK Claire McNulty, UK Courtney Williams, UK Jess Wade, UK Kate Barwell, UK Katherine Mathieson, UK Katie Haylor, UK Lara Compston-Garnett, UK Liz Swinbank, UK Malcolm Morgan, UK Marc Tillotson, China Mark Orders, UK Martin Coath, UK Martyn Poliakoff, UK Mat Hickman, UK Meriame Berboucha, UK Patricia Emlyn-Williams, UK Heather Williams, UK Stefan Janusz, UK Laura Kendrick, UK/Australia

Peter Hatfield, UK/South Africa Joanne Manaster, USA Muna Oli, USA Paul Soderberg, USA Alan Sheridan, UK Jonathan Rogers, UK Jonathan Butterworth, UK Prof Clive Coen, UK Sophie Brown, UK Mei Yin Wong, UK/Singapore Steven Simpson, UK Jim Al-Khalili, UK Lisa Murphy, Ireland Jeremy Thomas, UK Becky Lowton, UK Sarah Bartlett, UK Maria Courel, UK Beverley Wilson-Smith, UK Mary Brady, USA Ajay Sharman, UK Phil Reeves, UK Tony Grady, USA Vince Bennett, USA Armen Soghoyan, Armenia Lee Riley, USA Lorna Quandt, USA Mike Bennett, USA Otana Jakpor, USA Pamela Barraza Flores, USA Corky Valenti, USA Steven Chambers, UK Sam Morris, UK Dawn Johnson, UK Debbie Nsefik Nsefik, UK Don Eliseo Lucero-Prisno III, UK Baroness Susan Greenfield, UK Joanna Buckley, UK Tobias Nørbo, Denmark Arjen Dijksman, France Ian Yorston, UK

Young Scientists Journal is proudly published by the Butrous Foundation in Canterbury, UK ISSN 0974-6102 (Print) ISSN 0975-2145 (Online)

Printed By:

All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the editor. The Young Scientists Journal and/or its publisher cannot be held responsible for errors or for any consequences arising from the use of the information contained in this journal. The appearance of advertising or product information in the various sections of the journal does not constitute an endorsement or approval by the journal and/or its publisher of the quality or value of the said product or of claims made for it by its manufacturer; advertisers have no prior knowledge of content. The journal is printed on acid-free paper.

Imagine a science education where school students help leading genomics researchers in the fight against tropical diseases. You need imagine no longer because this is a project run by the Institute for Research in Schools (IRIS). Find out more at: www.researchinschools.org

ADVERTISEMENT


A VIEW FROM MATHEMATICAL BRIDGE, QUEENS’ COLLEGE, CAMBRIDGE AT THE FOURTH ANNUAL YSJ CONFERENCE

LEARN. NETWORK. BE INSPIRED.

CHECK OUT EVENTS.YSJOURNAL.COM FOR UPCOMING CONFERENCES AND EVENTS.

INSPIRING AND NURTURING THE SCIENTISTS OF THE FUTURE THE WORLD’S PEER REVIEW SCIENCE JOURNAL WRITTEN & EDITED BY 12-20 YEAR OLDS

/YSJournal

@YSJournal

Publisher:

@YSJournal Supported By:

Partners:

ysjournal.com

