Scientific Harrovian Issue III 2018
THE POSSIBILITY OF A SPACE JUNK DISASTER
DENTAL CARIES
Exploring the origins of GOLD
CRYONICS
Azo dyes
Literature Study: Lifestyle and cardiovascular disease
Experimental Research: To investigate the motion of a rattleback
Literature Study: The future of fecal microbiota transplantation
Message from the Staff Editor-in-Chief
Dear Readers,

It is hard to follow in the footsteps of one of the legends of Harrow International School Hong Kong, but I was honoured to be asked to take on the position of Editor-in-Chief this year, and here we are with the third edition of this wonderful publication. The articles in this year's edition are a compilation of rigorous scientific investigations, as well as explorations of enigmatic scientific phenomena that have caught the authors' imaginations. The Scientific Harrovian is a dream come true for my predecessor, Dr Daniel, and, for me, a way to inculcate scientific curiosity among our students. The number of articles submitted, as well as their quality, was astonishing. This year there are articles submitted by students as young as Year 6. Two excellent Science EPQs are also published, and I had the privilege of supervising both. I would like to thank the Science Department, firstly, for providing the budget for the publication; secondly, all the Editors for their relentless work, as well as the design team that produced this marvellous cover and the templates; and lastly, the authors for their invaluable contributions. It takes a team to row the boat across the waters! I hope you will enjoy the interesting, fascinating and very factual articles in this, the third edition of the journal.

Yours faithfully,
Mrs Lotje Smith
Introduction: Defining the Scientific Harrovian
The Scientific Harrovian is the Science Faculty magazine, which publishes scientific articles of a high standard annually. It is also a platform for students to showcase the application of scientific methodology, where more experienced students guide authors and help them develop the skills to prepare for life in higher education and beyond.
Guidelines for all articles
All articles must be your own work and must not contain academic falsehoods. Articles must be factually correct to the best of your knowledge, concise, written in good English, and structured appropriately. Any claim that is not your own finding must be referenced.
Guidelines for scientific research articles
Scientific research articles must pose a question motivated by observations or by research of secondary sources. Articles must include a hypothesis, which is then tested by experimentation. The authors must make a detailed analysis of the experimental results and reach a conclusion with an evaluation of its validity. In addition, articles should be written in a formal tone in the third person. 75% of all scientific research articles must be on natural science topics, defined to include Physics, Chemistry, Biology, Earth Sciences and Material Sciences. However, other articles that meet the above criteria can also be considered for submission.
Guidelines for general science articles
Writers who are not planning to carry out their own experiments can opt to introduce a science topic to a general audience with no technical background, where 'science' is broadly interpreted. The writer could start from a point of interest and research widely, ultimately presenting an article well-supported by a variety of sources. Writers should feel free to supply high-quality royalty-free and copyright-free images.
Guidelines for theme articles
This year, the editors decided to ask a theme question: "What does Science mean to you?" We recommend that each year's editors ask a theme question, which can be answered from a variety of perspectives, as exemplified by this year's entries.
Copyright notice Copyright © 2018 by The Scientific Harrovian. All rights reserved. No part of this book or any portion thereof may be reproduced or used in any manner whatsoever without the express written permission of the publisher except for the use of brief quotations in a book review.
Joining the Editorial Team Will you commit to mentoring budding science writers? Do you have computer graphics design skills? Are you familiar with Adobe InDesign? Our team may have just the spot for you. Email scientific-harrovian@harrowschool.hk and the current Staff Editor-in-Chief to apply for a position.
The Editorial Team Y13s
Greg Chu Student Editor-in-Chief
Courtney Lam Editor
Y12s
Michelle Zhang Chief Graphics Designer
Benjamin Wang Editor and Layout Designer
Tracy Chen Front Cover Designer
Justin Kwok Editor
Karen Wong Editor
Juliana Tavera Back Cover and Washes Designer
Jeremy Wu Editor
Glory Kuk Section Covers Designer
Emily Tang Editor
Easter Chan Editor
Hoi Lam Wong Editor
Contents

What does Science Mean to You?
Julien Levieux - 2
Alex Wu - 3
Caden Wong - 4
Jett Li - 5

Application of Science
Cryonics: Can We Bring Back the Dead? (Alissa Lui) - 8
Déjà Vu (Elizabeth Lou) - 11
Exploring the Origins of Gold (Callum Sharma) - 14
Lifestyle and Cardiovascular Disease (Yin En Tan) - 20
The Future for Fecal Microbiota Transplantation (Hoi Lam Wong) - 23
The Possibility of a Space Junk Disaster (Gisele Lajeunesse, Jeremy Wu) - 26

Experimental Research
Failed Experiment: Lessons Learned (Julien Levieux) - 36
Investigation on the Effects of Resonance Using Pendulums (Greg Chu) - 41
To Investigate the Rotational Motion of a Rattleback (Greg Chu, Courtney Lam) - 52
A Review of Use of Fluoride on Dental Caries Prevention (Justin Kwok) - 69
Environmentally-Friendly Methods of Synthesizing Azo Dyes (Greg Chu) - 88
What does Science Mean to You?
What does Science Mean to You? Julien Levieux (Y10) If we were to delve deeper to find the answers to such questions, we would be entering an academic discipline known as philosophy. We search for answers by finding ways of justifying our perceptions, and then we engage in debate with others to try to evaluate our respective beliefs and determine the most valid answer. Such debates seldom end, as few answers are acceptable to all. There is, however, a more modern form of philosophy that has allowed us to answer our questions more easily: science. It is a type of philosophy that has played an increasing part in our lives, if it has not completely replaced the role of classical philosophy. Science relies on observation. When we have questions regarding a phenomenon in the universe, we observe the phenomenon that we do not quite understand, and then we come up with a hypothesis. Our hypothesis is then tested and evidence is gathered. If the evidence supports the guess, we can say that the hypothesis is true, for now. With the support of conclusive evidence, it is much harder for anyone to argue against specific theories. So, ever since the scientific revolution in the 16th century, we have been able to state, with more and more certainty, that we have a good understanding of the physical sciences. Yet there are limits to these observational, scientific approaches. Should we try to answer the question of what makes me who I am? Science can perhaps say that your body is made up of 65% oxygen, 18.5% carbon, 9.5% hydrogen and so forth. Perhaps it can list your organs, or describe the metabolic reactions that occur within them. Your attitude, emotions, spirit… these are things that we still have to constantly ponder. Sometimes, we may have to revert back to classical philosophy. These limitations of science have led many to see it as an unglamorous, cold-blooded and hardboiled academic discipline.
It isn't, because we cannot understand everything, no matter which approach is used. If you were to talk to a theoretical physicist about 'time', you would be surprised by some of the theories he may bring up. In fact, some may sound so far-fetched that they do not even sound 'scientific'. But it is also because of these knowledge gaps that we constantly continue to search for answers through science. And the concept of using evidence to justify hypotheses has applicability outside of science. We use evidence to justify our beliefs in Literature, Geography, History, Maths, Economics… the scientific method is no different from that of many other subjects. Science has been able to provide answers to natural phenomena better than any other academic discipline so far, but it has limits and it cannot answer everything. Nevertheless, it has led us to where we are today. Until we can find a better replacement for science, we should be grateful for what it is, and what it has done for us. So what does science mean to me? This is a really difficult question, considering the fact that science cannot answer it. The Oxford dictionary[1] defines it as "the intellectual and practical activity
encompassing the systematic study of the structure and behaviour of the physical and natural world through observation and experiment". In simpler terms, I think science is all about making the unknown seem more relatable. To be able to do that is a significant achievement in itself.
References 1.
https://en.oxforddictionaries.com/definition/science
What does Science Mean to You? Alex Wu (Y11) What is science? How much more do we need to discover? Science is always a really interesting topic to talk about, no matter where or who you are. There are a lot of questions surrounding science that we probably won't answer for ages. I have been to hundreds of museums, and completed a lot of events and projects related to science. As a student, I always feel that science is a mysterious yet interesting subject that will greatly influence me in the future. When I was a little boy, my parents took me to a lot of science museums. My impression of science then was that everything was huge: dinosaur bones, and large machines and engines whose uses I had no idea of at the time. I thought everything in the museum was cool. I even imagined myself as one of the metal balls, travelling down some mysterious tubes with no idea where I'd end up. After I started studying science at school, I realised that science is not only about big metal chunks; it comes in all shapes and sizes. In Chemistry, I learnt about the different elements of the periodic table and how they react with each other. In Biology, I learnt that everything is not what it seems, and that there are millions of species that cannot be seen with the naked eye. In Physics, I learnt about the different types of forces, and how different items like loudspeakers and robots function. In fact, I was so interested in Physics that I enrolled myself in an underwater robotics club. When building my underwater robot, I learnt that building a robot requires far more skill than building with Lego, and that one small mistake can lead to a huge malfunction. Despite many setbacks, my team and I still managed to participate in the underwater robotics competition, and achieved a strong result. Of course, I believe that science's influence on me will not end at school; it will also help me greatly in my future.
If I want to be a rocket scientist, then science is needed to model the aerodynamics, fuel consumption, and countless other facets of a mission. If I want to become a computer engineer, I will have to design and research new computer components. I plan to study engineering at university, and engineering is greatly influenced by science: Physics is needed to ensure that all of the components are constructed properly and without any flaws, and Chemistry is needed to find the best material to build the product with. Outside of work, I can use my knowledge of science to make my life easier by buying the most effective products to fulfil my everyday needs, like cooking or washing my clothes. There is no denying the fact that science is all around us. I have been interested in the world of science since I was a little boy, and although the meaning of science has changed a lot over the years for me, one thing has never changed: science is a really mysterious thing. My knowledge of science is still shallow, and I am eager to continue my scientific explorations here at Harrow.
What does Science Mean to You? Caden Wong (Y6) Science is everywhere. When I look around, anytime, anywhere, I know I am surrounded by modern architecture, state-of-the-art technology, and some of the greatest inventions of the 20th century. I feel very lucky that I get to witness the success of scientific inventions that have improved billions of lives, with more to come. Without these inventions, life would be inconvenient. How did science improve our lives? People in ancient times had to do lots of farming manually or with the help of animals; nowadays we have various machines to do the job. To get around in the past, you had to walk or travel by boat, but now travelling is easy, as there are many options: cars, trains, planes, buses, and so on. In the old days, when you became ill, you had to look for plants for treatment because there were no antibiotics, and people died from infections and diseases that were often fatal at the time: smallpox and scarlet fever, to name just a few. Now we have antibiotics and vaccines that work miracles. Without electricity, there was no modern lighting to illuminate the medieval nights, so people had to burn candles instead, and this made it very difficult for them to work or study in the evening, or come home late at night. Fast forward to the present day, and we have modern lights in every room and every street! Why am I so passionate about science? It is because science has improved my life ever since I was born. The iPhone was invented in 2007, the month before I was born, and now I have an iPhone 7, which is indispensable. Before I had a phone, I had a camera. It has recorded every moment of my life, from a baby, to a toddler, to a young boy. Now I am old enough to do almost all my studies using my computer, the internet, and even my phone. Tasks such as researching, looking words up and preparing presentations are made much easier with the help of technology.
We have all experienced the advantages of apps these days. Powerful apps such as WeChat not only help you send messages, but can also help you get taxis, transfer money, order take-aways and so on. When you are travelling, you can use Google Maps to navigate, look for restaurant reviews and get directions to your hotel. I have tried it and it works very well. Just make sure your phone has battery charge and an internet connection! Not only can science improve our lives, it can also save our lives. When we are sick, we go to the doctor for treatment. We can also get vaccinations to prevent ourselves from catching diseases such as poliomyelitis, tetanus and measles. Air pollution is another modern issue that we are deeply concerned about. According to a 2012 estimate by the World Health Organisation (WHO), ischaemic heart disease and strokes induced or exacerbated by outdoor air pollution accounted for about 72% of pollution-related premature deaths. Other air-pollution-related diseases such as chronic obstructive pulmonary disease and acute lower respiratory infections accounted for 4% of these deaths, and 14% were due to lung cancer caused by air pollution rather than smoking.[1] Because of this, we have invented various air purifiers to filter out dust and harmful particles in the air, which has greatly improved our health over the last decade. Other scientists are working hard to develop cleaner alternative energy sources to replace fossil fuels, which are the main sources of air pollutants. In the face of major pandemics, we have groups of scientists working together to create vaccines, so that there will not be mass casualties and global crises. One example is the Ebola outbreak that started in West Africa in 2014, which is now contained. Technology can also help solve crimes. Scientists use forensics to catch criminals with the help of fingerprints and DNA tests. Even ordinary
citizens can use fingerprints when they go through customs, and this has made our lives easier. Science means a lot to me. In most people's eyes, science is an invention, the latest technology, or simply a solution to a problem, but to me, science is a passion. It is my favourite subject in school because it is so interesting to learn. My favourite part of science is chemistry and learning about molecules. I also like doing experiments and trying new ones. Without science, we wouldn't have most of the things we have today, such as computers, or even know most of the things we know today.
References 1.
http://www.who.int/mediacentre/factsheets/fs313/en/
What does Science Mean to You? Jett Li (Y10) Science has become integral to our everyday lives. Everywhere we go we hear of new breakthroughs and new discoveries. Most of the time I honestly do not know what's being discovered or what the breakthrough is, but my ignorance goes to show how far we have come in such a short span of time. A century ago, any real scientific breakthrough would be revolutionary: it would change society for the better, push us in a new, more appealing direction, and everyone in the developed world would know about it. Nowadays it takes far more than just one discovery to shake the foundations of our world. However, it's only a matter of time before a new world-changing breakthrough comes along. At the rate we're progressing, we can expect to see another discovery on the level of Newton's three laws or Einstein's theory of relativity soon. This can, and will, change everything yet again. Science can be seen as a measure of humanity's progression. We started at the very bottom, with no knowledge of ourselves or the world. Through the thinkers and 'heretics' of human history, we have steadily made our way along the path of development and scientific discovery. Now, that pace has quickened to a full sprint, and no one is looking back. As John F. Kennedy famously said in his address at Rice University, "No man can fully grasp how far and how fast we have come." In his speech, Kennedy also noted how much our ignorance increases with our knowledge. The more we learn, the more we realise we don't know. Science is an infinitely expanding field of work; it can never truly die away. Science is the best measure of human achievement and knowledge. As we progress, our understanding of science deepens and we make more breakthroughs, more quickly. However, these breakthroughs only lead to more questions and a need for more answers, making it truly impossible to completely learn all there is to know about the universe.
Science is here to provide us with questions. It gives humanity a purpose; it stimulates the intellectual curiosity that is present in each and every one of us. To me, science is something that will always allow humanity to progress and prevent us from falling into stagnation. We will continue to improve and develop as a species under science's stimulation. That is its purpose. That is what it has done and what it will do until we either choose to stop or cease to exist.
Application of Science
Cryonics: Can we Bring Back the Dead? Alissa Lui (Y11)
An Overview
Derived from the Greek word "kruos", meaning "cold", cryonics is the practice of preserving people (who currently cannot be cured) at sub-zero temperatures, in the hope that they can be revived in the future with the help of advanced medical science. The extremely low temperature halts all chemical reactions in the body so that the information in the brain can be retained. As of 2013, 270 people had undergone cryonic suspension, and today thousands of patients are waiting for it.
How it Works
1. The procedure can only be carried out once a person has been pronounced legally dead. This is the short period before brain death when the heart has stopped beating but some brain function still exists. Oxygen and blood are then supplied to the brain to ensure that this slight function remains.
2. Heparin, an anticoagulant (also known as a blood thinner), is injected into the blood vessels. Since the body will then be covered in ice, heparin is needed to prevent the blood from clotting.
3. Water is removed from the body to prevent freezing within cells. When cells freeze, ice crystals form and expand, puncturing cell membranes and causing damage. The water is replaced with a cryoprotectant, an anti-freezing chemical that protects tissue from damage. This process is known as vitrification.
4. The body is cooled on dry ice, and then further, down to -130°C.
5. The body is stored head-down in a metal tank filled with liquid nitrogen at -196°C, to ensure that the brain stays immersed in the liquid if a leak occurs. The body is now in cryonic suspension.
A Cryobiologist's Obstacle
Unfortunately, no human has yet been successfully released from cryonic suspension, and the only way to do so would be through a thawing process. Although cold-blooded animals, like wood frogs, are capable of freeze-thawing in cold climates, the thawing process causes significant damage to humans.
At -196°C, the human body is extremely brittle. According to cryobiologist Dr Dayong Gao from the University of Washington, Seattle, "The body could easily fracture like glass during warming due to thermal stress." Water is at its densest at roughly 4°C, and it expands by about 9% when it freezes. Additionally, the brain, with roughly 100 billion neurones each making some 10,000 connections, is especially sensitive to temperature.

Caption: Embryos can now be cryopreserved for in-vitro fertilisation, along with complex organs like kidneys, livers, and intestines.
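The ~9% expansion figure above is enough to show why unprotected freezing destroys cells. As an illustrative sketch (the 9% comes from the text; the round body-water volume is an assumed figure, not from the article):

```python
# Illustrative arithmetic: volume gained when liquid water freezes,
# using the roughly 9% expansion quoted in the text above.
EXPANSION_ON_FREEZING = 0.09

def frozen_volume_litres(liquid_litres: float) -> float:
    """Approximate volume of ice formed from a given volume of liquid water."""
    return liquid_litres * (1 + EXPANSION_ON_FREEZING)

# An adult body holds very roughly 40 litres of water (assumed round number):
ice = frozen_volume_litres(40.0)
print(f"{ice:.1f} L of ice from 40 L of water")  # about 43.6 L
```

Spread across trillions of cells, that extra volume is what punctures membranes, which is why the water is replaced with a cryoprotectant before cooling.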
The Good, the Bad, and the Deadly
The Great Barrier Reef is expected to die before 2050, and cryopreservation is one of the few ways of delaying its death. Australian scientists are now freezing gametes from corals for regrowing in the lab. Corals are composed of many small organisms known as polyps. The scientists plan to use these corals to replace dead coral when the climate improves, since the warming globe causes coral bleaching, which increases the corals' chance of death. During coral bleaching, algae are expelled from the corals, causing them to turn white. As this example shows, not only may cryopreservation help save endangered species, but it might, in some distant future, prevent the extinction of the human race. While cryonics offers many advantages, that is not to say it has no downsides. With the world's population on the rise, cryonics would only contribute to further increasing the population of the next generations. Among the 7.6 billion people worldwide, 95% are breathing polluted air. 2.2 billion tons of waste are dumped into the oceans annually; in the last 200 years, 2.3 trillion tons of carbon dioxide have been released into our atmosphere. These are only a few of the many statistics that show the damage we have done to the earth. In other words, it may not be best to introduce cryonics to humanity at this time. The idea of going against God is a concern for some. The reason that life is priceless is that every person has only one chance to make the best of it. Cryonics, however, would undermine this concept of a fulfilling life. To put it into perspective, if your life were a movie, cryonics would be the pause button on the remote control. In theory, you could undergo cryonic preservation as many times as you want. Additionally, there are many unanswered questions. What if fossil fuels become depleted and we can no longer keep these patients at sub-zero temperatures? What if a cryopreserving organisation goes bankrupt? What will happen to the patients? What if you are released from cryonic suspension long after the death of your friends and family?
What our Future Holds
Cryonics depends heavily on medical advancements. Over the past two decades, scientists have developed a faster cure for hepatitis C, bionic limbs, and the artificial pancreas. In other words, release from cryonic suspension may not be very far off. Cryonicists predict that the very first revival will occur around 2040. However, how society will receive this scientific development is another story.
Bibliography
1. "Cryonics." Transhumanism, perfectionextended.weebly.com/cryonics.html.
2. "Cryonics: Leap of Faith Taking Chances at Life? - The News Geeks (TNG)." The News Geeks, 3 Oct. 2017, www.thenewsgeeks.com/cryonics-leap-faith-taking-chances-life/.
3. "Freezing Life: Cryogenics Is The Last Hope For Many Endangered Species." Singularity Hub, 31 May 2017, singularityhub.com/2011/12/06/freezing-life-cryogenics-is-the-last-hope-for-many-endangered-species/.
4. "Frozen Body: Can We Return from the Dead?" BBC News, BBC, 15 Aug. 2013, www.bbc.co.uk/science/0/23695785.
5. Stolzing, Alexandra. "Will We Ever Be Able to Bring Cryogenically Frozen Corpses Back to Life? A Cryobiologist Explains." The Conversation, 19 Apr. 2018, theconversation.com/will-we-ever-be-able-to-bring-cryogenically-frozen-corpses-back-to-life-a-cryobiologist-explains-69500.
6. Varma, Anuradha. "Are Frozen Embryos Real People?" The Indian Express, 21 Feb. 2017, indianexpress.com/article/lifestyle/health/are-frozen-embryos-real-people-4535790/.
7. Watson, Stephanie. "How Cryonics Works." HowStuffWorks Science, HowStuffWorks, 8 Mar. 2018, science.howstuffworks.com/life/genetic/cryonics3.htm.
8. "Facts About Environmental Issues." The World Counts, www.theworldcounts.com/stories/Facts-About-Environmental-Issues.
Déjà Vu
The feeling of having already lived through something Elizabeth Lou (Y12) I believe you all know the song "Déjà Vu" by Post Malone, and you might wonder what the phrase means. Is it something made up? How could we experience something we haven't gone through before? First of all, the French term 'déjà vu' translates to 'already seen'.[3] Dr Anne Cleary[1], a Professor in the Cognitive Psychology Program at Colorado State University, defines it as "a feeling of having experienced the current situation before, despite realising that the situation is new". It is most common among those aged 15-25, and almost two thirds of the global population has experienced this beautiful mystery of the brain. Its prevalence among 15-25 year olds suggests that déjà vu is an indication of a healthy mind. The difference between a real memory and déjà vu is the moment when you realise you should not be having that feeling of recognition, and that realisation may be the thing that older brains lose. Perhaps older people have déjà vu just as much as younger people, but are worse at spotting the difference between a real memory and a mind glitch.[4][5] One group that experiences déjà vu particularly often is people with a condition called "temporal lobe epilepsy".[2] Epilepsy causes brain cells to send uncontrolled electrical signals that affect the brain cells around them, and sometimes even all brain cells. These signals produce a knock-on effect, resulting in a seizure that makes people with epilepsy briefly lose control of their thoughts or movements. In temporal lobe epilepsy, seizures start in the temporal lobe, which is responsible for making and storing memories and is located beneath the lateral fissure on both cerebral hemispheres. Most patients experience ecstatic
hallucinations, which have been described in the medical literature for many years. The Russian novelist Fyodor Dostoevsky[8] described his own seizures, traits of which he gave to many of his characters. He would suddenly stop short and cry, "God exists! God exists!". He would feel that he was in heaven and that everything was unified and made sense. This could sometimes be followed by convulsions (a medical condition in which body muscles contract and relax rapidly and repeatedly). In these ecstatic hallucinations, there is a sudden transport of joy, and also a sense of being carried to heaven or into communication with God; these experiences seem intensely real to some people. In one case quoted in the book Hallucinations[6], a bus conductor in London, as he was punching tickets, suddenly felt that he was in heaven, and told all of his passengers so. He remained in a very elated state for three days, and it sounded as if he was in an almost postictal mania (an altered state of consciousness after an epileptic seizure, when the brain is recovering from the trauma of the seizure, characterised by drowsiness, confusion, nausea, hypertension, headache or migraine). He then continued at a more moderate level, remaining deeply religious, until he had another series of seizures three years later, which he said cleared his mind: he no longer believed in God and angels, in Christ, in an afterlife, or in heaven. Interestingly, the second conversion, to atheism, carried the same elated and revelatory quality as the first one, to religion. Many patients refuse to take medication, and some even find ways of inducing their own seizures, because they believe what they feel is real. People with temporal lobe epilepsy often report déjà vu before having a seizure. This indicates that déjà vu might be linked to the temporal lobe of the brain.
Déjà vu could be a mini-seizure in the temporal lobe of people who do not have epilepsy, but one that does not cause any other problems because it stops before going too far. This links back to the idea that déjà vu might be caused by a strong feeling of familiarity signalled by brain cells in the temporal lobe. However, this signal is checked by another part of the brain, the frontal lobe, whose functions include making rational decisions and ensuring that all the signals coming to it make sense. Due to the absence of a stimulus,
the frontal lobe will cancel the signal that was sent. For example[7], during sleep we might feel that dreams are realistic. This is a result of a weakened frontal lobe: logic is compromised during dreams, so the dreamer accepts bizarre situations and shifts in time and space as if they were real. On the other hand, research by Dr Anne Cleary[1] found that many people claim déjà vu is accompanied by a feeling of knowing what will happen next (premonition). She proposed that déjà vu could be caused either by spontaneous brain activity or by a memory that failed to be recalled, and noted that déjà vu often occurs in scenes that spatially resemble ones seen before. She therefore used virtual reality in her experiment, showing individuals both completely unrelated scenes and configurally similar scenes. A series of turns was shown to them in one scene, and they were told to predict the next turn in another scene. She discovered that déjà vu was more likely to happen in configurally similar scenes, and that people had no predictive ability at all regarding the next turn. She concluded that there might be a 'déjà vu illusion', in which déjà vu states bias people to think they know what will happen next, even when they do not. People who travel more or watch more films might be more likely to experience déjà vu, as there are more resources to be recalled: they might have encountered a very similar scene previously. It could also be a situation that we have actually experienced but have since forgotten. Nonetheless, the cause of déjà vu is still not certain, as it is very difficult to conduct further research on it due to its numerous variations, as well as the difficulty of replicating the scenes in the lab. You might be the first scientist to find out the real cause!
References
1. Déjà vu | Dr. Anne Cleary | TEDxCSU: https://www.youtube.com/watch?v=nFAvUkjba-Q
2. https://kids.frontiersin.org/article/10.3389/frym.2015.00001
3. https://en.wikipedia.org/wiki/D%C3%A9j%C3%A0_vu
4. https://www.newscientist.com/article/2101089-mystery-of-deja-vu-explained-its-how-we-check-our-memories/
5. https://curiosity.com/topics/young-people-experience-more-deja-vu-and-that-says-important-things-about-thebrain-curiosity/
6. Hallucinations by Oliver Sacks; https://scottnevinssuicide.wordpress.com/category/pre-frontal-lobe-hallucinations/
7. https://www.huffingtonpost.com/dreamscloud/dreams-feel-real_b_5652045.html
8. Epilepsy in Dostoevsky's novels, Hans Berger Clinic for Epilepsy, Oosterhout, The Netherlands: https://www.ncbi.nlm.nih.gov/pubmed/2348590
Application of Science
Exploring the Origins of Gold
and other heavy elements which emerge from neutron star collisions
Callum Sharma (Y9)
This article will present methods used by scientists to find the origins of gold. Gold comes from neutron stars, the collapsed inner cores of post-supernova stars. When two neutron stars collide (a merger), the impact ejects neutrons which clump together, creating lighter elements and subsequently heavy elements. For a brief period, intense gamma-ray bursts, gravitational waves, and light are emitted, all of which can be detected from Earth. In 2017, scientists detected gravitational waves and gamma rays from a neutron star merger. Immediately, scientific teams raced to find the collision's source and its afterglow. Eventually, scientists found that the merger cooled and faded over time, and its glow changed from a bright navy blue to a dimmer crimson red, indicating the creation of heavier elements such as gold.
Theory
It Begins with Neutron Stars
A supernova explosion is the most dazzling explosion in the sky. It releases more energy than our Sun will in its whole existence, and takes place when a star has run out of fuel. The remains of the explosion collapse to form what will become a neutron star. Under its own gravity, the core shrinks down to about 20 kilometres wide, yet it can still have a mass around twice that of the Sun.[7] During this collapse, protons and electrons combine to form neutrons, which is why this incredible new core is called a neutron star. A neutron star is composed of neutrons, which have no charge. It has a strong core and a liquid mantle, which gives it a magnetic field about a trillion times as strong as the Earth's.[7] Neutron stars have north and south magnetic poles and emit high-energy beams, commonly fed by material from an accompanying star.[7]
Neutron Stars Collide
A collision begins when two colossal neutron stars spin around one another like dancing partners. Their immense density and mass draw them together as they spiral inward and begin to scrape against each other. This contact sends gravitational waves throughout the universe. Finally, the neutron stars clash, and an array of electromagnetic waves is propelled out through the galaxy, including X-rays, gamma rays, radio waves, visible light and infrared rays:
1. X-rays have a wavelength of 0.01 to 10 nanometres and are commonly used by hospitals to assess damage inside the body.[3]
2. Gamma rays come from the radioactive decay of an atom's nucleus and consist of high-energy photons. They are used to kill cancer cells, sterilise medical equipment and act as radioactive tracers.[14]
3. Radio waves have frequencies of up to 10^11 Hz and are used for long-range communication.
4. Infrared rays have wavelengths longer than visible light but shorter than radio waves, from 0.8 micrometres to 1 millimetre.
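All the bands listed above are linked by one standard relation, c = fλ (wave speed = frequency × wavelength). As a quick sketch, not from the article but from the usual formula:

```python
# All the bands above obey c = f * wavelength (speed of light = 3.0e8 m/s).
C = 3.0e8  # m/s

def wavelength_m(frequency_hz):
    """Wavelength in metres for a given frequency in Hz."""
    return C / frequency_hz

def frequency_hz(wavelength_metres):
    """Frequency in Hz for a given wavelength in metres."""
    return C / wavelength_metres

print(wavelength_m(1e11))     # radio at the 10^11 Hz bound: about 3 mm
print(frequency_hz(0.01e-9))  # X-ray at 0.01 nm: about 3e19 Hz
```

This makes the ordering of the spectrum concrete: the shorter the wavelength, the higher the frequency (and photon energy).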
Images, clockwise from bottom left: Supernova (Credit: NASA/Dana Berry); Neutron star compared to the size of Manhattan [7]; Neutron star (Credit: NASA/Dana Berry)
Creation of Gold
Scientists estimate that neutron star mergers occur approximately once every 10,000 years in the Milky Way.[15] As the neutron stars spiral around each other and smash together, the collision ejects neutrons which coalesce into a warm cloud, creating light elements. Kasen, an acclaimed astrophysicist, describes the aftermath: "It's just a low-density mushroom cloud of debris flung out at a few tenths of the speed of light. After a day this cloud of debris [would have] spread out over something the size of the solar system".[2] Later, the light elements fused into heavy elements such as gold, which have more neutrons than the lighter elements. Over time, some elements and materials readily mix and react with other materials to form oxides, mixtures, and compounds.[2] But gold stays in its original form as molten metal. Huge chunks of gold then cooled with other solids to form separate layers of rock and metal. These chunks embedded themselves within asteroids, which carried them around the universe.[2] Asteroids then collided with each other to create the core of the Earth. After this, large pieces of asteroids collided with the Earth to add another layer of metal to its crust, which researchers say added a 'late veneer' to the globe.[2] The heat in the interior of the Earth kept separating the metals, and hot springs and volcanoes brought them close to the Earth's surface, which is where we mine metals, including gold, today.
Evidence and Proof
The origins of gold were discovered through groundbreaking research in 2016 and 2017. Scientists found that gravitational waves and gamma-ray bursts were emitted after the collision of neutron stars.
Gravitational Waves
Gravitational waves are ripples in spacetime, made by the movement of massive objects.
They are extremely feeble, making them exceptionally challenging to detect. Albert Einstein predicted gravitational waves in 1916 as a consequence of his general theory of relativity, although even he was initially doubtful that they actually occurred.[7] Events that release gravitational waves include the mergers of binary systems such as white dwarfs, black holes and neutron stars. In 2016, researchers announced the first direct detection of gravitational waves, using the Laser Interferometer Gravitational-Wave Observatory (LIGO). This development earned three scientists (Rainer Weiss, Barry Barish and Kip Thorne) the 2017 Nobel Prize in Physics. In 2017, Ryan Foley and his small team from Santa Cruz won the race to find the light from the source of a gravitational wave event produced by two neutron stars. Foley's team discovered it was happening in a galaxy 130 million light-years away called NGC 4993. The gravitational wave event was named GW170817. The signal carried a colossal amount of energy, and Mansi Kasliwal of the California Institute of Technology in Pasadena described it as "something like a billion times the luminosity of the Milky Way". Essential proof that the gravitational-wave event came from a merger was the enormous energy it released, it being the longest such event distinguished to date.[8] Another indication was the mass of the objects making the gravitational waves. "The frequency of gravitational waves depends on the mass of the substances that make them. The higher the frequency, the lower the mass," Kasliwal said. "The two merging objects that generated this new signal were about 1.3 and 1.5 times the mass of the sun, which is typical of neutron stars."[8]
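Kasliwal's point about frequency and mass can be made concrete with a back-of-the-envelope Keplerian estimate: for a circular binary, the gravitational-wave frequency is twice the orbital frequency. The sketch below uses the 1.3 and 1.5 solar-mass figures from the text, but the 100 km separation is chosen purely for illustration, and all relativistic corrections are ignored:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # one solar mass, kg

def gw_frequency_hz(m1_suns, m2_suns, separation_m):
    """Keplerian gravitational-wave frequency for a circular binary:
    GW frequency is twice the orbital frequency, f_orb = sqrt(GM/a^3)/(2*pi)."""
    m_total = (m1_suns + m2_suns) * M_SUN
    f_orbit = math.sqrt(G * m_total / separation_m**3) / (2 * math.pi)
    return 2 * f_orbit

# The 1.3 and 1.5 solar-mass pair from the text, 100 km apart (illustrative):
print(round(gw_frequency_hz(1.3, 1.5, 100e3)))  # roughly 194 Hz
```

A frequency of a few hundred hertz falls squarely in LIGO's sensitive band, which is why neutron star mergers are such good targets for the detector.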
Cataclysmic collision (Credit: Dana Berry, SkyWorks Digital)
The gravitational-wave frequency increases slowly with time, then shoots up as the time to merger approaches zero
The gamma-ray positions detected by the Fermi GBM
Gamma-Ray Bursts
Gamma-ray bursts (GRBs) are barrages of gamma-ray light that last a very brief time; they are the most energetic form of light.[10] They last anywhere from a few milliseconds to a few minutes, and are hundreds of times brighter than a typical supernova and about a trillion times as bright as the Sun. When a GRB happens, it is briefly the brightest source of cosmic gamma-ray photons in the observable Universe.[10] The Fermi Gamma-ray Space Telescope spots such bursts about 240 times every year. Gamma rays signal to scientists on Earth that something significant is happening in space. When the neutron stars collided there was a two-second-long gamma-ray burst, which was picked up by satellites. This was one of the closest to Earth ever seen, but it was very faint. Before LIGO, researchers could see these gamma-ray bursts but couldn't be sure of what they were or how far away they were happening. "We suspected, first of all, that short gamma-ray bursts came from the merger of neutron stars but had no other way of telling," said Imre Bartos from the University of Florida.[12] Now, incoming gravitational waves, combined with mathematical analysis, allowed scientists to figure out the direction and distance of the merger.
Light Emissions
Graph showing that the redder end of the spectrum carried on for longer than the bluer end
Charles Patrick, the first scientist to observe the afterglow, described it as 'a bright blue dot embedded in a giant elliptical galaxy, a 10-billion-year-old swarm of old, red stars about 120 million light-years away.' The early blue display would come from a sphere-like object moving at 10 percent of light's speed; later, the red display would emerge from neutron-rich material cast out quickly from the neutron stars' poles as they collided, which Metzger describes as 'like toothpaste squirted from a tube'. Kasen's Nature letter describes modelling which shows that, soon after the merger, there will be high-frequency (blue) light from the lighter elements, and later lower-frequency (red) light from the heavier elements.[13]
References
1. https://futurism.com/astronomy-picture-of-the-day-021714-double-quasar/ Futurism, February 17, 2014
2. https://www.popsci.com/neutron-star-gold/ Mary Beth Griggs, October 19, 2017
3. https://en.wikipedia.org/wiki/X-ray
4. https://www.scientificamerican.com/article/gravitational-wave-astronomers-hit-mother-lode1/ Lee Billings, October 16, 2017
5. https://phys.org/news/2017-10-neutron-star-smash-up-discovery-lifetime.html Marlowe Hood, October 16, 2017
6. https://futurism.com/neutron-stars-shine-bright/ Brad Jones, February 26, 2018
7. https://futurism.com/whats-the-difference-between-pulsars-quasars-and-magnetars/ Futurism, February 17, 2014
8. https://kids.kiddle.co/Gravitational_wave
9. https://www.space.com/38471-gravitational-waves-neutron-star-crashes-discovery-explained.html Q. Choi, October 16, 2017
10. https://imagine.gsfc.nasa.gov/science/objects/bursts1.html
11. https://news.ucsc.edu/2017/10/neutron-star-merger.html Tim Stephens, October 16, 2017
12. https://gizmodo.com/let-s-break-down-what-that-monumental-neutron-star-coll-1819613829 Mandelbaum, October 17, 2017
13. https://www.nature.com/articles/nature24453.pdf?origin=ppub
14. https://en.wikipedia.org/wiki/Gamma_ray
15. https://www.ligo.org/science/Publication-S6CBCLowMass/
Lifestyle and Cardiovascular Disease
Yin En Tan (Y12)
Cardiovascular diseases are conditions that affect the heart or blood vessels. They can be chronic, such as hypertension, coronary artery disease and peripheral arterial disease, or acute, such as heart attacks and strokes. According to the World Health Organisation (WHO), cardiovascular diseases account for 17.7 million deaths each year, roughly 31% of all deaths [1], making them one of the leading causes of death in the world. The good news, however, is that the majority of these conditions are preventable, not through particular medical therapies or medications, but through simple lifestyle changes that we can all make. In this article I will introduce three simple lifestyle changes and the scientific support behind them.
Eat more oily fish! [2]
The first hint that oily fish help prevent cardiovascular disease came from research that observed a strikingly low rate of heart attacks and strokes in the Inuit population, which consumes a lot of oily fish. It was later discovered that the rich omega-3 fatty acid content of these fish was responsible for the beneficial effect. The body cannot produce these essential fatty acids, so they must be obtained from food. Omega-3's mechanisms of action in the body are diverse, but its key function is to improve blood lipid composition. To put it simply, omega-3 fatty acids help increase the amount of 'good' fat carriers, high density lipoproteins (HDL), and help decrease the 'bad' ones, low density lipoproteins (LDL). Before exploring what HDL and LDL are, it is worth first explaining triglyceride and cholesterol. Triglyceride is one of the two most common biological units of what we call 'fat', the other being cholesterol. The two differ in function: triglyceride is simply a store of fat, whilst cholesterol has specific roles, such as synthesising hormones. Triglyceride and cholesterol are transported around the body attached to proteins, and in this form they are called lipoproteins. Depending on their density, lipoproteins are classified as High Density Lipoproteins (HDL) or Low Density Lipoproteins (LDL). LDL is considered unhealthy because it deposits triglyceride and cholesterol in the blood vessels, which contributes to the formation of fatty plaques that occlude blood flow. HDL, meanwhile, is considered healthy because it scavenges deposited fats from the blood vessels and removes them. So how does omega-3 fatty acid do this? It is believed to alter the levels of LDL and HDL at the genetic level, although its exact mechanism of action is still being explored.
Nevertheless, its benefit is strongly evident, and people in the UK are recommended to eat at least two portions of fish per week.
Less salty food! [3]
We always hear that salty foods such as potato chips, popcorn, and other junk foods are harmful for people with high blood pressure, and that salt (sodium chloride) should be restricted in our diets. But what does salt do to our bodies? Hypertension, or high blood pressure, is a condition that can lead to other cardiovascular diseases, and salt is its main dietary culprit. Hypertension is very common: the WHO estimates that 40% of adults have the condition, so being conscious of our salt intake is relevant to all of us. The ingredient of concern in dietary salt is the sodium ion. When sodium ions are absorbed into the blood, they increase the solute concentration and therefore the osmolality of the blood, so water molecules from around the body are drawn into the blood by osmosis. This extra volume of water causes the pressure in the blood vessels to rise, putting strain on the vessels and in turn on the heart, which has to pump against this high pressure. Furthermore, sustained high pressure in the blood vessels damages the capillaries of the kidneys (for example through aneurysms) and may cause many kinds of kidney problems. With such consequences in mind, the WHO recommends at most 5 grams of salt per day, which is just about a teaspoon. However, a study in Hong Kong showed that the average adult here consumes 10 grams, double what the WHO recommends. One cause of this high intake is that we consume salt unknowingly from many sources: fast food, processed food, some vitamin pills and savoury snacks are all salty dietary sources. One simple measure we can take is therefore to check nutritional labels to see how much salt we are consuming. [5]
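The salt figures above can be translated into sodium, the ion that actually matters, using the fact that sodium makes up roughly 39% of table salt by mass (from the relative atomic masses of Na and Cl). A small sketch, using the 5 g and 10 g figures quoted above:

```python
# Sodium is about 39% of table salt (NaCl) by mass.
NA_MASS, CL_MASS = 22.99, 35.45                  # relative atomic masses
SODIUM_FRACTION = NA_MASS / (NA_MASS + CL_MASS)  # ~0.393

def sodium_g(salt_g):
    """Grams of sodium in a given mass of salt."""
    return salt_g * SODIUM_FRACTION

print(round(sodium_g(5), 1))   # WHO limit of 5 g salt: about 2.0 g sodium
print(round(sodium_g(10), 1))  # Hong Kong average of 10 g: about 3.9 g sodium
```

This conversion is handy when reading nutrition labels, since many of them list sodium rather than salt.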
More physical exercise!
It is no secret that physical exercise benefits the human body in numerous ways, not just cardiovascular health. But what is the scientific basis for the cardiovascular benefits of physical exercise? [4] Exercise makes the heart contract more strongly, meaning more blood can be pumped with less effort. As an analogy, imagine starting to weight train. Over a period of time, you can gradually increase the weight you lift: initially 10 kg required your full effort, whereas later you can lift it with ease. It is a similar principle with a stronger heart. The heart pumps with ease at rest, lowering blood pressure and reducing its workload. This is beneficial because the heart then requires less oxygen and nutrients to function, allowing a higher reserve capacity. You might wonder how a stronger heart would lead to a lower chance of cardiovascular disease. Exercise reduces the formation of fatty plaques: it has been shown to increase HDL and lower triglyceride levels, which is correlated with reduced fatty plaques, similar to the effect of omega-3 fatty acids. Another mechanism works through nitric oxide. The increased circulation of blood during exercise increases blood flow in skeletal and cardiac muscles. Increased blood flow stimulates the endothelium to release nitric oxide, a molecule with a whole host of beneficial functions: it widens blood vessels, and it is an anti-inflammatory agent, an antioxidant, and an anticoagulant. Fatty plaque formation is considered an inflammatory and oxidative process, so nitric oxide is useful in preventing plaque development. Anticoagulation is also a very important function. One of the risks of fatty plaques is that they can rupture and trigger a blood clotting response called atherothrombosis, which can severely occlude blood vessels, causing heart attacks and strokes. [6]
Conclusion
In conclusion, cardiovascular diseases account for a significant portion of worldwide mortality and have a deadly reputation. However, it is important to be aware that simple lifestyle changes in diet and exercise can go a long way towards preventing these diseases, and it is worth understanding the physiological mechanisms behind them. It is never too late to make these changes, and even if you cannot manage all of them, picking up a few might greatly improve your health!
Resources
1. http://www.who.int/cardiovascular_diseases/en/
2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3279313/
3. http://www.cfs.gov.hk/english/programme/programme_rdss/FAQ_Sodium_Public.html
4. http://www.who.int/gho/ncd/risk_factors/blood_pressure_prevalence_text/en/
5. http://www.bloodpressureuk.org/microsites/salt/Home/Whysaltisbad/Saltseffects
6. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3251175/
The Future for Fecal Microbiota Transplantation
Hoi Lam Wong (Y12)
Everyone has their own distinct and diverse gut flora, and it has been scientifically shown that a healthy gut flora contributes to a person's overall health. Many patients with digestive disease rely on antibiotics, but excessive use leads to long-term alteration of a healthy microbiome, as bacteria develop multidrug-resistant genes. Feces is the excretory product of digestion; however, most people overlook its potential to become a treatment. The history of fecal microbiota transplantation (FMT) dates back to ancient China. Ge Hong, a Chinese physician, was the first recorded person to use this treatment, 1700 years ago. His medicinal treatment was known as "yellow soup" and was given to those who suffered from food poisoning and diarrhea. By the 16th century, the Chinese had developed a variety of feces-derived medicines to treat systemic symptoms such as fever.[3] Around the same time, the Bedouin community were consuming the feces of their camels as a remedy for bacterial dysentery. Acquapendente, an Italian anatomist and surgeon, coined the term "transfaunation", the transfer of gastrointestinal content from a healthy to a sick animal. The first use of fecal enemas in humans for the treatment of pseudomembranous colitis (swelling of the large intestine due to overgrowth of Clostridium difficile bacteria) was reported in 1958 by Eiseman. Interest in fecal microbiota transplantation is increasing due to new research into the gut microbiota, and this treatment has been improving the lives of many patients. New applications of FMT are likely to follow in the near future.
What is Fecal Microbiota Transplantation?
Fecal microbiota transplantation, also known as stool transplant, is currently in the early stages of research and development. It is the process of transplanting fecal bacteria from a healthy individual to a diseased recipient, to treat conditions such as Clostridium difficile infection and ulcerative colitis. The ultimate goal of a fecal transplant is to replace the patient's gut flora.[1]
Methods of Faecal Microbiota Transplantation In all procedures, the donor’s feces contains a healthy community of bacteria which in
turn improve the quality and diversity of the patient's gut flora. Donors are carefully screened and tested for any transmissible diseases such as HIV. Patients undergoing FMT must remain on their Clostridium difficile infection (CDI) antimicrobials until 2-3 days before the procedure. The day before the procedure, patients must undergo a bowel preparation to remove all feces from the colon, regardless of how the FMT is performed. To maximise the chance of success, the donor's feces should be used within 8 hours of extraction. [4]
Colonoscopy
A colonoscopy delivers the donor's feces into the patient's colon via the anus.
Enema
An enema is an instrument used to inject a solution infused with treated fecal bacteria into the large intestine through the rectum.
Orogastric tube
An orogastric tube is a tube used for enteral feeding. It passes from the mouth to the stomach, unlike a nasogastric tube, which passes through the nose instead.
Capsules
The feces is processed until only the bacteria remain, then concentrated inside three layers of gelatin capsule. This is suitable for patients who are uncomfortable ingesting feces through a tube. The capsules are water-soluble, so they dissolve easily in the gut.[6] In a study published in the Journal of the American Medical Association, researchers found that capsule treatment was as effective as delivery by colonoscopy. The study included 116 patients who were suffering from Clostridium difficile infection; half received colonoscopy treatment while the other half received capsules. After 12 weeks, 96% of patients in both groups were free of the infection. [5]
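The study's headline numbers can be unpacked with simple arithmetic (patient counts as reported above; the per-arm cure count is a rounded estimate, not a figure taken from the paper):

```python
# Arithmetic behind the capsule-vs-colonoscopy study quoted above.
total_patients = 116
patients_per_arm = total_patients // 2   # 58 in each treatment arm
cure_rate = 0.96                         # both arms, at 12 weeks

cured_per_arm = round(patients_per_arm * cure_rate)
print(patients_per_arm)  # 58
print(cured_per_arm)     # about 56 patients infection-free per arm
```

Roughly 56 of 58 patients clearing the infection in each arm is why the authors concluded the two delivery routes were equally effective.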
Gut Diseases
Clostridium difficile infection (CDI)
The increased use of antibiotics may lead to a potentially life-threatening infection, CDI, caused by the bacterium Clostridium difficile, which triggers colitis, a serious inflammation of the colon. At times, young people can develop this infection even without being exposed to antibiotics; simply failing to wash your hands thoroughly after exposure to the bacteria can lead to infection. Symptoms range from constant diarrhea to a rapid heart rate. If it is not treated promptly, the infection can advance and affect blood pressure, kidney function and the overall health of the patient.
Ulcerative colitis
Ulcerative colitis is one of the most common types of inflammatory bowel disease. It is thought to start with a bacterial infection that causes inflammation; after the infection has gone, the immune system continues responding, perpetuating the inflammation. Symptoms vary from diarrhea to suppressed growth in children. Anti-inflammatory drugs such as 5-aminosalicylates and corticosteroids are often the first step in treating ulcerative colitis. These drugs reduce the production of prostaglandins (a group of lipids that cause inflammation) and protect the lining of the stomach and intestines.
Recent Studies
Recently, researchers investigated the effects of fecal microbiota transplantation on 20 patients with ulcerative colitis. Two donors were screened and prepared to maximise the diversity of bacteria in the transplant. Around 50 grams of feces was mixed with a non-bacteriostatic saline to prepare an infusate; the mixture was then strained through gauze and given to the patients.[8] After four weeks, the researchers collected every patient's fecal samples and performed rectal biopsies to measure the diversity of their microbiome and their body's immune response. They found that after four weeks every patient's microbiome had become similar to their healthy donor's, and there was a significant improvement in the diversity of the gut flora. Moreover, the treatment reduced symptoms for 35% of the patients, and 15% of the patients achieved clinical remission. These results are promising, although further research is needed to go beyond the small sample size of 20 patients.[2] There are also risks to fecal microbiota transplantation, one of the most notable being weight change. A journal article published in Open Forum Infectious Diseases reported that a woman dramatically gained weight after a fecal transplant from her daughter. The woman started with a BMI of 26 (overweight); sixteen months later she had a BMI of 33 (moderately obese), and three years after that a BMI of 34.5 (moderately obese). [7]
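The BMI values in this case report follow from the standard definition, BMI = weight (kg) ÷ height (m)². A minimal sketch using simplified WHO-style cut-offs (the 'moderately obese' subdivision mentioned in the text is collapsed into a single 'obese' band here):

```python
# BMI = weight (kg) / height (m)^2, with simplified WHO bands.
def bmi(weight_kg, height_m):
    return weight_kg / height_m**2

def category(b):
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

# The BMIs reported in the case above:
for b in (26, 33, 34.5):
    print(b, category(b))  # 26 is overweight; 33 and 34.5 are obese
```

The jump from 26 to 33 is what made the case notable: the patient crossed an entire category boundary after the transplant.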
Conclusion The potential of fecal microbiota transplantation is still being uncovered by scientists. Today, this treatment is curing millions of patients with gut diseases and is likely to be developed to treat a wider variety of diseases in the future.
References
1. Mirsky S. "Fear Not the Fecal Transplant" Scientific American https://www.scientificamerican.com/article/fear-not-the-fecal-transplant/
2. Weill Cornell Medicine (2017) "Fecal Microbiota Transplant is safe and effective for patients with ulcerative colitis" Science Daily https://www.sciencedaily.com/releases/2017/04/170427091746.htm
3. P.F. de Groot, M.N. Frissen, N.C. de Clercq, and M. Nieuwdorp (2017) "Fecal microbiota transplantation in metabolic syndrome: History, present and future" Gut Microbes https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5479392/
4. Aroniadis, Olga C.; Brandt, Lawrence J. (2013) "Fecal microbiota transplantation: past, present and future" Current Opinion in Gastroenterology, LWW https://journals.lww.com/co-gastroenterology/Fulltext/2013/01000/Fecal_microbiota_transplantation___past,_present.14.aspx
5. Chen A. (2017) "Getting a fecal transplant in a pill might be just as effective as colonoscopy" The Verge https://www.theverge.com/2017/11/28/16709318/fecal-microbiota-transplant-colonoscopy-capsule-clostridium-difficile-infection
6. University of Calgary (2017) "'Poop pill' capsule research paves the way for simpler C. difficile treatment" ScienceDaily https://www.sciencedaily.com/releases/2017/12/171201091026.htm
7. (2018) "5 Dangers & Risks of FMT (fecal microbiota transplantation)" Natural Home Remedies https://www.homeremedyhacks.com/risks-fmt/
8. David A. Johnson (2013) "Fecal transplantation for C difficile: A how to guide" Medscape https://www.medscape.com/viewarticle/779307_3
The Possibility of a Space Junk Disaster and its implications for future space operations Gisele Lajeunesse (Y10) and Jeremy Wu (Y12)
Abstract
Thousands of launches since the dawn of the Space Race have led to a growing field of space debris. Most space junk is found in two zones: low Earth orbit and geostationary orbit.[1]
Fig. 1 Animation of space junk by ESA
Low Earth orbit, the region of outer space that encloses all orbits below 2000 km, is home to the International Space Station and thousands of other satellites.[2] According to NASA, there are an estimated 600,000 pieces of space junk ranging from 1 cm to 10 cm across, and on average one satellite is destroyed by a collision each year. Half a million pieces of space junk are estimated to be orbiting the Earth, with a million or more too small to track; only debris larger than about 4 inches (10 cm) can be tracked reliably.[3]
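For context, the speeds involved in these orbits can be estimated from the circular-orbit formula v = √(GM/r). A sketch using standard values for Earth, at the ISS's roughly 400 km altitude and the 2000 km upper bound of low Earth orbit mentioned above:

```python
import math

MU_EARTH = 3.986e14   # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6371e3      # mean Earth radius, m

def orbital_speed_kms(altitude_km):
    """Circular-orbit speed at a given altitude, v = sqrt(GM/r)."""
    r = R_EARTH + altitude_km * 1e3
    return math.sqrt(MU_EARTH / r) / 1e3

print(round(orbital_speed_kms(400), 2))   # ISS altitude: about 7.67 km/s
print(round(orbital_speed_kms(2000), 2))  # top of LEO: about 6.9 km/s
```

Speeds of several kilometres per second are why even centimetre-sized fragments carry enough kinetic energy to destroy a satellite.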
Introduction
In 1978, Kessler and Cour-Palais published the paper Collision Frequency of Artificial Satellites: The Creation of a Debris Belt.[4] The paper concluded that if the past growth rate of the catalogued population continued, then around the year 2000 a more hazardous population of small debris would be generated from fragments produced by random collisions between catalogued objects. This new source of debris would quickly produce a hazard exceeding that from natural meteoroids, and over a longer period of time the growth in small debris would become exponential, even if a zero net input rate into the catalogue were maintained. Shortly after the publication, the term "Kessler Syndrome" was introduced, mainly to describe the future collisional cascading predicted in Kessler's publication.
Concept of collisional cascading
The concept of collisional cascading[5] can be traced to studies of the origin of the solar system, ring formation around planets, and the origin of meteoroids and meteorites from asteroids. Fundamental orbital mechanics predicts that any two orbiting objects passing through the same distance from the body they orbit represent an unstable condition: the two objects will eventually collide and break up into a number of smaller fragments, creating an even larger number of objects sharing the same distance and therefore increasing the collision rate. The number and size of the smaller fragments depend on the collision velocity, which depends mostly on the orbital inclinations of the objects: a higher inclination results in a higher collision velocity, and consequently more numerous small objects that more frequently break up larger objects. Early in this collision process, most of the total area of the population is in the larger objects, so collisions between larger objects dominate the process of turning large objects into a distribution of smaller objects. The current population of man-made objects in Earth orbit is in this early collision stage and represents an increasing hazard to spacecraft operating in these regions. Over a much longer period of time, the resulting very large number of smaller objects shifts the total area to be dominated by smaller objects, so that collisions between much smaller particles begin to dominate the process. In addition, each collision reduces both the inclination and eccentricity of the population, until eventually only a disk of orbiting dust remains, leaving a ring around the equator of the central body, much like the rings of Saturn.
If the ring is sufficiently far from the central body, the gravitational forces within the ring begin to dominate, allowing the dust to coalesce into a planet around the Sun, or into a moon around a planet. This final process is similar to that described by Hannes Alfvén with his "apples in a spacecraft" analogy[6], where all the loose apples in a spacecraft eventually end up in the centre of the spacecraft. However, a ring is not likely ever to form in low Earth orbit, because atmospheric drag will remove dust particles long before their inclinations approach zero. Unfortunately, as a number of investigations have concluded, atmospheric drag will not remove larger collision fragments faster than they can be generated by the current population of intact objects. Consequently, certain regions of low Earth orbit will likely see a slow but continuous growth in collision fragments that will not stop until the intact population is reduced in number. There are three independent components of the predictions that can be examined:
1. The frequency of collisions between catalogued objects.
2. The consequences of collisions.
3. The rate of atmospheric decay of collision fragments.
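Kessler's feedback loop, in which collisions create fragments and more fragments mean more collisions, can be illustrated with a deliberately crude toy model. Every parameter below is invented for illustration only (none comes from Kessler's paper): collisions per year scale with the square of the population, each collision turns two objects into many fragments, and drag removes a small fraction of objects each year. The point is the threshold behaviour: above a critical population the cascade grows even with drag; below it, the population decays.

```python
# Toy model of collisional cascading (all parameters illustrative).
def simulate(n0=10_000, years=100, pair_rate=5e-9,
             fragments=100, drag_loss=0.002):
    """Track a debris population: collisions scale as n^2, each collision
    turns 2 objects into `fragments` pieces, drag removes a fixed fraction."""
    n = float(n0)
    history = [n]
    for _ in range(years):
        collisions = pair_rate * n * n / 2   # expected collisions this year
        n += collisions * (fragments - 2)    # net fragments added
        n -= drag_loss * n                   # removed by atmospheric drag
        history.append(n)
    return history

grow = simulate(n0=10_000)   # above the critical population: cascade grows
decay = simulate(n0=5_000)   # below it: drag wins and the population shrinks
print(round(grow[-1]), round(decay[-1]))
```

With these made-up numbers the growth is slow, but because the collision term is quadratic it accelerates as the population climbs, which is the qualitative behaviour Kessler predicted.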
Application of Science

Incidents of space crashes leading to debris
As companies and government agencies continue to launch more spacecraft, concerns are mounting about the likelihood of a “Kessler Syndrome” event occurring. In 2009, such concerns were heightened when a hypervelocity collision occurred between two satellites, Iridium 33 (owned by Iridium Communications Inc.) and Cosmos 2251 (owned by the Russian Space Forces), which collided at a speed of 42,120 km/h.[7] The collision destroyed both satellites. Iridium 33 was operational at the time of the collision; Cosmos 2251 was launched on June 16, 1993, and went out of service two years later, in 1995.[8]
Fig. 2 Debris field after 20 minutes
Fig. 3 Debris field after 50 minutes
Fig. 4 Debris field after 50 days

NASA estimated that the satellite collision created approximately 1,000 pieces of debris larger than 10 centimetres (4 inches), in addition to many smaller ones.[9] By July 2011, the U.S. Space Surveillance Network had catalogued over 2,000 large debris fragments.[10] NASA determined the risk to the International Space Station, which orbits about 430 kilometres below the collision course, to be low.[11] By December 2011, many pieces of debris were in a steady orbital decay towards Earth and expected to burn up in the atmosphere within one or two years.[12] In 2016, Space News listed the collision as the fourth biggest fragmentation event in history, with Iridium 33 producing 628 pieces of catalogued debris, of which 364 tracked pieces remained in orbit as of January 2016.[13]
The Possibility of a Space Junk Disaster
Other notable incidents include the 2007 Chinese anti-satellite missile test[14] and the reentry of the Tiangong-1 space station in 2018.[15]
Fig. 5 Reentry of Tiangong-1
Fig. 6 Chinese anti-satellite missile test
The Consequences of Collisions
Thirty years ago, there was little data on the consequences of collisions[16] between large man-made orbiting objects. The existing hypervelocity data were mainly the results of tests conducted to improve spacecraft protection from meteoroids, or to understand the fragmentation of rocks on the lunar surface or in the asteroid belt. Early models drew from that data. However, in the past 30 years, ground tests have been conducted to understand both the size distribution resulting from collisions in orbit and the threshold for a catastrophic breakup. Furthermore, military tests involving an intentional collision in orbit have provided additional data. The results of these tests generally confirmed the early models, but also improved them and offered new insight into both the mass of the fragments and the orbits into which these fragments are ejected. For the purpose of classifying collisions by the amount of debris generated, collisions between catalogued objects can be divided into three types:
1. Negligible non-catastrophic
These collisions do not significantly affect either the long-term or short-term environment. This type of collision produces a negligible amount of debris and has therefore been ignored in past modeling. However, now that collisions between catalogued objects can be identified, these collisions will not be confused with more important ones. When a fragment collides with a thin surface and nothing else, the total mass of debris generated is limited by the mass of the fragment, which is usually small. The Cerise collision[17] is an example. Had Cerise not been an operational spacecraft, whose operators were able to determine that only the gravity-gradient stabilization boom had been severed, this event would likely have been assumed to be a more important non-catastrophic collision, producing much more small debris.
2. Non-catastrophic
These collisions contribute only to the short-term environment. In general, a non-catastrophic collision is one between a fragment and an intact object, and will generate an amount of debris that is about 100 times the mass of the impacting fragment. A significant fraction of the mass goes into sizes that are too small to catalogue, yet pose a hazard to operational spacecraft. Only a few of the fragments may be large enough to catalogue, so these collisions do not represent a significant contribution to long-term collisional cascading, but they can represent a significant short-term contribution to the hazard to operational spacecraft.
3. Catastrophic
This type of collision contributes to both the short-term and long-term environment. A catastrophic collision produces a small-fragment population similar to that of a non-catastrophic collision, plus a population of larger fragments that do significantly contribute to collisional cascading. From the combination of ground tests and on-orbit tests, it has been concluded that the energy threshold for a catastrophic breakup is 40 Joules per gram of target mass.[18] This corresponds to a target-mass to projectile-mass ratio of 1250 at 10 km/sec. In addition, the same tests concluded that 90 to 100 of the generated fragments are large enough to catastrophically break up another target of the same size. Therefore, catastrophic collisions are important to both the short-term and long-term orbital environment.
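As a quick sanity check, the 40 J/g threshold and the 1250:1 mass ratio quoted above are mutually consistent: a projectile moving at 10 km/s carries about 50,000 J of kinetic energy per gram of its own mass, so it can catastrophically break up a target roughly 1250 times heavier than itself. A minimal sketch of this arithmetic (the function name is my own):

```python
# Sanity check (illustrative): the ~40 J per gram catastrophic-breakup
# threshold implies a target-to-projectile mass ratio of about 1250
# at a 10 km/s collision speed.

def max_target_mass_ratio(threshold_j_per_g=40.0, speed_m_s=10_000.0):
    """Largest target/projectile mass ratio still broken up catastrophically."""
    # Kinetic energy per gram of projectile: 0.5 * v^2 J/kg, divided by 1000 for J/g.
    ke_per_gram = 0.5 * speed_m_s**2 / 1000.0  # 50,000 J per gram at 10 km/s
    return ke_per_gram / threshold_j_per_g

print(max_target_mass_ratio())  # 1250.0
```

The ratio scales with the square of the collision speed, which is why high-inclination (high relative velocity) encounters are so much more destructive.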
Model Prediction
Given that there is the potential for exponential growth in the debris population due to collisions, the question becomes: what conditions will lead to this growth? Models[19] can be run to predict the environment many years into the future. The LEGEND model can be used for this prediction. LEGEND is a full-scale, three-dimensional debris evolutionary model and is the NASA Orbital Debris Program Office’s primary model for the study of the long-term debris environment. The model provides debris characteristics (number, type, size distribution, spatial density distribution, velocity distribution, flux, etc.) as functions of time, altitude, longitude, and latitude. In addition, LEGEND includes both historical simulation and future projection components. Populations included in the model are active and spent satellites, rocket bodies, breakup fragments, mission-related debris, and sodium-potassium (NaK) droplets, with a minimum size (diameter) threshold as small as 1 mm.
LEGEND Model Instability Predictions
The model is three-dimensional in order to simulate possible future environments as accurately as possible. Consequently, each run of the model will predict a slightly different future environment, but many runs can be averaged to determine the average, or most probable, future environment. Also, because it is three-dimensional, the potential contribution of different objects toward the future environment can be evaluated. During the 200 years shown in Figure 7, the results appear to show a runaway environment. However, the assumption of “no future launches” allows the 500 intact objects currently in this band to slowly be removed by atmospheric drag and collisions. The LEGEND model also confirmed that the altitude band contributing the most collision fragments lies between 900 km and 1000 km. An examination of the contribution to collisions by inclination concludes that two clusters of inclinations within this altitude band contribute most heavily to the number of collision fragments: those around 83 degrees and those around 99 degrees.[20] Collision probabilities are greatest between any two objects when the sum of their inclinations is near 180 degrees.[21]

Fig. 7 LEGEND Model Prediction of the Number of Catalogued Objects between 900 km and 1000 km. Assumes no launches after 2004.
Proposed solutions
According to ESA, several methods have been proposed to deal with the thousands of fragments orbiting Earth, such as lasers, tugs, drag enhancement devices, momentum exchange tethers, and more.[22] The 2010 National Space Policy of the United States directed NASA and the Department of Defense to pursue additional research efforts to remove debris.[23] Unfortunately, the cost and complexity - not to mention legal concerns - have so far prevented the proposals from becoming reality.
Controlling the growth of debris in LEO (low-Earth orbit)
Realistically, there are only two ways[24] for an intact object in Earth orbit to avoid an eventual catastrophic collision: be removed from orbit, or get out of the way of an approaching object while in orbit. Both techniques may be necessary to control the growth in LEO debris.

(i) Post Mission Disposal and Active Debris Removal
The international community has slowly adopted the voluntary policy of Post Mission Disposal (PMD). The PMD guidelines require a payload or upper stage to be removed from orbit within 25 years after the end of its operational life. This can be fairly easily accomplished on a new payload or upper stage by using existing propulsion systems or by installing a device to lower its orbit sufficiently that it re-enters within 25 years. However, few objects already in orbit have that capability. Consequently, since the current environment is already above a critical density, even 100% compliance with these guidelines would not prevent the debris environment from increasing. Therefore, some Active Debris Removal (ADR) of objects currently in orbit is required to control the growth in LEO debris. The issue then becomes one of minimizing the number of objects that must be removed in order to prevent the environment from exceeding an acceptable level. Again the LEGEND model was used, this time to determine how much individual intact objects are likely to contribute to the population of collision fragments.[25] After excluding obvious non-contributors to future collision fragments, LEGEND adopted the criterion Ri(t) to determine which intact objects were most likely to contribute to the future environment, as defined in equation (1), where Pi(t) is the probability of collision for object i at time t, and mi is the mass of object i:

Ri(t) = Pi(t) × mi    (1)
This criterion is applied to the intact objects in LEGEND, and those with the highest value of R are assumed to be removed at a given rate. Figure 8 shows these results, assuming 90% compliance with PMD, with ADR rates of 2 objects/year and 5 objects/year. Note that PMD plus a removal rate of 5/year will prevent the number of catalogued fragments from increasing beyond the current catalogue.
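The selection step described above amounts to scoring each intact object by collision probability times mass and removing the highest scorers first. A minimal sketch of that ranking (this is not the actual LEGEND code, and all the object names and numbers below are made up for illustration):

```python
# Illustrative sketch (not the actual LEGEND code): ranking intact objects
# for Active Debris Removal by the criterion R_i(t) = P_i(t) * m_i,
# i.e. collision probability times object mass. All values are hypothetical.

candidates = [
    # (name, collision probability per year, mass in kg) -- made-up values
    ("spent upper stage A", 1.2e-4, 9000.0),
    ("defunct payload B",   3.0e-4, 1500.0),
    ("fragment C",          5.0e-4,   10.0),
]

def removal_priority(objects):
    """Sort objects by R = P * m, highest (most worth removing) first."""
    return sorted(objects, key=lambda o: o[1] * o[2], reverse=True)

for name, p, m in removal_priority(candidates):
    print(f"{name}: R = {p * m:.3f}")
```

Note how the massive upper stage outranks the fragment despite its lower collision probability: mass matters because a catastrophic breakup of a heavy object seeds far more long-lived debris.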
Fig. 8 LEGEND Model Predictions of the Number of Catalogued Objects for 3 Scenarios: PMD only, PMD plus ADR of 2 objects per year, and PMD plus ADR of 5 objects per year

If there is less than 90% compliance with PMD, then the rate of ADR would have to be increased. However, unless an inexpensive technique can be developed to perform ADR, it would be much more cost-effective to require mandatory compliance with PMD.

(ii) Active Collision Avoidance
As with PMD, Active Collision Avoidance (ACA) is not a total solution to controlling the growth of future debris, since most of the current population does not have the capability to maneuver. Additionally, tracking and position prediction have not been optimized for this purpose. However, given sufficient resources, it could be a partial solution for future operational payloads. To become a realistic option, prediction accuracy must be improved in order to minimize the number of false maneuvers and gain the confidence of the payload operators. Just as debris removal concentrates on the more probable future debris sources, ACA could also concentrate on the more probable sources, reducing the burden of both PMD and ADR. The largest uncertainty in predicted satellite position is the down-range position, which is critical to predicting a potential collision between two objects with velocity vectors perpendicular to one another. However, near “head-on” collisions between objects with inclinations of 83 and 99 degrees were found to be major contributors to future collision debris, and these types of encounters are less sensitive to the down-range uncertainty. Moreover, the objects of greatest concern are the more massive objects above 900 km altitude, which have lower down-range uncertainty than less massive objects at lower altitudes.
Consequently, if collision avoidance is optimized for the purpose of preventing collisions of operational payloads that are in orbits most likely to contribute to future collision debris, then ACA could become a significant contributor toward controlling the growth of debris in LEO.
Conclusion
There is little doubt that collisional cascading, the so-called “Kessler Syndrome”, is a significant source of future debris, as predicted over 30 years ago. Although new operational procedures developed over this period have slowed the growth in orbital debris, these procedures have not been adequate to prevent growth in the debris population from random collisions. A more focused collision avoidance capability may help, but without adherence to current guidelines and an active debris removal program, future spacecraft operators will face a growing orbital debris population that will increasingly limit spacecraft lifetimes.
References
1. NASA Safety Standard 1740.14, Guidelines and Assessment Procedures for Limiting Orbital Debris. Office of Safety and Mission Assurance, 1 August 1995.
2. “IADC Space Debris Mitigation Guidelines”. Inter-Agency Space Debris Coordination Committee, 15 October 2002.
3. Garcia, Mark. “Space Debris and Human Spacecraft.” NASA, 14 Apr. 2015.
4. D.J. Kessler and B.G. Cour-Palais, “Collision Frequency of Artificial Satellites: The Creation of a Debris Belt”, Journal of Geophysical Research, Vol. 83, No. A6, pp. 2637-2646, 1 June 1978.
5. “The Kessler Syndrome: Implications to Future Space Operations”, American Astronautical Society, AAS 10-016, 10 February 2010.
6. Hannes Alfvén, “Motion of Small Particles in the Solar System”, Physical Studies of Minor Planets, NASA SP-267, pp. 315-317, 1971.
7. Marks, Paul. “Satellite collision ‘more powerful than China’s ASAT test’”. New Scientist, 13 February 2009.
8. “Russian and US satellites collide”. BBC News, 12 February 2009.
9. Oleksyn, Veronika. “What a mess! Experts ponder space junk problem”. 19 February 2009.
10. “Orbital Debris Quarterly News, July 2011”. NASA Orbital Debris Program Office.
11. Dunn, Marcia. “Big satellites collide 500 miles over Siberia”. 12 February 2009.
12. Orbital Debris Safely Passes International Space Station (web broadcast). NASA, 23 March 2012.
13. “10 breakups account for 1/3 of catalogued debris”. Space News, 25 April 2016.
14. “Is China’s Satellite Killer a Threat?” Tech Talk. Archived 22 February 2011.
15. Loria, Kevin. “China’s First Space Station Was Decimated in a Fireball over the South Pacific - Here’s How It Went from Launch to Crashing Back to Earth.” Business Insider, 2 Apr. 2018.
16. Klinkrad, Heiner. Space Debris: Models and Risk Analysis. Springer, 2014.
17. “History of On-Orbit Satellite Fragmentations” (PDF). NASA Orbital Debris Program Office, June 2008, pp. 368-369.
18. D.S. McKnight, Robert Maher and Larry Nagl, “Fragmentation Algorithms for Satellite Targets (FAST) Empirical Breakup Model, Version 2.0”, prepared for the DOD/DNA Orbital Debris Spacecraft Breakup Modeling Technology Transfer Program by Kaman Sciences Corporation, September 1992.
19. NASA Orbital Debris Program Office, www.orbitaldebris.jsc.nasa.gov/modeling/evolmodeling.html.
20. J.-C. Liou and N.L. Johnson, “Instability of the present LEO satellite populations”, Advances in Space Research, Vol. 41, pp. 1046-1053, 2008.
21. Shin-Yi Su and D.J. Kessler, “A Rapid Method of Estimating the Collision Frequencies between the Earth and the Earth-Crossing Bodies”, Advances in Space Research, Vol. 11, No. 6, pp. (6)23-(6)27, 1991.
22. ESA. “Mitigating Space Debris Generation.” European Space Agency, www.esa.int/Our_Activities/Operations/Space_Debris/Mitigating_space_debris_generation.
23. “National Space Policy.” Office of Space Commerce, www.space.commerce.gov/policy/national-space-policy/.
24. ESA. “Mitigating Space Debris Generation.” European Space Agency, www.esa.int/Our_Activities/Operations/Space_Debris/Mitigating_space_debris_generation.
25. J.-C. Liou, N.L. Johnson, N.M. Hill, “Controlling the growth of future LEO debris populations with active debris removal”, Acta Astronautica, in press.
Image Sources
1. http://www.esa.int/Our_Activities/Operations/Space_Debris
2. https://en.wikipedia.org/wiki/2009_satellite_collision#/media/File:Collision-50a.jpg
3. https://upload.wikimedia.org/wikipedia/commons/b/b9/Collision-20a.jpg
4. http://spaceflight101.com/close-orbital-encounter-january-7-2017/
5. http://spaceflight101.com/abandoned-chinese-space-laboratory-re-enters-harmlessly-over-pacific-ocean/
6. https://www.telegraph.co.uk/news/worldnews/1539948/Chinese-missile-destroys-satellite-in-space.html
7. http://webpages.charter.net/dkessler/files/Kessler%20Syndrome-AAS%20Paper.pdf
8. http://webpages.charter.net/dkessler/files/Kessler%20Syndrome-AAS%20Paper.pdf
Experimental Research
Failed Experiment: Lessons Learned The importance of accuracy and precision in science Julien Levieux (Y10)
Introduction: what are accuracy and precision?
Precision and accuracy are very important in science. Precision is the degree to which several measurements provide answers very close to each other. So a metre ruler is said to be less precise than a 30 cm ruler, because in theory a 30 cm ruler will provide a smaller range of results than a metre ruler. Accuracy describes the nearness of a measurement to the true value. The degree of precision and accuracy usually determines the success of an experiment, and whether scientific results are credible enough to justify a hypothesis. Small changes in precision can lead to huge differences in final results. This is what happened when Max Ho and I conducted an experiment to make soap a few months ago, and the experiment failed. So, in this article, I wish to demonstrate the importance of precision and accuracy.
Our soap making experiment
The process of making soap is known as saponification. It involves reacting potassium or sodium hydroxide with natural fats (triglycerides). A triglyceride is made up of glycerol and fatty acids. The sodium hydroxide combines with the fatty acids to form salts of long-chain fatty acids (soap) and glycerol.
From left, the third column gives the values for NaOH; the fourth column, for KOH.
To make soap, one first calculates how much sodium hydroxide is needed to react with a known amount of triglyceride. The saponification value shows how many milligrams of sodium hydroxide react with 1 gram of triglyceride, and is usually listed in a table of saponification values. The decimal values show the number of grams of sodium hydroxide or potassium hydroxide needed to saponify 1 gram of triglyceride. So, to calculate the amount of sodium hydroxide needed, first measure the mass of the triglyceride, then multiply it by the saponification value. For example, the saponification value of deer tallow is 0.139, so 100 grams of deer tallow would need 13.9 grams of sodium hydroxide to react with it. In theory, the two compounds will react in this proportion to form soap.
The purpose of our experiment
The experiment’s purpose was to see whether it was possible to make soap in a more sustainable way, by obtaining the triglyceride from excess school cafeteria oil that had been used to cook fish and chips. The experiment also aimed to validate whether soap made in this way would have similar cleansing properties to soap made industrially.
Additionally, the experiment aimed to show that excess cooking oil could be reused in a productive manner, and that fats need not be extracted from animal fat (which involves killing an animal) or from acres of crops, which is wasteful. However, as we failed to make soap in the end, we could not test its cleansing properties.
Our experimental procedure
1. Calculate the amount of sodium hydroxide and triglyceride needed by using the saponification value: 0.135 grams of sodium hydroxide is needed to react with 1 gram of corn oil.
2. Multiply by 100 to obtain values for 100 grams of corn oil, giving a ratio of 100 : 13.5.
3. Convert mass values to volume using the relationship between mass, volume and density, ρ = m/V.
4. Our sodium hydroxide solution had a concentration of 1 mol/dm3, and we assumed its density to be 2.13 g/cm3. Using the relationship, 13.5 grams of sodium hydroxide equates to 6.34 cm3 of sodium hydroxide.
5. There is no specific density for corn oil, so look for another triglyceride with the same saponification value: olive oil. The main fatty acid found in olive oil is oleic acid, so assume the density of corn oil to be the same as that of oleic acid: 0.895 g/cm3.
6. Convert 100 grams of corn oil to volume using the density relationship, which equals around 112 cm3 of corn oil.
7. With the calculated volume values, react the liquids accordingly.
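The calculation above can be sketched in a few lines of code. The numbers mirror the article's values (including the density of 2.13 g/cm3 that the experiment assumed for the sodium hydroxide); the helper names are my own:

```python
# A sketch of the saponification calculation described above.
# The numbers mirror the article's values; helper names are hypothetical.

SAP_VALUE_CORN_OIL = 0.135      # g NaOH per g corn oil (from a saponification table)
DENSITY_NAOH = 2.13             # g/cm^3, the density the experiment assumed
DENSITY_OLEIC_ACID = 0.895      # g/cm^3, used as a stand-in for corn oil

def naoh_mass_needed(oil_mass_g, sap_value=SAP_VALUE_CORN_OIL):
    """Mass of NaOH (g) needed to saponify the given mass of oil."""
    return sap_value * oil_mass_g

def volume_cm3(mass_g, density_g_cm3):
    """V = m / rho."""
    return mass_g / density_g_cm3

oil_mass = 100.0                                   # g of corn oil
naoh_mass = naoh_mass_needed(oil_mass)             # 13.5 g
print(round(volume_cm3(naoh_mass, DENSITY_NAOH), 2))   # ~6.34 cm^3
print(round(volume_cm3(oil_mass, DENSITY_OLEIC_ACID)))  # ~112 cm^3
```

Note that the code reproduces the experiment's arithmetic, including its assumptions; as the article goes on to explain, those assumptions are exactly where the experiment went wrong.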
The result
The experiment failed, as no soap was produced. The image below illustrates this.
What went wrong?
The density of corn oil was assumed, though not proven, to be the same as that of oleic acid, and it turns out that the density of corn oil fluctuates with temperature. Additionally, the ratios were between masses, not volumes; the masses of the solutions should have been measured instead.
Analysis of mistakes made, to illustrate the importance of accuracy and precision
The two methods, the volume method and the mass method, will be compared using the same source of fat: corn oil. 0.135 grams of sodium hydroxide is needed to saponify 1 gram of corn oil. The values obtained from our original method (the volume method) were:
1. Volume of corn oil: 112 cm3
2. Volume of sodium hydroxide: 6.34 cm3
The procedure for the mass method is as follows:
1. Measure out a mass of corn oil, e.g. 100 grams.
2. Multiply the values in the 1 : 0.135 ratio by 100 to get 13.5 grams of sodium hydroxide.
3. 13.5 grams of sodium hydroxide would thus be needed to react with 100 grams of corn oil. This would equal around 6.34 cm3 of sodium hydroxide using the density relationship.
It can be seen that both methods give similar volume readings. However, it would be more precise to use the mass method: both liquids were heated, and the density of corn oil fluctuates with temperature. Because of this, the volume calculation for corn oil was imprecise, as fluctuating densities give a bigger range of volume values. The mass method is not affected by fluctuating densities, because mass is constant. Furthermore, the measurement of mass is usually more precise and accurate than that of volume, considering that the volume was estimated from an equation and rounded to 3 significant figures. Accuracy, however, is affected by impurities in the corn oil whether the volume or mass method is used. Because the corn oil had been used to cook fish and chips, substances (particularly from the fish) had dissolved in the oil. This affected the properties of the corn oil, so it did not behave in the known manner.
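The point about fluctuating density can be made numerically. The sketch below assumes, purely for illustration, that corn oil's density drifts by plus or minus 2% with temperature; the measured mass of 100 g is unaffected, but the computed volume shifts by a couple of cubic centimetres either way:

```python
# Illustration with assumed numbers: how a fluctuating density feeds
# straight into the volume method. The +/-2% drift is hypothetical.

nominal_density = 0.895                      # g/cm^3 (assumed, as in the article)
oil_mass = 100.0                             # g, fixed by the mass method

for drift in (-0.02, 0.0, +0.02):            # assumed density fluctuation
    rho = nominal_density * (1 + drift)
    # V = m / rho: the same 100 g maps to a different volume at each density.
    print(f"rho = {rho:.3f} g/cm^3 -> V = {oil_mass / rho:.1f} cm^3")
```

A 2% density error produces roughly a 2% volume error, while a mass balance reading is immune to temperature; this is the quantitative core of why the mass method is the more precise choice.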
Conclusion
What have we learnt? We have seen the dramatic impact of failing to control a variable, such as a density that fluctuates with temperature. We have also seen the impact of making small changes to conventional procedure in order to save a few steps in an experiment. So, do not follow our procedure if you make soap in the future. Precision and accuracy are therefore very important in conducting a successful experiment. That is why science teachers emphasise them so much. It’s not just for good grades, folks.
Reference: Year 10 Student (Churchill)
1. “Our Objective.” Decomposition Reaction (Resources): Class 10: Chemistry: Amrita Online Lab, amrita.olabs.edu.in/?sub=73&brch=3&sim=119&cnt=1.
2. Saponification Chart, www.fromnaturewithlove.com/resources/sapon.asp.
3. “Saponification Value.” Wikipedia, Wikimedia Foundation, 9 Apr. 2018, en.wikipedia.org/wiki/Saponification_value.
4. Upstate/USC_Upstate%3A_CHEM_U109%2C_Chemistry_of_Living_Things_(Mueller)/17%3A_Lipids/17.2%3A_Fats_and_Oils.
5. Franklychemistry. “Esters 7. Finding the Saponification Value of a Fat.” YouTube, 22 Dec. 2015, www.youtube.com/watch?v=qhlKzDPscWY.
6. “Saponification Table Plus The Characteristics of Oils in Soap.” Saponification Table and Characteristics of Oils in Soap, www.soap-making-resource.com/saponification-table.html.
Investigation on the Effects of Resonance Using Pendulums Greg Chu (Y13)
Experiment description and Hypothesis:
To investigate the effects of resonance using pendulums, and to test the hypothesis that the amplitude of vibration is greatest when the driven pendulum is forced to oscillate at its natural frequency.
Experimental Set Up and Apparatus:
Apparatus Used: Tall Stands, G-clamp, Metre Rule, Pendulum Bob, Strings, Slow Motion Camera, Position Tracker
Procedure The Length from centre of Driven Pendulum to pivot point is set as 547.02cm and h is 0.5, where h is the height from the centre of the pendulum to the line.
*Anomalies are highlighted in Red
Figure 1. Amplitude against the length of Pendulum
Initial Evaluation
From Figure 1, there is a spike between the values 520 and 572, meaning that the effects of the vibration are significantly increased in that range. This is because that range of lengths is near the length corresponding to the natural frequency of the driven pendulum. It can also be seen that the smaller the distance between the driving pendulum and its pivot, the higher the vibration distance. In the graph, the point 547.80 has the highest vibration distance, which means that the driven pendulum vibrates better when it is driven closer to its natural frequency. The figure also shows that the increase in amplitude is not linear, but rather exponential. This shows that if the distance between the pivot and the driving pendulum is far from the natural-frequency length, the change is minimal relative to when it is close. It is also observed that the graph is fairly symmetrical, and that the amplitude depends on how close the length of the driven pendulum is to that of the driving pendulum. To observe this pattern, a plot of amplitude against distance from the length of the pendulum is plotted.
Figure 2. Amplitude against distance from length of Pendulum

Figure 2 shows the correlation between the difference in distance of the driving and driven pendulums from their pivot and the vibration distance. We can clearly see that as the driven pendulum approaches its natural frequency, the vibration distance increases drastically. However, when the distance between the driving and driven pendulum lengths is already large, increasing it further produces only a relatively minor decrease. There are three anomalies visible in this graph. These anomalies were not rejected in the first round of data collection, as all three recordings for each of those data points were inaccurate. This could be due to the problem of tying the strings, which was deemed a difficult task.
Mid Project Review and Improvements
Problem #1: Dipping of the String
To calculate the natural frequency, one of the components required is the length of the pendulum. However, in this set-up there was a large dip in the string at the driving pendulum point; this was the most significant problem encountered, as it led to an imperfect pendulum. It is due to the fact that the driving pendulum is too heavy to be supported. The problem could not be resolved by holding up that particular point to prevent dipping, as an extra force would enter the system, increasing the recorded amplitude of the pendulum. Thus, the recorded amplitude of the driven pendulum may be higher than it should be.
Resolution (#1): From the diagram above, the distance L is to be taken from the knot position to the driving pendulum. However, this is inaccurate due to the dipping: the ‘pivotal’ point (the knot point) sways, which means the knot point is not truly the pivotal point and thus should not define the distance L. Through observation, when the pendulum sways to both sides, the line of L can be extended until the two positions cross over. This crossing point is observed to always lie where the suspension string is tied to the stand. Therefore, the value of L is taken not from the centre of the driving pendulum to the knot position, but from the centre of the driving pendulum to this estimated pivotal point.

Problem #2: Tension of the String
The driving pendulum is relatively heavy, which could easily stretch the string. According to Hooke’s law, if the string is stretched beyond a certain point, it may leave the elastic region and become plastically deformed. This would mean that a control variable would no longer be controlled, making the experiment invalid.

Resolution (#2): Since the string could easily be overstretched, the best way to resolve this was to support the driving pendulum when not in use, which was done with a clamp. As a result, the string only has a force exerted on it while data is being recorded, minimising the time in which the string experiences force and preventing it from over-stretching.
Investigation on the Effects of Resonance Using Pendulums Problem #3: Environment where Experiment was held Two of the anomalies observed were due to the driven pendulum not swinging parallel linearly, but rather diagonally, meaning the distance recorded of the travel would be decreased. This also causes increased friction at the pivotal point, meaning more energy in the system is dissipated to heat. Additionally, if there is a current in the room, the pendulum may have experienced more air resistance, also causing the system to lose energy as waste energy. Resolution (#3): Since the surroundings do have negative effects on the experiment, a solution would be to turn off all air conditioning and close windows to minimise the effect of air current. Also, I added two plastic board in between the experiment to prevent air currents passing through the driven and the driving pendulum. Problem #4: Distance of h One of the anomalous results was due to the driving pendulum being at the incorrect height. If the starting height of the driving pendulum is not constant, different amounts of energy would be put into the system, and thus more or less energy in comparison would be induced into the driven pendulum, causing the distance of the vibration to not be constant. This affects the accuracy and the reliability of the results. Resolution (#4): Since the value of h would make a massive difference in finding the value L. The way to keep h constant is to use set squares to determine h. Although this may not be the most accurate way of doing so, this way is the easiest to execute and achieves satisfactory results. Problem #5: Movement of the Pendulum The fourth problem encountered is the movement of the pendulum. In many recordings, it is seen that the vibration distance of the driven pendulum in each vibration from the same recording is not constant. 
This is due to the fact that some of the energy transferred into the driven pendulum has been transferred back to the driving pendulum, in theory making the driven pendulum the driving pendulum. This would mean that the recording would be inaccurate. Resolution (#5): These inconsistent vibrations only happen after the first 2 swings. However, the first 2 swings would already be indicative of what the vibration distance would be. Therefore, to resolve this problem, the data taken would be the largest distance of vibration before these inconsistent vibrations, which means that the minimal amount of energy is given back to the driving pendulum. Problem #6: Recording of the Distance of Vibration In this experiment, a slow-motion camera is used. However, the problem with using a slow motion camera would be the presence of parallax error and relatively poor accuracy in terms of increments of distance. Resolution (#6): Instead of using a slow-motion camera, an alternative position tracker could be used. The tracker functions by emitting ultrasound waves in a frequency of 50 Hz. By using this, a more accurate position of the driven pendulum could be recorded. However, a difficulty of using this 45
would be that a slight change in the direction of the pendulum or interference from the surroundings could easily distort the results. Therefore, two plastic boards were used to prevent unwanted interference, although a little interference is inevitable.
Data Collection using the improved methods: This time, data was collected using the position tracker instead of a slow-motion camera.
The length from the centre of the driven pendulum to its pivot point is set as 547.02cm and h is 0.5. (The distance from the driven pendulum to its pivot point has not changed, as the string has not stretched; the mass of the driven pendulum is too small to cause any stretching.)
Table 2 (*anomalies are highlighted in red)
Figure 3 shows the plot of amplitude against the length of the pendulum for the improved method
Second Evaluation: From figure 3, there is a clear spike in amplitude between 530cm and 560cm. The peak of the graph is at an x-value of 547.02cm, the length corresponding to the natural frequency of the driven pendulum. The vibration is significantly higher in this range because the driving length is closer to the driven pendulum's resonant length. This graph shows similar patterns to the first set of data, but the values are generally greater by 2cm. A graph comparing the two sets of data is thus plotted.
Figure 4 shows the merging of both plots
From figure 4, the difference between the two experimental set-ups can be clearly seen. The improved set-up gives a smoother and more detailed graph. With the second set-up, the overall amplitude is increased, as much less energy is lost through air resistance and inaccurate movement of the driven pendulum. The improved set-up also allows more data points to be collected easily, providing more points around the maximum amplitude so that a better best-fit line can be plotted. The shape of the improved graph is also significantly steeper, likely because less energy is lost from the system in ways such as the loosening of the string and air resistance. Therefore, the improved data is more reliable in supporting the hypothesis that the amplitude peaks as the driving length approaches the driven pendulum's natural frequency. A further graph shows the correlation between amplitude and the difference in length between the driving and the driven pendulum.
Figure 5 shows the plot of amplitude against the difference in pendulum length for the second evaluation.
Figure 5 presents a dramatic increase in the vibration when the difference in length between the driving and driven pendulum is less than 20cm. This clearly reflects that the vibration distance increases exponentially: as the driving length approaches the driven pendulum's resonant length, the vibration distance increases drastically. This is consistent with the previous set of results.
Calculating the natural frequency:
To calculate the frequency, this formula is used: f = 1/T
To find T (the period), this formula is used: T = 2π√(ℓ/g)
g is the gravitational field strength, taken as 9.81 N/kg.
Now that the experiment has been improved and accurate data collected, the frequency can be worked out easily from the length.
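The two formulas above can be checked numerically. A minimal sketch in Python (the 5.4702 m length is the experiment's 547.02 cm converted to metres):

```python
import math

def natural_frequency(length_m, g=9.81):
    """Natural frequency of a simple pendulum: f = 1/T, where T = 2*pi*sqrt(l/g)."""
    period = 2 * math.pi * math.sqrt(length_m / g)
    return 1.0 / period

# Length used in the experiment: 547.02 cm = 5.4702 m
f = natural_frequency(5.4702)  # ≈ 0.213 Hz
```

This gives roughly 0.213 Hz for the driven pendulum's length, which is the peak frequency discussed in the figures that follow.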
Figure 6 shows a plot of amplitude against frequency
From figure 6, we can see that the amplitude peaks at a frequency of 0.2131 Hz, the natural frequency of the system. This pattern is consistent with the plot of amplitude against length, showing that the previous conclusion, that the amplitude peaks at the natural frequency, still applies.
Figure 7 shows a plot of amplitude against the difference in frequency
Figure 7 shows a general best-fit line with a similar exponential shape to before. It shows that the amplitude is greatly increased closer to the natural frequency, especially when the difference in frequency is less than 0.004 Hz.
Conclusion
After the first experiment, the graph was interpreted and a general exponential shape was found. It showed that as the driving length approaches the natural frequency of the driven pendulum, the amplitude increases significantly, whereas away from the resonant length the change is rather insignificant. Six problems were encountered in the first experiment: dipping of the string, tension in the string, the environment of the surrounding area, the distance h, movement of the pendulum, and recording of the vibration distance. These problems were addressed in the second set-up, allowing a more accurate and reliable set of data to be recorded. In the second experiment, 24 data points were selected: 12 the same as in the first experiment and 12 new points at smaller increments, to give a clearer graph. From these 24 data points, after removing the anomalies, the exponential increase was shown to be significantly more rapid, and the overall system had a higher amplitude. The graphs from the first and the second set-ups have the same shape, showing that the hypothesis and the prediction correspond and that the initial conclusion was logical. The graph of amplitude against the difference in frequency also clearly shows that the amplitude peaks as the system approaches its natural frequency, confirming the initial predictions.
Experimental Research
To Investigate the Rotational Motion of a Rattleback Greg Chu (Y13), Courtney Lam (Y13)
Introduction
The rattleback is a top that is semi-ellipsoid in shape and always ends up spinning in the anticlockwise direction, no matter what direction it is initially spun in. When the rattleback is spun in the anticlockwise direction, it continues rotating in the same direction, eventually slowing down and coming to a halt. On the contrary, when the rattleback is spun in the clockwise direction, it continues to spin in the same direction for several rotations, but then begins to oscillate rapidly along its longitudinal axis, exhibiting the rattling behaviour which gives the top its name. As it rattles, it reverses its spinning direction and begins rotating in the anticlockwise direction until it stops. The rattleback is thus an object with a ‘favourable direction’. If the rattleback spins in the favourable direction, it will spin continuously until the frictional force generated by the contact between the base of the rattleback and the spinning surface stops the spin. However, if it spins in the opposite, unfavourable direction, the rattleback will try to revert to the favourable direction. This is exhibited by the rattling of the rattleback as it finds its optimal height, at which it can begin to spin in its favourable direction. In this investigation, the favourable direction of the rattleback used is the anticlockwise direction.
Aim To record and compare the angular velocity of the rattleback when it spins in the favourable direction under two initial conditions: when it is spun in the anticlockwise direction and when it is spun in the clockwise direction. In the latter scenario, the rattleback will always revert back to the anticlockwise direction. For simplicity, the former condition will be referred to as anticlockwise direction, and the latter will be referred to as adjusted anticlockwise direction.
Safety
Overall, the experiment does not pose any imminent dangers to the health of the scientists. However, one piece of apparatus requires careful attention: the rigid stand used to secure the camera is extremely heavy. To prevent it from falling off the table and causing injury, a G-clamp will be used to secure it to the table.
Methodology
Experimental Apparatus
Rattleback, top pan balance, vernier caliper, micrometer, slow-motion camera, Tracker programme
Calculating angular velocity from x and y coordinates
The original trigonometric equation: tan(φ) = y/x
Substituting the changes into the original equation: tan(φ + δφ) = (y + δy)/(x + δx)
Taking the arctangent of both sides: φ + δφ = arctan((y + δy)/(x + δx))
Subtracting the position angle: δφ = arctan((y + δy)/(x + δx)) − φ
Dividing the change in position angle by the change in time: ω = δφ/δt
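The derivation above can be sketched in code. This is a minimal illustration, not the Tracker programme's actual implementation; `atan2` is used in place of arctan (an assumption beyond the report's derivation) so that all four quadrants of the x-y plane are handled correctly:

```python
import math

def angular_velocity(x, y, dx, dy, dt):
    """Estimate angular velocity from a change in a tracked (x, y) position.

    Follows the report's derivation: phi = arctan(y/x), omega = d(phi)/dt.
    atan2 is used so the angle is correct in every quadrant.
    """
    phi_before = math.atan2(y, x)
    phi_after = math.atan2(y + dy, x + dx)
    dphi = phi_after - phi_before
    # Unwrap a jump across the -pi/+pi boundary between consecutive frames
    if dphi > math.pi:
        dphi -= 2 * math.pi
    elif dphi < -math.pi:
        dphi += 2 * math.pi
    return dphi / dt

# e.g. a tip moving from (1, 0) to (0, 1) in 0.25 s is a quarter turn,
# so omega = (pi/2) / 0.25 ≈ 6.283 rad/s
```

The unwrapping step matters because the tracked tip repeatedly crosses the ±π boundary as the rattleback rotates.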
Method 1
Procedure
Measurements of the rattleback were obtained carefully with precise instruments so that they could be used to aid data collection via the Tracker programme. The slow-motion camera was hand-held at a reasonable distance from the spinning plane to reduce parallax error. The spinning of the rattleback was filmed, and the video was then uploaded to the Tracker program, where the x and y-coordinates of one tip of the rattleback were mapped out. The coordinates of only one end of the rattleback were recorded, as it was decided that one tip could sufficiently represent the rotational motion of the entire rattleback. The major advantage of a slow-motion camera is the high frame rate of the footage: the camera used throughout the experiment recorded 240 frames per second. This benefits the resulting graphs. Firstly, the number of gaps between plotted points is significantly reduced. Secondly, the trend lines are more definite, as there is a large amount of data to indicate them, and any single anomaly has a minimal effect on the overall trend line, being insignificant in comparison with the large number of points. Thirdly, the high frame rate greatly increases the reliability of the data. It is important that the same person spins the rattleback for each trial so that the force applied to the rattleback is controlled, making each rotation as identical as possible.
The Tracker Programme
The programme, Tracker, is used to track the tip of the rattleback. It marks the point with an x-y coordinate; in this case, one x-y coordinate increment is 0.001mm (±0.0001mm), resulting in a relatively low percentage uncertainty.
The Tracker programme then uses the x-y coordinates and the frame rate (entered by the user) to calculate different functions, such as the position angle and the angular velocity. Tracker provides two tracking modes: manual tracking and auto-tracking. Manual tracking allows you to mark the points frame by frame, whereas auto-tracking traces the points by colour recognition, recognising the pixels and finding the next point frame by frame. The auto-tracking function has two settings, an evolution rate and an auto-mark level. The evolution rate allows the template to evolve: a higher evolution rate allows rapid changes, but may cause the track to drift away from the indicator. The auto-mark level, set manually by the user, determines how close the match has to be; if the matching score is too low, the user has to select that point manually.
Figure 1 shows a screenshot of the Tracker programme in action
Data Collection
Measurements of the rattleback
Uncertainty Calculations
As the readings show, all of the repeats are identical, so the uncertainty is taken as the minimum scale of the instrument, in this case ±0.01mm.
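The corresponding percentage uncertainty follows directly from the minimum scale. A minimal sketch (the 25.00 mm reading is a hypothetical illustration, not a value from the measurement table):

```python
def percentage_uncertainty(reading_mm, absolute_uncertainty_mm=0.01):
    """Percentage uncertainty of a length reading, taking the instrument's
    minimum scale (here ±0.01 mm) as the absolute uncertainty."""
    return absolute_uncertainty_mm / reading_mm * 100

# e.g. a hypothetical 25.00 mm reading: 0.01 / 25.00 * 100 = 0.04 %
```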
Processed Data
Figure 1a: Graph showing the motion of the rattleback relative to the coordinate axes
The graph maps the spinning motion of the rattleback. The horizontal x-axis and vertical y-axis of the graph reflect the x and y coordinates of a tip of the rattleback respectively. The tip of the rattleback traces out an ellipse as the rattleback completes one full rotation. The anticipated graph should be a circle; however, the graph shown does not form one, and there are numerous anomalies, suggesting that many random errors arise from this method. In this trial, the rattleback completes several rotations, but its base does not remain in a fixed position during these rotations. There is a discrepancy in the x-values on the left side of the graph, which means that the whole rattleback drifted slightly while it was spinning. Therefore, there was translational motion in addition to rotational motion.
Figure 1b: Graph comparing the position angle throughout adjusted anticlockwise rotation using method 1
Figure 1c: Graph showing the change in angular velocity during adjusted anticlockwise rotation
The first graph shows the position angle of the rattleback during the rotation, and the second graph shows the change in its angular velocity. In both graphs there are numerous anomalies that cannot be disregarded when determining the trends; hence, it is difficult to draw trend lines or infer any correlations.
Evaluation of Method 1
Method 1 was unreliable and inaccurate, mainly due to human error. First of all, tracking of the rattleback’s rotation was done manually. The Tracker program was used to track the motion of the rattleback by identifying one tip and pinpointing the changes in its x and y-coordinates; however, the digital ‘autotracker’ function could not be used, as the program failed to clearly identify the tip, so tracking was done manually instead. This created a high degree of inaccuracy and inconsistency. Secondly, the camera was hand-held, so the video footage was unstable, making the movement of the rattleback seem unstable too; this significantly interferes with the plotting of the graph, as the Tracker program may have recorded superfluous x and y-coordinate changes. Finally, the translational motion of the rattleback due to drifting may have affected the resultant position angles and angular velocities. Therefore, we did not take repeats using this method, as we believed the results would be unreliable and inconclusive. The setup needed modifications and improvements. To improve this method, excess friction needed to be reduced first, allowing the rattleback to spin more smoothly. Secondly, both tips and the centre of the rattleback should be marked, so that these points show up in the Tracker program and the ‘autotracker’ function can be used.
Moreover, a reference point on the rattleback, defined relative to the centre, allows the problem of drifting to be ignored. Thirdly, a clamp and a stand should be used to secure and stabilise the camera, ensuring that it is directly parallel to the spinning plane. The stand can also be used to raise the camera, because an increased distance, i.e. height, between the lens and
the rattleback can reduce parallax error. This will be justified further. Lastly, there should be a backlight to eliminate the shadows of the rattleback, ensuring that ‘autotracker’ can work properly without interference.
Method 2 - Improved Methods
Additional Experimental Apparatus
Stand, G-clamps, clamps, plumb line and set squares, visualiser
Setup
In addition to the new apparatus, three coloured points were added to the rattleback with board markers: a red dot on one tip, a blue dot on the other, and a black dot on the centre. The points on the tips allow two repeats of the ‘autotracker’ tracking per trial, reducing the effect of random errors and helping to ensure that the data collected is reliable. The experimental project guideline advises measuring the position angles using a 360˚ paper protractor. However, we decided not to use the paper protractor and relied solely on trigonometric calculations to determine the position angles, through the Tracker programme and its coordinate axes. This is because the protractor paper would vastly increase the frictional force, affecting the rotational motion of the rattleback. Additionally, the minimum scale of the protractor is 1˚, which would result in a large percentage uncertainty, decreasing the accuracy of the data.
Graph Interpretation
Trial one for this method was plotted manually to verify that the auto-tracking function works correctly.
Figure 6ai: Trial 1: Graph showing the angle of rotation of the rattleback when spun in the favourable direction
The angle of rotation is defined as the amount, in radians, by which an object is rotated anticlockwise about a fixed point. In this case, the fixed point is the base of the rattleback, the lowest point on its underside that is in contact with the spinning surface. Because the favourable direction of the rattleback is anticlockwise, the angle of rotation is always positive when the rattleback is spun in this direction. Thus, the graph shows that the angle of rotation is always positive and increases in a non-linear manner.
Figure 6aii: Trial 1: Graph showing the angle of rotation of the rattleback when spun in the unfavourable direction
The graph shows that the angle of rotation of the rattleback decreases for the first second of the trial, when the rattleback is spun in the unfavourable, clockwise direction. The minimum of the graph, which lies between 0.8 and 1.3 seconds, is the period when the rattleback has stopped spinning and is instead oscillating along its longitudinal axis in order to change its rotational direction; the angle of rotation is therefore constant at this point. The angle of rotation then begins to increase in a non-linear manner when the rattleback reverts to rotating in the favourable, anticlockwise direction. This section of the graph, i.e. when the rattleback rotates in the favourable direction, is identical to that of Figure 6ai.
Figure 6aiii: Trial 1: Graph comparing the angular velocity of the rattleback spinning in the anticlockwise direction (green) versus the adjusted anticlockwise direction (red)
The graph compares the angular velocity of the rattleback when it is spinning in the anticlockwise direction versus the adjusted anticlockwise direction; the data points for the former are in green, while the latter are in red. The comparison begins when the angular velocity of the rattleback in each scenario is the same, at 4.02 radians per second. From this point onwards, the angular velocity of the rattleback rotating in the anticlockwise direction decreases at a faster rate than when rotating in the adjusted anticlockwise direction, indicated by the steeper gradient of the green line.
Figure 6bi: Trial 2: Graph showing the angle of rotation of the rattleback against time in seconds when spun in the favourable direction
Figure 6bii: Trial 2: Graph showing the angle of rotation of the rattleback when spun in the unfavourable direction
Figure 6biii: Trial 2: Graph comparing the angular velocity of the rattleback spinning in the anticlockwise direction (green) versus the adjusted anticlockwise direction (red), against time in seconds
In trial 2, the starting angular velocity is 4.19 radians per second. The trend depicted by this graph is similar to that of trial 1, with the rattleback rotating in the anticlockwise direction experiencing a greater rate of decrease in angular velocity than the rattleback rotating anticlockwise after reverting from the clockwise direction. Compared to trial 1, the difference in the rate of decrease is significantly larger: the rattleback spinning constantly in the anticlockwise direction stopped 1.7 seconds after its angular velocity reached 4.19 radians per second, while the rattleback that adjusted its spinning direction took 5.2 seconds to decrease from 4.19 to 0.00 radians per second.
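The stopping times quoted above imply very different average decelerations. As a rough check (assuming a steady decay to rest, which the graphs only approximate), using the trial 2 figures:

```python
def avg_deceleration(omega_start, t_stop):
    """Average angular deceleration in rad/s^2, assuming the spin decays
    from omega_start (rad/s) to rest over t_stop seconds."""
    return omega_start / t_stop

# Trial 2 figures from the report:
anticlockwise = avg_deceleration(4.19, 1.7)  # ≈ 2.46 rad/s^2
adjusted = avg_deceleration(4.19, 5.2)       # ≈ 0.81 rad/s^2
```

On this crude estimate the constantly anticlockwise spin decelerates roughly three times faster than the adjusted spin, consistent with the steeper green line in the figure.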
Figure 6ci: Trial 3: Graph showing the angle of rotation of the rattleback when spun in the favourable direction
Figure 6cii: Trial 3: Graph showing the angle of rotation of the rattleback when spun in the unfavourable direction
Figure 6ciii: Trial 3: Graph comparing the angular velocity of the rattleback spinning in the anticlockwise direction (green) versus the adjusted anticlockwise direction (red)
In this trial, the starting angular velocity is 4.10 radians per second. The trend of the data in this trial supports those displayed in trials 1 and 2, as the rattleback spinning in the adjusted anticlockwise direction took a longer time to come to rest than the rattleback spinning consistently anticlockwise.
Possible explanations for the results
The data collected from the three trials all show that, when starting at the same angular velocity, the rattleback spun in the adjusted anticlockwise direction takes longer to come to a halt than the rattleback rotating consistently anticlockwise. Three explanations are offered. First, the rattleback oscillates in a seesaw motion, so one tip of the rattleback rises; when it reaches a certain height, gravitational potential energy causes it to fall, pushing the other tip upwards at the same time. This motion repeats rapidly, generating frictional force, until the rattleback stops oscillating and there is enough energy to change its rotational direction. This allows the rattleback to continue spinning at a rate that is natural to it, so it can spin longer, as its angular velocity decreases at a slower rate. Second, during oscillations the contact between the rattleback and the spinning surface is reduced per unit time; as a result, there is less friction to slow the rattleback down, hence the angular velocity decreases at a slower rate. Third, due to the unbalanced distribution of mass in the rattleback, the spins have a wobbling motion. We believe that this motion actually ‘fuels’ the rattleback, decreasing the rate of deceleration.
Evaluation of Method 2
Method 2 yielded significantly more reliable data than Method 1; however, there were still some limitations. First of all, at the beginning of the experiment, attempts were made to limit the friction between the rattleback and the spinning surface, since, theoretically, increased friction would interfere with the smooth spinning of the rattleback. However, we came to realise that friction is a key element of the experiment: when the rattleback spins in the adjusted anticlockwise direction, a frictional force is required for the rattleback to go into its rattling motion. To improve this setup, we decided to remove the petri dish and to spin the rattleback on the flat centre of the visualiser, allowing the rattling motion to proceed unimpeded.
Another problem encountered was locating the ‘centre’ of the rattleback. For this method, the intersection of the horizontal and longitudinal midlines of the rattleback’s top surface marked its centre point. This point was used as the origin in the Tracker program, and this origin was used to identify and cancel out any translational motion. However, as the rattleback is a semi-ellipsoid, the mass distribution is irregular, so the point on which the rattleback spins changes constantly throughout the spin; the centre of mass of the rattleback is therefore not necessarily at the marked centre point. However, the effects of this problem are relatively insignificant, and the benefits yielded outweigh them. It is also important to note some minor limitations caused by the Tracker program. Several anomalies were observed across all the graphs it generated, caused by mismatching of points in the programme. Additionally, some of the matched points were slightly shifted, so it has to be taken into consideration that these data points may not be as accurate as they should be.
Method 3 - Finalised Method
Procedure
The first change in the finalised method is that the petri dish is removed. Secondly, we no longer relied on the Tracker programme to do the calculations, but instead calculated the position angle and angular velocity separately, to make sure that the calculated readings are reliable, repeatable and reproducible. Thirdly, when tracking the rotation of the rattleback in the trials of method 2, we set the auto-mark level to 4; for the trials of method 3 we adjusted it to 5, to ensure that the matching score between the template and the tracked point is very high. Additionally, we improved our method by pausing the autotracker after every 100 tracked points to check whether the tracked points are still identical to the original template, because the template may evolve and drift away from the indicator. To prevent this, we reduced the evolution rate to 5%. If the template does change, we reanalyse the video to keep the template constant throughout the tracking process. Lastly, we increased the resolution of the video from 720p to 1080p, i.e. the number of lines of pixels in a frame, allowing more pixels to be analysed and compared, and thus much higher precision when tracking the indicator at the tip of the rattleback.
Precautions
To keep the results as independent as possible, we turned off all the air conditioning systems and fans in the room to prevent the effects of drafts.
Graph Interpretation
The figures below are all repeats using method 3.
Figure 7ai: Trial 1: Graph showing the angle of rotation of the rattleback when spun in the favourable direction
Figure 7aii: Trial 1: Graph showing the angle of rotation of the rattleback when spun in the unfavourable direction
Figure 7aiii: Graph comparing the angular velocities of anticlockwise motion (green) and adjusted anticlockwise motion (red) of trial 1, starting with the same angular velocity of 3.50 radians per second
From figure 7aiii, it can be seen that the adjusted anticlockwise motion decelerates significantly more slowly than the anticlockwise motion, similar to the previous method. This shows that the phenomenon is consistent across multiple readings. Additionally, it was observed that both directions decelerate more rapidly after dropping below 1.70 radians per second. This suggests that at lower velocity the ‘fuelling’ effect has stopped, so the resultant force against the motion has increased. Moreover, the data points around the trendline for this adjusted anticlockwise reading are rather dispersed, which may suggest inaccurate tracking of the tip of the rattleback; additional repeats were therefore made to reduce this random error.
Figure 7aiv: Graphs comparing the general angular velocities of anticlockwise motion and adjusted anticlockwise motion of trial 1
From figure 7aiv, it can be seen that the trace for the adjusted anticlockwise rotation is rather steady, whereas, in comparison, the anticlockwise rotation has more ‘wobbling’ elements. This supports theory 1: the adjusted anticlockwise motion causes the rattleback to spin at its natural frequency, whereas the anticlockwise motion is relatively unstable and thus has more surface contact, increasing the frictional force and therefore decelerating at a greater rate.
Evaluation of Method 3
Figure 8a: Graph showing the motion of the rattleback, adjusted anticlockwise and anticlockwise, for trial 1 relative to its centre
Looking at figure 8a, we can see that, by using a reference point, the x-y graph is almost completely circular. This means that the angular velocity is calculated without the interference of translational motion, giving a much more precise measure of the change in angle and thus a much more reliable angular velocity. When analysing the graph, we made constant reference to the film, because we believe that different motions of the rattleback would give different decelerations even at the same angular velocity; this helps to explain the different patterns.
Justification
For both methods 2 and 3, we took three repeats for each direction of spin to increase the reliability of, and confidence in, the results. However, unlike in traditional experiments, we decided not to take the mean of the points and plot one graph, but instead kept all of the spins separate, because every spin is independent of the others and results in a different pattern.
Errors
One consistent systematic error occurred throughout the experiment: a sudden decrease in angular velocity, shooting back up at the same points each time. This error is due to a coding error in the transfer of the video from the phone to the Tracker programme, which affects the frame rate. However, it can be disregarded, as it is only a systematic error and does not affect the general trend of the angular velocity.
Observation of the experiment
After many repeats, we observed that no matter how much force is applied to spin the rattleback in the clockwise direction, its tip always reaches the same height.
Figure 8b shows the maximum height of the tip of the rattleback for 3 repeats
From figure 8b, we can see that the tip reaches a highest point of around 1.30cm, which is around 2 millimetres higher than the depth of the rattleback. Although this is an observation with data collected, no interpretation should be made, due to insufficient sources.
Conclusion
In this experimental project, we explored deeper differences in the rotational patterns of a rattleback, beyond the crude observation that the adjusted anticlockwise rotation involves rattling whilst the anticlockwise rotation does not. The first method, although unreliable, helped us build a better, improved method by highlighting the errors in the apparatus we used and in our approach. As a result, we were able to collect reliable data, and hence interpret it quickly, using our second method. A major improvement was the increase in the height at which the rattleback’s motion was filmed, achieved using the rigid stand. We also decided to collect data through the Tracker programme and its coordinate axes, as obtaining position angles through trigonometric methods is more precise than using the 360˚ protractor paper. In the second method, we discovered that the anticlockwise direction and the adjusted anticlockwise direction have different rates of decrease in angular velocity. This differs from our prediction. It also differs from our other expectation, since the angular velocity of the anticlockwise direction decreased at a faster rate than the adjusted anticlockwise direction, despite the absence of the supposed additional friction from the rattling motion. Thus, we proposed several explanations and, amongst the three, tested explanation 1, which suggests that the rattling motion leads to a smoother rotation, hence less surface contact per unit time and less friction to decelerate the rattleback. When evaluating our second method, we realised that the use of a petri dish did not actually improve our method; in fact, it hindered the rotation of the rattleback, because the rattling motion of the rattleback is the transfer of energy from its initial spin through friction, so if too much friction is eliminated the rattling motion is not optimal.
Therefore, it was not included in our finalised method. When using method 3, the adjusted anticlockwise motion resulted in a greater rate of decrease in angular velocity. However, it was the wobbling element, rather than the rattling motion, that was shown to play a significant role in the angular velocity. Wobbling was observed this time, and there was more wobbling in the anticlockwise direction than in the adjusted anticlockwise direction. This suggested that the wobbling motion drained the energy of the anticlockwise rotation, so it stopped rotating earlier than the adjusted anticlockwise motion. In conclusion, our findings did not support our predictions. However, we have established a deeper understanding of the mechanisms of the rattleback by exploring the effects of the wobbling and rattling motions on its overall motion. Most importantly, after designing and carrying out three different methods, we achieved the ultimate goal of our experiment: to find the most effective, accurate and reliable way of tracking the rotational motion of the rattleback.
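To make the trigonometric angle-tracking idea concrete, here is a minimal Python sketch of how angular position and angular velocity can be recovered from exported (x, y) coordinates. The data below are synthetic stand-ins for Tracker output, not our measured values.

```python
import numpy as np

# Hypothetical tracker output: time stamps and (x, y) of a marker on the
# rattleback, relative to coordinate axes centred on the spin axis.
t = np.linspace(0.0, 2.0, 41)                # seconds
theta_true = 12.0 * t - 2.0 * t**2           # a decelerating spin (radians)
x, y = np.cos(theta_true), np.sin(theta_true)

# Angular position from coordinates: atan2 handles all four quadrants,
# and unwrap removes the +/-pi jumps so the angle grows continuously.
theta = np.unwrap(np.arctan2(y, x))

# Angular velocity by finite differences of the unwrapped angle.
omega = np.gradient(theta, t)
```

Because the angle is unwrapped before differencing, the estimate of the angular velocity stays smooth across full rotations; the same steps would apply directly to real coordinate data exported from Tracker.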
A Review of Use of Fluoride on Dental Caries Prevention Justin Kwok (Y13), Supervised by Lotje Smith
Abstract Dental caries is one of the most prevalent chronic diseases in the world and the most common cause of tooth pain and tooth loss, despite being easily preventable. Dental caries can develop into more severe dental conditions which may cause irreversible physiological and psychological harm to the patient. Fluoride is one of the most widely used chemical agents for treating and preventing the disease. Since the first observations and studies of the anti-caries action of fluoride by Dr Frederick McKay and Dr Greene V. Black (1909-1915), there has been significant scientific and technological development in the understanding and use of fluoride to tackle dental caries. Fluoride is now commonly found in many dental products, such as toothpastes, mouthwashes and toothbrushes, all widely used to improve dental health and reduce the development of dental caries. Furthermore, many developed countries and cities, such as the United States and Hong Kong, have adopted the use of fluoridated tap water to control the disease. However, fluoride overdose is a risk of fluoride treatment, potentially resulting in serious systemic consequences which can be fatal. This paper aims to consolidate and review current knowledge of the use of fluoride to prevent dental caries, including the mechanisms involved and the risks which may arise.
Dental caries Dental caries is a chronic condition involving the localised chemical dissolution of all types of dental tissue, caused by changes to the dynamic equilibrium between the remineralisation and demineralisation of dental tissue [1,2]. Changes to the equilibrium are influenced by many factors, the main causes being pH changes in oral fluids, largely determined by the metabolic activity of cariogenic flora, and fluctuations in the mineral saturation levels of oral fluids. A dental lesion is an area of dental tissue which has been damaged, and a caries lesion is one caused by dental caries. Caries lesions commonly develop in relatively mechanically static areas of the dentition where biofilms can accumulate without being removed for long periods of time. [1, 29] Areas particularly susceptible to carious lesions include the pits and fissures of a tooth, the posterior aspect of the tooth and the lateral borders between teeth.
Lesion classification Dental caries lesions can be classified by their anatomical position, recurrence, cavitation and activity. These classifications are very important in the identification and management of carious lesions. Lesions classified by their position on the tooth include: pit and fissure, smooth, enamel, root and approximal. Enamel and root caries lesions are types of smooth lesions. Enamel caries can occur anywhere on the tooth surface, whereas root caries lesions are found in areas near or on the gingival border of the tooth. Pit and fissure lesions are found on the superior surfaces of the inferior dentition and the inferior surfaces of the superior dentition. These lesions are most commonly found in the grooves and fossae of teeth ranging from the first bicuspid to the third molar. Approximal lesions are found at the lateral interface between teeth; these lesions are very difficult to visualise due to their anatomical position. Visualisation is improved by the mobilisation of the adjacent tooth and the use of a mouth mirror, depending on the location of the lesion. [1] Primary caries and recurrent or secondary caries are lesions classified by the redevelopment of the lesion. Primary caries is a caries lesion which has developed on an intact tooth surface, while
recurrent lesions are ones that develop on dental tissue adjacent to previously filled tooth surfaces. Rampant caries is characterised by multiple active carious lesions within the same oral cavity, regardless of whether the lesions are primary or secondary. [1, 29] Lesions can also be classified as cavitated or non-cavitated, according to whether or not the lesion has caused cavitation of the tooth surface. This classification is normally used to describe the extent of lesion development: the greater the extent of cavitation, the more developed the lesion. [1] Finally, caries lesions can be classified by their activity: an active lesion is one which will develop further if not clinically managed, while an inactive or arrested lesion is one which had previously formed and developed but whose progression has arrested. The latter classification is the most difficult to determine, since many factors can affect the activity of a lesion. Furthermore, a lesion may continually switch between an active and an inactive state. [1]
Figures 1 (top left), 2 (top right), 3 (bottom left) and 4 (bottom right) show the development of rampant cavitated root caries lesions on the first and second premolars of the same patient [1]
Figures 5 (left) and 6 (right) show examples of non-cavitated pit and fissure caries lesions; these lesions are only found in teeth between the first bicuspid and the third molar [1]
Figures 7 (left), 8 (middle) and 9 (right) illustrate approximal non-cavitated caries lesions; these lesions are particularly difficult to diagnose visually, as seen in figure 9, and are better visualised by the mobilisation of the adjacent tooth, as seen in figures 7 and 8 [1]
Figure 10 (bottom) shows a cavitated pit and fissure lesion [1]
Lesion formation In order for dental caries to occur, dental plaque must be present on the tooth surface, even though dental plaque itself is not sufficient to cause dental caries. Dietary factors, such as increased ingestion of sugars, can increase the occurrence and severity of dental caries [1,2,53,54]. Dental caries occurs due to an imbalance between the rates of dental demineralisation and remineralisation. Tooth surface minerals are constantly in a dynamic chemical equilibrium with oral fluids. [1] The action of dental demineralisation and remineralisation is cyclic in nature, and net demineralisation of dental tissue will cause dental caries. If net demineralisation reaches a certain level, there will be an increase in the porosity of the surface enamel structure, leading to the formation of a white spot lesion. [1] Dental demineralisation is mainly caused by acids (mainly lactic acid) produced by certain acidogenic flora present in dental plaque, which metabolise sugars and fermentable carbohydrates into acids. [3,33]
Figure 11 shows a white spot lesion on the upper central incisors [1]
Dental Biofilm Dental biofilm is a layer of naturally occurring oral flora adhered to the tooth surface by an extracellular matrix. There is a huge range and diversity of bacteria in dental biofilm; however, some studies have shown that a few particular species and strains of bacteria can cause dental caries. Currently known significant cariogenic microorganisms include the mutans streptococci (Streptococcus mutans and Streptococcus sobrinus) and Lactobacilli [2,3,4,5,15,31,32]. An increasing number of studies show that there may be other bacteria which also cause dental caries [37]. All known cariogenic bacteria are acidogenic, meaning they metabolise and ferment sugars to produce acid. They are also aciduric and can metabolise and survive in environments of low pH. Dental biofilm forms very rapidly after prophylaxis, or after birth, when the tooth surfaces are free of bacteria. The initial microbiology is hugely diverse and fairly harmless, since the relative proportions of bacteria are fairly equal. However, changes in the diet, in particular sugar consumption, can greatly affect the microbiology of dental biofilm. More frequent and increased sugar intake, particularly of sucrose [49,55], increases the rate at which acidogenic bacteria produce acid [36,38]. Bacterial acidogenesis creates an increasingly acidic oral environment which exerts a selection pressure on the biofilm flora, killing any bacteria which cannot survive in an acidic environment. Since cariogenic bacteria are also aciduric, they continue to thrive and multiply, increasing the proportion of cariogenic bacteria in the biofilm. Due to the increasing number of acidogenic bacteria, the rate of sugar metabolism and fermentation increases, leading to even more acid being produced, killing off even more non-aciduric bacteria, and the cycle repeats.
This shift in the proportion of cariogenic bacteria increases the chance that the biofilm will become cariogenic: the increased amount of acid produced lowers the pH of oral fluids, shifting the remineralisation-demineralisation equilibrium and leading to net demineralisation [33,34,38].
Enamel Dental enamel is the most highly mineralised substance in the body, with approximately 95% inorganic content [57], 1% organic matter and 4% water by weight [42,43]. It has little to no collagen content, and its organic matrix is made up of around 90% amelogenin [6]. A large proportion of the inorganic content of enamel is hydroxyapatite, Ca10(PO4)6(OH)2 [41,42], assuming the enamel has been isolated from fluoride. However, enamel is rarely composed of just hydroxyapatite; mineral ions from saliva are incorporated into the apatite structure, forming a wide variety of apatite crystal structures with large variations in chemical and physical properties. Commonly incorporated ions include F-, PO43- and CO32- [42,43]. Hydroxyapatite crystals are formed according to the following chemical equations:
Ionic equation: 10Ca2+ + 6PO43- + 2OH- --> Ca10(PO4)6(OH)2
Overall equation: 10Ca2+ + 6HPO42- + 2H2O --> Ca10(PO4)6(OH)2 + 8H+ [42]
As seen in the overall equation, a large number of protons are generated, which may inhibit the incorporation of other ions into the apatite structure and reverse the formation of hydroxyapatite if they remain in solution [42]. Thus ingested or cariogenically produced acids may lead to the breakdown of hydroxyapatite. One molecule of hydroxyapatite forms a hexagonal unit; these units stack vertically to form rod-shaped crystal prisms [6,39,42]. Hydroxyapatite crystals extend from the dentine to the enamel surface [41]. These prisms are arranged parallel to each other and are separated by tiny inter-crystal spaces known as pores, creating a network of diffusion pathways. These diffusion pathways allow the interchange of dissolved minerals and ions present in dental fluids and also provide the pathway for fluoride ions into the surface enamel to form fluorapatite (Ca10(PO4)6F2) [40,41,57].
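As a quick sanity check on the hydroxyapatite equations above, the short Python sketch below (an illustrative check of my own, not part of the cited sources) verifies that atoms and charge balance on both sides of the overall equation:

```python
from collections import Counter

def balance(side):
    """Sum atom counts and total charge over (coefficient, atoms, charge) terms."""
    atoms, charge = Counter(), 0
    for coeff, formula, q in side:
        for element, n in formula.items():
            atoms[element] += coeff * n
        charge += coeff * q
    return atoms, charge

# Overall equation: 10Ca2+ + 6HPO42- + 2H2O --> Ca10(PO4)6(OH)2 + 8H+
lhs = [(10, {"Ca": 1}, +2),
       (6,  {"H": 1, "P": 1, "O": 4}, -2),
       (2,  {"H": 2, "O": 1}, 0)]
rhs = [(1,  {"Ca": 10, "P": 6, "O": 26, "H": 2}, 0),
       (8,  {"H": 1}, +1)]

assert balance(lhs) == balance(rhs)  # atoms and charge both balance
```

Both sides come out to Ca10 P6 O26 H10 with a net charge of +8, confirming the 8 protons released per formula unit of hydroxyapatite formed.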
Demineralisation Demineralisation is caused by the acid dissolution of dental tissue [6]. There are two main routes of acid exposure to teeth: the ingestion of acidic substances and the microbial production of acids. These acids can be further classified into extrinsic and intrinsic acids [48]. Extrinsic acid sources include soft drinks or fruit-based drinks and acidic and/or sugary foods. Intrinsic acids may arise from diseases or medical conditions that cause frequent chronic discharge of gastric juice into the oral cavity, such as gastro-oesophageal reflux and bulimia nervosa [2,6,44,45]. Stomach acid, present in high concentrations in gastric juice, has a pH of around 1.2, much lower than the critical pH of both hydroxyapatite and fluorapatite [6]. This leads to a much higher rate of demineralisation of the tooth surface. Demineralised areas are more susceptible to bacterial colonisation and biofilm formation, with an increased chance of caries formation [6]. Furthermore, bacteria metabolise and ferment sugars to produce acid, which increases the chance of a carious breach of the enamel layer into the dentinal layer [6]. Since hydroxyapatite is soluble in acid, it softens and becomes more prone to mechanical wear in acidic environments. There are two mechanisms which lead to the acid softening of hydroxyapatite: acid attack and/or chelation with anions. [6,43] The action of acid attack depends on the pH of the solution in contact with the enamel surface. If the pH of the oral fluids is lower than 1, surface etching of enamel will occur [6]. This rarely occurs in dental caries and is more common in medical conditions such as gastro-oesophageal reflux, where the enamel surface is exposed to highly acidic hydrochloric acid in gastric juices. At a pH of 2-4, enamel softening occurs on the nanoscale. The most common mode of attack is the dissolution of enamel, which occurs when the pH of the surrounding oral fluids decreases below the critical pH of 5.2-5.5 [2,7,29,38,48].
The acid attack and chelation of enamel is commonly caused by the partial dissociation of weak acids, such as citric acid found in fruits and lactic acid produced by cariogenic bacteria. Weak acids cause chelation via two mechanisms, resulting in the removal of both phosphate and calcium ions from the apatite matrix [6,43] and leading to a significant amount of demineralisation. Weak acids
partially dissociate in contact with saliva into protons and anions. The protons (H+) react readily with phosphate groups in hydroxyapatite to form protonated phosphate species; these further react with calcium ions in saliva to form a calcium acid chelation complex [6]. This complex debonds mineral ions in adjacent areas of the hydroxyapatite lattice, causing the complete release of ions from the apatite structure [6,43]. In weak carboxylic acids, the RCOO- anion can cause calcium chelation: two anions bind to three calcium ions to form a soluble chelate complex. Since the chelate is soluble, it dissolves into saliva, leading to net demineralisation of the tooth. [6,43]
Remineralisation Remineralisation is the process of mineral precipitation and the reformation of apatite crystals on the enamel surface. Although the exact mechanism of dental remineralisation is unknown and requires further study, the factors that affect remineralisation are well established. The rate of remineralisation is significantly affected by the pH, mineral saturation levels and secretion rate of saliva. Remineralisation only occurs if the pH of the oral fluids in contact with the enamel is above neutral and if there is mineral supersaturation relative to the mineral content of enamel [43]. Saliva acts as an antibacterial solution and as a source of the calcium and phosphate ions required for remineralisation, since it contains high concentrations of lysozymes and other enzymes and is highly saturated with minerals [50,58]. Saliva also acts as a natural buffer solution and can neutralise acidic fluids surrounding the enamel surface [43,50,58]. It has been observed that when saliva secretion increases, the pH rises above neutral, promoting enamel remineralisation [42]. This results in the formation of salivary precipitin, a complex of calcium phosphate and a glycoprotein, which is readily deposited onto dental plaque. Calcium phosphate is 8 to 10 times more soluble than the calcium in the tooth and acts as a sacrificial compound, dissolving before hydroxyapatite and reducing demineralisation. It also acts as another source of calcium and phosphate ions for dental remineralisation. [6]
Diseases and Conditions caused by Dental Caries Dental caries can both directly and indirectly cause a variety of oral diseases and conditions; however, in line with the focus of this review, only oral diseases and conditions where dental caries plays a direct role are discussed. Irreversible pulpitis presents as severe pain across an area of several teeth or, in some cases, pain in the jaw opposing the lesion [51]. Suitable clinical management is either the removal of the tooth or endodontic treatment. [51] This is normally a sign of significant untreated dental caries, where the chemical dissolution has penetrated both the enamel and dentine layers, enabling stimulation of the pulpal nerves, which is the source of the pain. If irreversible pulpitis is not treated, acute apical periodontitis may arise, with symptoms of severe pain in a particular tooth or teeth, worsened if pressure is applied to the tooth. Acute apical periodontitis is the necrosis of pulpal tissue caused by the inflammation of dental pulp in the pulpal cavity. This condition should be managed by the prompt extraction of the affected tooth or aggressive endodontic management. [51]
In many cases, patients diagnosed with acute apical periodontitis may also be diagnosed with an acute apical abscess [51]. An abscess is a cavity filled with accumulated pus, formed by the liquefaction of tissue caused by bacterial infection. An acute apical abscess is caused by a localised intraoral infection of the root canal [52]. Symptoms include mild to severe pain and swelling of the area of infection; systemic symptoms consistent with bacterial infection, such as a fever, may develop. An acute apical abscess can lead to the development of more severe complications which may pose a risk of death. Reported cases of death caused by acute apical abscess include the spread of infection
resulting in sepsis or airway obstruction [52].
Dental Caries Prevention The prevention of dental caries can be approached in a few ways: the removal or reduction of dental plaque, the reduction of acid production by cariogenic bacteria, and the promotion of remineralisation and suppression of demineralisation of the enamel surface. These approaches can be achieved by both chemical and physical methods. The most direct approach is to mechanically reduce or remove dental plaque, achieved by tooth brushing and by dental prophylaxis performed by a dentist. Reduction or removal of dental plaque helps to control the growth and development of dental plaque and reduces colonisation by cariogenic bacteria. The complete or partial removal of dental plaque may arrest enamel demineralisation and may even lead to the remineralisation of the surface enamel [1]. This is because plaque removal increases the contact area of the surface enamel with saliva, which raises the pH of the fluid surrounding the enamel surface and supplies mineral ions to the enamel, promoting remineralisation of the apatite structure. Chemical control of dental plaque using antibacterial solutions or gels, such as chlorhexidine gluconate, has proven very effective and is widely used by dentists for plaque control [58,59,60]. The production of acids by the metabolic reactions of bacteria in the biofilm can be reduced by destroying or removing dental plaque and by controlling one's diet, particularly the ingestion of sugars (especially sucrose) and carbohydrates [2,49,53,54,55]. Reduced sugar intake lowers the rate of cariogenic metabolism and fermentation of sugars, reducing acid production and hence the rate and extent of demineralisation of teeth. Sugar can be replaced by sweeteners which are weakly metabolised or not metabolised at all by cariogenic bacteria. Sugar substitutes such as xylitol are highly effective at reducing acid production, since xylitol is both non-cariogenic and anticariogenic.
Xylitol prevents sucrose molecules from binding to Streptococcus mutans, inhibiting its metabolism and resulting in a bactericidal effect. It is also effective at reducing the activity of other cariogenic bacteria, since many cariogenic bacteria metabolise sucrose, and it significantly reduces the effects of acid erosion of teeth. [8,9,35,47] The last method of preventing dental caries is to increase the rate of remineralisation and decrease the rate of demineralisation of the enamel surface. This is now widely achieved by the introduction of fluoride into the oral cavity and oral fluids in many forms, from fluoridated water to fluoride toothpastes and fluoride gels.
Fluoride The observation that fluoride prevents dental caries is well established; however, the exact mechanism of how and why this occurs is still unknown. The stated ideal dosage of fluoride to prevent dental caries is between 0.05 and 0.07 mg/kg/day [76,77]. Suggested mechanisms are of two types: topical action and systemic action. Topical action is the action of fluoride on mature teeth, achieved by physical contact of fluoride with the enamel surface. Systemic action is the incorporation of fluoride into the tooth structure of pre-erupted teeth. The theory of systemic fluoride action in dental caries treatment has been largely dismissed and shown to be ineffective. [3,4,10,11,12] Topical fluoride acts by three possible mechanisms: inhibition of demineralisation, enhancement of remineralisation, and inhibition of cariogenic enzymes and bacteria. [3,11,56]
Fluoride enhances remineralisation by adsorbing to the enamel surface, attracting calcium and phosphate ions and encouraging the formation of fluorapatite [56]. Fluoride inhibits demineralisation by being incorporated into the apatite structure, forming fluorapatite (Ca10(PO4)6F2), which increases the resistance of the enamel surface to acid dissolution [6,13,56,57]. The mechanism suggests that acid produced by cariogenic bacteria dissolves hydroxyapatite, releasing calcium and phosphate ions into the oral fluids. Remineralisation is achieved when the enamel surface is in contact with saliva; if fluoride ions are present in the fluid, fluoride is incorporated into the apatite matrix, forming fluorapatite crystals. Fluoride ions replace the hydroxyl groups in hydroxyapatite during the reformation of apatite crystals, due to the extremely high electronegativity of fluoride [41,42,57]. The resulting fluorapatite has very strong hydrogen bonds, which make it very stable and less soluble in acid [41,42]; the critical pH of fluorapatite is approximately 4.7 [63]. Furthermore, the fluorapatite crystals formed have a much more compact structure with much smaller pores, reducing the porosity of the enamel surface and hence the rate of demineralisation. Another mechanism by which fluoride inhibits demineralisation is the formation of a calcium fluoride-like material, hereafter referred to as calcium fluoride. Topical fluoride introduced into saliva encourages the precipitation of calcium fluoride [46,62], since it is only produced when minerals in oral fluids react with ionic fluoride, such as NaF, at high concentrations and when the oral fluids are supersaturated with respect to calcium fluoride [57,62].
Pure calcium fluoride does not form, since phosphates, proteins and other ions precipitate onto it; the adsorption of hydrogen phosphate ions stabilises the salt and makes it more resistant to acid attack [63]. When calcium fluoride is exposed to an acidic environment, fluoride ions are liberated from it, since fewer hydrogen phosphate ions are present to stabilise the calcium fluoride, making it more susceptible to acid dissolution [63]. Calcium fluoride is deposited in the porosities of the enamel and acts as a fluoride reservoir when exposed to an acidic environment [61,63]. Some of the deposited calcium fluoride is converted to fluorapatite if the pH of the oral fluids is sufficiently low. [57] Another suggested mechanism is that fluoride present in saliva and plaque fluids interacts with plaque bacteria and inhibits bacterial metabolism, decreasing the rate of demineralisation. Fluoride can inhibit bacterial metabolism by acting as an enzyme inhibitor. Enolase is an enzyme which, during glycolysis (the first stage of respiration), catalyses the production of phosphoenolpyruvate, a precursor of lactic acid, from 2-phosphoglycerate. Bacteria in the biofilm use the phosphoenolpyruvate transport system to transport mono- and disaccharides, such as sucrose, into the cytosol (the aqueous component of the cytoplasm). Fluoride inhibits the catalysing action of enolase [58,63,64], leading to inhibition of the phosphoenolpyruvate transport system-mediated uptake of saccharide substrates (sugars) by cariogenic bacteria. This reduces the rate of glycolysis, which reduces the rate of metabolic functions and the production of acids in dental plaque [14]. Fluoride also inhibits heme-based enzymes, where fluoride ions bind to the enzymes and control their rate of catalysis [14,64].
This occurs in heme-containing enzymes with active sites which normally accommodate a hydroxide ion and a proton; the fluoride ion replaces the hydroxide ion at the active site [64]. Fluoride can also form metal-fluoride complexes, such as AlF4-, which mimic phosphate groups and form ADP complexes rather than the ATP required by proton-translocating F-ATPases [64]. This inhibits the action of F-ATPases, decreasing the rate of active transport of H+ ions out of the cell. Furthermore, fluoride enhances the membrane permeability of protons, since hydrogen fluoride can pass directly through the cell membrane and
transport protons into the cell, decreasing the cytoplasmic pH [56,64]. Fluoride thus causes a net accumulation of protons in the cell, causing cytoplasmic acidification [64] and resulting in the inhibition of pH-sensitive enzymes, many of which are required for essential metabolic reactions such as glycolysis, leading to cell death. However, some studies and reviews have concluded that the low levels of fluoride required for dental caries prevention are not sufficient to cause antibacterial effects [11,64].
Fluoride Metabolism Fluoride metabolism involves three pH-dependent processes: absorption, distribution and excretion. Ingested fluoride is present in one of two forms: undissociated hydrogen fluoride (HF) or ionic fluoride (F-). The proportion of the two forms is highly pH dependent. At a pH of 3.4, the two forms are in equal proportions [65]. If the pH drops below 3.4, the proportion of fluoride in the undissociated form increases; if the pH rises above 3.4, the proportion in the ionic form increases [65]. Approximately 90% of ingested fluoride is absorbed from the gastrointestinal tract [17,56,65], with 20-25% of the ingested dose absorbed in the stomach [56,65]. The remainder is absorbed through the small bowel [65,66]. A study by Whitford (1984) showed that when the pH in the stomach is 4, 22% of ingested fluoride is in the form of HF, and when the pH is 2, 95% is in the form of HF [17,21]. The same study concluded that the lower the gastric pH, the greater the rate of absorption [17,65,67], since the rate of fluoride absorption was 50% higher in a pH 2.1 buffer than in a pH 7.1 buffer [16,17]. The study demonstrated that HF is the main form of fluoride absorbed by cells [65,66,70,74] and also showed that gastric pH plays a significant role in fluoride absorption. HF is absorbed by cells through passive diffusion up the pH gradient, from areas of low pH to areas of high pH, directly permeating the cell membrane [65,66,74]. This passive diffusion mechanism is very efficient and occurs very rapidly, with peak levels of fluoride in blood plasma achieved within 20-60 minutes of ingestion [17,56,65,66]. Although the efficiency of fluoride absorption is very high, the amount of fluoride retained by the organism is significantly lower.
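The pH dependence of HF/F- speciation described above follows the Henderson-Hasselbalch relationship. The hypothetical sketch below takes the crossover pH of 3.4 quoted in the text as the effective pKa of HF and reproduces figures close to the reported values of 95% HF at pH 2 and 22% HF at pH 4:

```python
# Henderson-Hasselbalch sketch of HF vs F- speciation, taking the pKa of HF
# as 3.4 (the pH at which the text reports equal proportions of the two forms).
def hf_fraction(pH, pKa=3.4):
    """Fraction of total fluoride present as undissociated HF at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (2.0, 3.4, 4.0):
    print(f"pH {pH}: {100 * hf_fraction(pH):.0f}% HF")
```

This gives roughly 96% HF at pH 2, 50% at pH 3.4 and 20% at pH 4, in reasonable agreement with the study figures cited in the paragraph.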
The percentage of fluoride which is retained is significantly affected by age, with only 36% and 55% of ingested fluoride retained in healthy adults (18-75 years) and children (<7 years) respectively [56,65]. Fluoride is distributed around the body mainly through the blood plasma, from which it is either deposited in mineralised tissue, such as bone, or removed by the kidneys. About 99% of the total fluoride retained in an organism is found in mineralised tissue, bone in particular [56,65,67]. Dental tissue also contributes to temporary fluoride retention, but only to a small extent [65]. Absorbed fluoride in the bloodstream is removed from the blood plasma by urinary excretion and by absorption into bones and teeth [66]. The main pathway of fluoride excretion is urination: 45% to 60% of ingested fluoride is removed from the body by urinary excretion [65]. As a result, patients with kidney disease or kidney dysfunction are at a higher risk of fluoride toxicity, due to a decreased ability to filter fluoride from the blood plasma [18]. One study suggested that 35-45% of the filtered fluoride is reabsorbed in the proximal tubule [16]. The mechanism of renal fluoride excretion involves glomerular filtration of fluoride in the Bowman's capsule, followed by
the pH-dependent reabsorption in the nephron tubules through the passive diffusion of HF [16]. This suggests that the majority of excreted fluoride is in the ionic form. Furthermore, fluoride reabsorption is greater in an acidic glomerular filtrate than in an alkaline filtrate; in other words, fluoride excretion is greater in alkalosis than in acidosis [16,19,65,67]. The remaining ingested fluoride which is not retained by the body is excreted as ionic fluoride in the faeces [56,65].
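Tying the absorption, retention and excretion figures above together, here is an illustrative single-dose mass balance in Python. The 2 mg dose and the partitioning are hypothetical worked numbers based on the section's approximate percentages, not clinical data:

```python
# Illustrative fluoride mass balance for a single ingested dose, using the
# section's approximate figures for a healthy adult (hypothetical numbers).
ingested_mg = 2.0
absorbed_mg = 0.90 * ingested_mg          # ~90% absorbed from the GI tract
faecal_mg   = ingested_mg - absorbed_mg   # remainder excreted in faeces
retained_mg = 0.36 * ingested_mg          # ~36% of the dose retained (adult)
urinary_mg  = absorbed_mg - retained_mg   # balance cleared by the kidneys

print(f"urinary: {urinary_mg:.2f} mg ({100 * urinary_mg / ingested_mg:.0f}% of dose)")
```

The implied urinary fraction (about 54% of the ingested dose) falls inside the 45-60% range quoted above, so the section's figures are mutually consistent.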
Effects of Excess Fluoride In this section, only the effects of ingested fluoride are discussed, since ingestion is the most likely cause of fluoride poisoning during dental caries prevention and treatment.
Acute Fluoride Toxicity Acute fluoride poisoning is mainly caused by the ingestion of fluoride at a dose close to, equal to or over the probably toxic dose, causing systemic effects and symptoms. The probably toxic dose is the minimum dose which could cause toxic symptoms and which requires immediate medical intervention and hospitalisation [70]. The estimated and widely accepted probably toxic dose of fluoride is 5 mg/kg [20,21,22,56,69,70]. The suggested lethal dose is 15 mg/kg. However, the value of the lethal dose is affected by multiple factors, such as age, method of administration, the influence of food substances on fluoride absorption, the source of fluoride, and body composition. As a result, a range of lethal doses from 7 to 16 mg/kg has been cited in multiple studies [20]. A study conducted by Hodge and Smith (1965) estimated the certainly lethal dose to be 32-64 mg/kg of fluoride in adults. [18,23,70,75]
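The dose thresholds above can be turned into a small worked example. The helper below is hypothetical (an illustration of the arithmetic, not a clinical tool) and simply scales the quoted per-kilogram thresholds by body mass:

```python
# Hypothetical screening helper based on the doses quoted in this section:
# probably toxic dose (PTD) ~5 mg F/kg, suggested lethal dose ~15 mg F/kg.
PTD_MG_PER_KG = 5.0
LETHAL_MG_PER_KG = 15.0

def dose_category(fluoride_mg, body_mass_kg):
    """Classify an ingested fluoride amount against the quoted thresholds."""
    mg_per_kg = fluoride_mg / body_mass_kg
    if mg_per_kg >= LETHAL_MG_PER_KG:
        return "potentially lethal"
    if mg_per_kg >= PTD_MG_PER_KG:
        return "above PTD: urgent medical intervention"
    return "below PTD"

# For example, a 20 kg child would reach the PTD at 5 * 20 = 100 mg of fluoride.
print(dose_category(100.0, 20.0))
```

This also illustrates why paediatric cases dominate acute poisoning reports: a smaller body mass proportionally lowers the absolute amount of fluoride needed to reach the same mg/kg threshold.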
The toxic effects of fluoride arise via four main mechanisms: corrosive action, inhibition of enzyme systems, reduction of calcium in the blood plasma, and the manifestation of an increasing electrolyte imbalance. [20] In low to moderate concentrations, HF is a weak acid and dissociates to form F- and H2F+, which cause gastric mucosa irritation [21]. In high concentrations, HF may cause disruption and damage to the stomach lining and mucosa. This is due to the much higher acidity of HF at high concentrations, since HF undergoes homoassociation in addition to auto-ionisation. Homoassociation involves a reaction between HF and F- to produce HF2-. Homoassociation reduces the concentration of F- produced in the auto-ionisation of HF, shifting the ionisation equilibrium towards the right. This increases the concentration of protonated ions, significantly lowering the pH and making the solution much more acidic.
Auto-ionisation of HF: 2HF --> H2F+ + F-
Homoassociation of HF: HF + F- --> HF2-
Overall: 3HF --> HF2- + H2F+
The systemic effect of fluoride is determined by the concentration, and not just the amount, of fluoride ingested [21]. The higher the concentration of fluoride ingested, the greater the extent of HF dissociation in the stomach, leading to greater acidity. This increases the corrosiveness of HF, causing more damage, more rapidly, in the stomach. A study on the effect of fluoride on the structure and function of canine gastric mucosa [25] concluded that the threshold fluoride concentration at which the structure and function of the mucosa were affected is 1 mmol/litre, and the concentration at which maximum effects were observed is 10 mmol/litre. Notable observed effects of fluoride on the stomach were thinning of the surface cell layer, localised exfoliation and
A Review of Use of Fluoride on Dental Caries Prevention necrosis of surface cells, acute gastritis (inflammation or swelling in stomach lining) and edema (abnormal accumulation of fluid in body tissues).[25] Common symptoms of stomach lining damage and gastric irritation include: vomiting, haemorrhage and diarrhea.[75] Fluoride can inhibit a multitude of enzymes such as Na+-K+-ATPase and carbohydrate metabolising enzymes. Fluoride inhibits the actions of these enzymes by reacting with the necessary metallic ions, resulting in the decrease of available metallic ions which are required. Due to the inhibiting action of fluoride on enzymes, essential metabolic functions such as glycolysis and respiration of the cell are inhibited resulting in cell death[68,73,74,75]. High levels of fluoride in blood plasma will lead to a decrease in concentration of calcium in the bloodstream potentially causing hypocalcemia [56,68,74,75]. This is because calcium ions have very high affinity for fluoride and will bind to fluoride in blood plasma decreasing the concentration of calcium ions in blood plasma by forming insoluble complexes, causing hypocalcemia[68,72,73]. Furthermore, fluoride is highly electronegative and reactive, adding to the increased rapidity of hypocalcaemia development. As a result, patients diagnosed with severe acute fluoride poisoning are at a high risk of hypocalcaemia. Hypocalcemia leads to decreased nerve function since calcium ions are required in the process of impulse transmission across synapses. As a result, hypocalcemia could potentially cause muscular and neurological conditions including seizures, convulsions and tetanus[23,24,73,75]. However, fluorid induced hypocalcemia has also been observed to cause other life threatening complications such as cardiac arrhythmias[23,24,73]. On the contrary, other studies have shown that cardiac failure is not due to hypocalcemia as previously believed[71,72]. 
These studies suggest that fluoride-induced hyperkalemia may be the cause of the observed cardiac deaths in patients with acute fluoride poisoning [71,72]. Hyperkalemia is an increased concentration of potassium ions in the bloodstream; in these studies it appeared shortly before lethal ventricular arrhythmias [56,71,72,73,75].

The source of fluoride affects its immediate toxicity: fluoride is more toxic in soluble form. Highly insoluble sources of fluoride such as calcium fluoride, cryolite, hydroxyfluorapatite and fluorapatite are poorly absorbed in the gastrointestinal tract and are rarely the cause of acute fluoride toxicity. Highly soluble sources, including sodium fluoride, fluorosilicic acid, sodium fluorosilicate and sodium monofluorophosphate, are more likely to be toxic if ingested orally [21,23]. When sodium fluoride is ingested, it rapidly dissociates into sodium and fluoride ions in the stomach. Sodium monofluorophosphate dissociates very slowly in the stomach into sodium and monofluorophosphate ions; the latter ion is mainly absorbed into the bloodstream in the upper intestine, where it is hydrolysed by alkaline phosphatase into fluoride and phosphate ions [26].
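The auto-ionisation and homoassociation equations given earlier can be checked mechanically: summing the two steps and cancelling the shared F- intermediate should recover the overall equation. A toy bookkeeping script (the signed-count convention, negative for reactants, is our own):

```python
# Toy stoichiometry check: auto-ionisation + homoassociation = overall.
# Species counts are signed: negative for reactants, positive for products.
from collections import Counter

auto_ionisation = Counter({"HF": -2, "H2F+": 1, "F-": 1})
homoassociation = Counter({"HF": -1, "F-": -1, "HF2-": 1})

overall = Counter()
for reaction in (auto_ionisation, homoassociation):
    overall.update(reaction)  # Counter.update adds counts, keeping negatives

# drop species that cancel (here F-, the shared intermediate)
overall = {species: n for species, n in overall.items() if n != 0}

print(overall)  # {'HF': -3, 'H2F+': 1, 'HF2-': 1}, i.e. 3HF -> HF2- + H2F+
```

Because F- cancels exactly, only HF, H2F+ and HF2- survive, confirming that the two elementary steps combine to the overall equation quoted in the text.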
Experimental Research
Figure 12: Systemic conditions of acute fluoride poisoning assuming the source of fluoride is Sodium Fluoride [75]
Treatment of Acute Fluoride Toxicity
Acute fluoride poisoning is treated by reducing the absorption of fluoride via the gastrointestinal tract, clinically managing fluoride levels in the bloodstream and increasing the rate of urinary excretion of fluoride [68]. Constant monitoring of blood pH and of plasma fluoride, calcium and potassium levels is essential for providing the clinical information required to treat the patient [21,69]. An ECG must be established quickly to monitor the patient's heart rhythm for potential ventricular arrhythmias [75]. An intravenous line must be established immediately, and blood samples should be taken and sent for testing to determine the plasma concentrations of fluoride, calcium and potassium and the pH of the blood plasma [21,69]. The timing of intubation depends on whether induced vomiting is required; if it is, vomiting should be induced before intubation. In life-threatening cases of fluoride poisoning, vomiting should be induced to reduce the concentration of fluoride present in the stomach and limit further absorption through the gastrointestinal tract [21,69]. Induced vomiting is not advised if the patient is unconscious, to prevent tracheal blockage. Oral administration of 1% calcium chloride or calcium gluconate tablets or solutions [21,65,69,75] reduces fluoride absorption from the gastrointestinal tract: the calcium binds to fluoride ions, producing an insoluble complex and reducing the concentration of absorbable fluoride. Intravenous fluoride and hypocalcemia are managed by the administration
of 1% calcium chloride or calcium gluconate tablets or solutions [21,65,69,75]. Calcium chloride is normally preferred because it contains a higher concentration of elemental calcium (272 mg of elemental calcium in 10 ml of 10% calcium chloride solution), providing a more rapid increase in plasma calcium levels [21,24]. Intravenous calcium raises blood calcium levels, which are severely depleted, and also reduces the absorption of fluoride. In cases of acidosis, sodium lactate or sodium bicarbonate should be administered to decrease the concentration of readily absorbable HF, increase the renal excretion of fluoride and neutralise the acidic pH of the blood plasma [21,69,75]. Alkalinisation of blood plasma plays a significant role in promoting the movement of fluoride out of cells, since fluoride moves up the pH gradient, from low pH to high pH [65]. Fluoride-induced hyperkalemia cannot be prevented with glucose, insulin or bicarbonate [71]; it is treated with quinidine sulfate [72]. Furthermore, hemodialysis may be used to further increase the rate of fluoride removal from the blood plasma [75].
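The sequence described above can be sketched as simple decision logic. This is a didactic paraphrase of the text, not a clinical protocol; the function name and step wording are our own.

```python
# Hedged sketch of the triage logic described in the text, NOT a clinical
# protocol. Step strings are paraphrases; ordering follows the description.
def acute_fluoride_management(conscious: bool, life_threatening: bool) -> list:
    steps = [
        "establish intravenous line; draw bloods (F-, Ca2+, K+, pH)",
        "start ECG monitoring for ventricular arrhythmias",
    ]
    # Vomiting is induced only for life-threatening ingestion in a conscious
    # patient (unconscious: risk of tracheal blockage), before intubation.
    if life_threatening and conscious:
        steps.append("induce vomiting before intubation")
    steps.append("oral calcium (1% CaCl2 or calcium gluconate) to bind luminal F-")
    steps.append("IV calcium for hypocalcemia; bicarbonate/lactate if acidotic")
    return steps

for step in acute_fluoride_management(conscious=False, life_threatening=True):
    print("-", step)
```

Note how the unconscious patient's plan omits induced vomiting while keeping every other step, mirroring the contraindication stated above.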
Dental Fluorosis
Dental fluorosis is discolouration and increased porosity of teeth caused by hypomineralisation of the dental enamel. It occurs through chronic exposure of the enamel to an excess of fluoride during tooth formation, leading to teeth with decreased mineral content and increased porosity [42,77]. Fluorosis develops if the safe dose of fluoride (0.05-0.07 mg/kg/day) is exceeded [77]. It has been proposed that dental fluorosis occurs if teeth are exposed to a sustained, excessive amount of fluoride within a critical period known as the window of susceptibility, approximately between the ages of 15 and 30 months [27,76,77,80], with the broader critical period between 1 and 4 years old; children beyond the age of eight are normally not at risk of developing fluorosis [77]. The risk of fluorosis is increased through the use of four sources of fluoride: fluoridated water, fluoride supplements, topical fluoride and prescribed fluoride. Fluoridated water alone is responsible for 40% of cases of fluorosis in the United States [77]. Mild fluorosis is characterised by bilateral white opaque lesions that run horizontally across the enamel surface; these lesions may join to form a white patch [77]. However, lesions may not appear bilateral where mineral has been lost from the enamel surface [78]. Diagnosis of mild fluorosis is very difficult, since the white opaque lesions look similar to those of other conditions such as white spot lesions, amelogenesis imperfecta, dentinogenesis imperfecta and tetracycline stains [78]. In more severe forms of fluorosis, the enamel surface may be pitted and/or discoloured [77,79], the discolouration being yellow or brown [78,79]. Upon eruption of fluorosed teeth the enamel is not discoloured; the discolouration develops as metal ions such as iron and copper accumulate in the abnormally porous enamel [77].
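As a rough illustration of the dose and age criteria above, the following sketch flags an estimated daily intake against the quoted 0.05-0.07 mg/kg/day band and the window of susceptibility. The function name, thresholds as coded and return strings are our own; this is not a diagnostic tool.

```python
# Illustrative only: compare a child's estimated daily fluoride intake with
# the safe-dose band and age windows quoted in the text.
SAFE_DOSE_MG_PER_KG_PER_DAY = (0.05, 0.07)  # quoted safe range [77]

def fluorosis_risk(daily_intake_mg: float, body_mass_kg: float,
                   age_months: int) -> str:
    dose = daily_intake_mg / body_mass_kg
    in_window = 15 <= age_months <= 30  # window of susceptibility [27,76,77,80]
    if dose <= SAFE_DOSE_MG_PER_KG_PER_DAY[1]:
        return "within quoted safe range"
    if age_months > 8 * 12:  # beyond age eight: normally not at risk [77]
        return "excess intake, but past the at-risk age for fluorosis"
    return ("excess intake inside the window of susceptibility"
            if in_window else "excess intake during tooth formation")

# e.g. a 12 kg two-year-old taking in 1.5 mg/day is at 0.125 mg/kg/day
print(fluorosis_risk(daily_intake_mg=1.5, body_mass_kg=12, age_months=24))
```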
Dental fluorosis normally produces lesions in multiple teeth, since exposure to excess fluoride is rarely specific to a single tooth. Classification of dental fluorosis is very important because the treatment and management of fluorosis depend heavily on the aesthetic properties of the lesion. The most widely used fluorosis classification index is the Thylstrup-Fejerskov Index (TFI), which uses a scale of 1-9, with higher scores indicating greater severity [78]. This index classifies lesions into the following categories: mild (TFI 1-3), moderate (TFI 4-5) and severe (TFI 6-9) [77].
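The TFI banding described above is a simple lookup; a minimal sketch (the function name is our own):

```python
# Minimal sketch of the TFI severity bands quoted in the text [77].
def tfi_category(score: int) -> str:
    """Map a Thylstrup-Fejerskov Index score to the quoted severity bands."""
    if not 1 <= score <= 9:
        raise ValueError("TFI score expected in 1-9")
    if score <= 3:
        return "mild"
    if score <= 5:
        return "moderate"
    return "severe"

print([tfi_category(s) for s in (1, 4, 7)])  # ['mild', 'moderate', 'severe']
```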
Figure 13: The clinical criteria and scoring for the Thylstrup Fejerskov Index for Dental Fluorosis [78]
Figures 14 (top left), 15 (top right), 16 (middle left), 17 (middle right), 18 (bottom left) and 19 (bottom right) show examples of dentitions with differing severity of fluorosis and their corresponding TFI scores [78]
A Review of Use of Fluoride on Dental Caries Prevention
Treatment and Prevention of Fluorosis
Dental fluorosis can be treated aesthetically by clinical methods; however, for long-term management the patient should control their fluoride intake by avoiding ingestion of fluoridated water and reducing use of dental products with a high fluoride concentration. Clinical management of dental fluorosis depends on the TFI score of the lesion. Lesions of TFI 1-2 and TFI 1-4 can be treated conservatively by bleaching and microabrasion respectively; initial microabrasion followed by bleaching has been observed to give a better aesthetic result. Composite resin restoratives can also be used to manage discoloured TFI 1-3 lesions. For TFI ≥ 5, composite restoratives or aesthetic veneers should be used to manage the lesion. For severely discoloured lesions of TFI 8-9, the use of prosthetic crowns should be considered, depending on the preferences of the patient and the location of the lesion [21,77,78].
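The management options by TFI band can likewise be expressed as a small lookup. This is a didactic paraphrase of the text, not clinical guidance; the option wording is our own.

```python
# Sketch of the per-band management options described in the text [21,77,78].
def fluorosis_treatment_options(tfi: int) -> list:
    """Return the conservative-to-prosthetic options quoted for a TFI score."""
    options = []
    if 1 <= tfi <= 2:
        options.append("bleaching")
    if 1 <= tfi <= 4:
        options.append("microabrasion (initial microabrasion, then bleaching)")
    if 1 <= tfi <= 3:
        options.append("composite resin for discoloured lesions")
    if tfi >= 5:
        options.append("composite restoratives or aesthetic veneers")
    if tfi >= 8:
        options.append("consider prosthetic crowns")
    return options

print(fluorosis_treatment_options(2))
print(fluorosis_treatment_options(9))
```

Encoding the bands this way makes their overlap explicit: a TFI 2 lesion qualifies for all three conservative options, while TFI 6-7 lesions map only to restoratives or veneers.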
Conclusion
In conclusion, the current scientific evidence strongly supports the use of fluoride to prevent dental caries. An overwhelming number of studies support the effectiveness of fluoride in dental caries treatment and prevention. Scientific advances have further increased our understanding of how fluoride prevents dental caries, of the factors and processes involved in caries development, and of the biological and chemical mechanisms involved. However, more research is required to confirm the suggested mechanisms and theories of fluoride's effects on the various aspects of dental caries, since the exact mechanisms remain uncertain. A fuller understanding of the true biological and chemical mechanisms of dental caries can help to improve its management and prevention. Currently, fluoride is used extensively to prevent dental caries, and the amount of fluoride in fluoride-containing dental products is regulated in certain jurisdictions such as the United States of America and the European Union. It is imperative that governments and health care organisations apply suitable regulations on safe fluoride levels in both clinical and domestic environments, as a breach can lead to both acute and chronic consequences for patients and their families. Furthermore, there should be greater emphasis on educating the public, especially parents of young children, about fluoride use, its benefits and its potential hazards. This can be achieved through dentists or through public oral health campaigns, dental caries education programmes and fundraising charity events. In the United States, some schools provide oral health education programmes organised by both public and private organisations.
Oral health education at school is a viable method of teaching students how to prevent dental caries, although there needs to be considerable discussion of when this education should be put in place, i.e. at what age such a programme will be most beneficial to students. Although the use of fluorides has led to a significant decrease in the prevalence of dental caries around the world, caries prevention and treatment are not achieved by fluoride use alone. In developed countries such as the United States, there is a great need to educate the population about the effects of dental caries and how to prevent its development. More resources should be put in place to help patients control their diet: the modern diet plays a significant role in the prevalence of dental caries, and greater effort needs to go into patient education. Improvements in the dietary habits of local populations can greatly decrease the frequency of caries development in a community. In addition, dental caries is still widely prevalent in developing countries such as India and Bangladesh; these countries should direct greater efforts into community dentistry, which could play a significant role in controlling the extent of dental
caries development. Clinically, further investigation into fluoridated dental fillings and implants should be undertaken to improve oral health and the end result of restorations. A possible further research project for me would be to investigate the fluoride-releasing properties of dental restorative composites so that their anti-caries effectiveness can be assessed. If the effectiveness of current fluoridated dental restoratives is sub-par, I may go on to research and produce a new, more effective fluoride-releasing dental restorative.
References 1. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 4-20. 2. Decker, Riva Touger, and Cor Van Loveren. “Sugars and Dental Caries.” The American Journal of Clinical Nutrition, vol. 78, no. 4, 1 Oct. 2003, pp. 881-892. 3. Featherstone, J D. “Prevention and Reversal of Dental Caries: Role of Low Level Fluoride.” Community Dentistry and Oral Epidemiology, vol. 27, Feb. 1999, pp. 31-40. 4. Koo, H. “Strategies to Enhance the Biological Effects of Fluoride on Dental Biofilms.” Advances in Dental Research, vol. 20, no. 1, July 2008, pp. 17-21. 5. Karpinski, Tomasz M., and Anna K. Szkaradkiewicz. “Microbiology of Dental Caries.” Journal of Biology and Earth Sciences, vol. 3, no. 1, 2013, pp. 21-24. 6. Neel, E A. A., et al. “Demineralization-Remineralization Dynamics in Teeth and Bone.” International Journal of Nanomedicine, vol. 11, Sept. 2016, pp. 4743-4763. 7. Meurman, J H, and J M Cate. “Pathogenesis and Modifying Factors of Dental Erosion.” European Journal of Oral Sciences, vol. 104, Apr. 1996, pp. 199-206. 8. Lee, Yoon. “Diagnosis and Prevention Strategies for Dental Caries.” Journal of Lifestyle Medicine, vol. 3, no. 2, Sept. 2013, pp. 107-109. 9. Trahan, L. “Xylitol: a Review of Its Action on Mutans Streptococci and Dental Plaque-Its Clinical Significance.” International Dental Journal, vol. 45, Feb. 1995, pp. 77-92. 10. Thylstrup, A. “Clinical Evidence of the Role of Pre-Eruptive Fluoride in Caries Prevention.” Journal of Dental Research, vol. 69, no. 2, Feb. 1990, pp. 742-750. 11. Lynch, R JM, et al. “Low-Levels of Fluoride in Plaque and Saliva and Their Effects on the Demineralisation and Remineralisation of Enamel; Role of Fluoride Toothpastes.” International Dental Journal, vol. 54, no. S5, Oct. 2004, pp. 304-309. 12. Beltran, Eugenio D, and Brian A Burt. 
“The Pre and Post-Eruptive Effects of Fluoride in the Caries Decline.” Journal of Public Health Dentistry, vol. 48, no. 4, Dec. 1988, pp. 233-240. 13. Legeros, R Z. “Chemical and Crystallographic Events in the Caries Process.” Journal of Dental Research, vol. 69, no. 2, Feb. 1990, pp. 567-574. 14. Marquis, Robert E. “Antimicrobial Actions of Fluoride for Oral Bacteria.” Canadian Journal of Microbiology, vol. 41, no. 11, 1995, pp. 955-964. 15. Sturr, Michael G, and Marquis, Robert E. “Inhibition of Proton-Translocating ATPases of Streptococcus Mutans and Lactobacillus Casei by Fluoride and Aluminium.” Archives of Microbiology, vol. 155, no. 1, Dec. 1990, pp. 22-27. 16. Whitford, G M, et al. “Fluoride Renal Clearance: a PH-Dependent Event.” The American Journal of Physiology, vol. 230, no. 2, Feb. 1976, pp. 527-532. 17. Whitford, G M, and D H Pashley. “Fluoride Absorption: The Influence of Gastric Acidity.” Calcified Tissue International, vol. 36, no. 3, May 1984, pp. 302-307. 18. United States, Office of Health and Environmental Assessment, Environmental Criteria and Assessment Office, and Kathleen M. Thiessen. “Summary Review of Health Effects Associated with Hydrogen Fluoride and Related Compounds.” 1988. 19. Ekstrand, J, et al. “Fluoride Pharmacokinetics During Acid-Base Balance Changes in Man.” European Journal of Clinical Pharmacology, vol. 18, no. 2, Mar. 1980, pp. 189-194. 20. Martínez-Mier, E. Angeles. “Fluoride: Its Metabolism, Toxicity, and Role in Dental Health.” Journal of Evidence-Based Integrative Medicine, vol. 17, no. 1, 4 Dec. 2011, pp. 28-32. 21. Whitford, G.M. “Acute Toxicity of Ingested Fluoride.” Fluoride and the Oral Environment, edited by Marília Afonso Rabelo Buzalaf, vol. 22, Karger, 2011, pp. 66-80. 22. Whitford, G.M. “The Physiological and Toxicological Characteristics of Fluoride.” Journal of Dental Research, vol. 69, no. 2, 1 Feb. 1990, pp. 539-549. 23. U.S. Department of Health and Human Services, et al. 
“Toxicological Profile for Fluorides, Hydrogen Fluoride, and Fluorine.”, Agency for Toxic Substances and Disease Registry, Sept. 2003 24. Suneja, Manish, and Heather A. Muster. “Hypocalcemia Treatment & Management.” Edited by Eleanor Lederer and Vecihi Batuman, Hypocalcemia Treatment & Management: Approach Considerations, Mild Hypocalcemia, Severe
Hypocalcemia, Medscape, 23 Jan. 2018. 25. Whitford, G.M, et al. “Effects of Fluoride on Structure and Function of Canine Gastric Mucosa.” Digestive Diseases and Sciences, vol. 42, no. 10, Oct. 1997, pp. 2146-2155. 26. Aguilar, F., et al. “Sodium Monofluorophosphate as a Source of Fluoride Added for Nutritional Purposes to Food Supplements.” The EFSA Journal, vol. 886, 2008, pp. 1-18. 27. Hong, L, et al. “Timing of Fluoride Intake in Relation to Development of Fluorosis on Maxillary Central Incisors.” Community Dentistry and Oral Epidemiology, vol. 34, no. 4, Aug. 2006, pp. 299-309. 28. Hiremath, S. S. “Epidemiology of Dental Caries.” Textbook of Preventive and Community Dentistry, 2nd ed., ELSEVIER INDIA, 2011, p. 141. 29. Marya, C. M. A Textbook of Public Health Dentistry. Jaypee Brothers Medical Publishers, 2011. 30. “The Story of Fluoridation.” National Institute of Dental and Craniofacial Research, U.S. Department of Health and Human Services, www.nidcr.nih.gov/health-info/fluoride/the-story-of-fluoridation. 31. Liljemark, W. F., and C. Bloomquist. “Human Oral Microbial Ecology and Dental Caries and Periodontal Diseases.” Critical Reviews in Oral Biology and Medicine, vol. 7, no. 2, Apr. 1996, pp. 180-198. 32. Byun, Roy, et al. “Quantitative Analysis of Diverse Lactobacillus Species Present in Advanced Dental Caries.” Journal of Clinical Microbiology, vol. 42, no. 7, July 2004, pp. 3128-3136. 33. Takahashi, N., and B. Nyvad. “The Role of Bacteria in the Caries Process.” Journal of Dental Research, vol. 90, no. 3, Oct. 2010, pp. 294-303. 34. Marsh, Philip D. “Dental Plaque as a Biofilm and a Microbial Community-Implications for Health and Disease.” BMC Oral Health, vol. 6, no. 1, 10 July 2006. 35. Maguire, A, and A J Rugg-Gunn. “Xylitol and Caries Prevention-Is It a Magic Bullet?” British Dental Journal, vol. 194, 26 Apr. 2003, pp. 429-436. 36. Marsh, P D. 
“Microbiological Aspects of Dental Plaque and Dental Caries.” Dental Clinics of North America, vol. 43, no. 4, 1 Oct. 1999, pp. 599-614. 37. Brailsford, S. R., et al. “The Predominant Aciduric Microflora of Root Caries Lesions.” Journal of Dental Research, vol. 80, no. 9, Sept. 2001, pp. 1828-1833. 38. Struzycka, Izabela. “The Oral Microbiome in Dental Caries.” Polish Journal of Microbiology, vol. 63, no. 2, 2014, pp. 127-135. 39. Selvig, Knut A. “The Crystal Structure of Hydroxyapatite in Dental Enamel as Seen with the Electron Microscope.” Journal of Ultrastructure Research, vol. 41, no. 3-4, Nov. 1972, pp. 369-375. 40. Holmen, L., et al. “A Scanning Electron Microscopic Study of Progressive Stages of Enamel Caries In Vivo.” Caries Research, vol. 19, no. 4, 1985, pp. 355-367. 41. Robinson, C., et al. “The Chemistry of Enamel Caries.” Critical Reviews in Oral Biology and Medicine, vol. 11, no. 4, Oct. 2000, pp. 481-495. 42. Simmer, J. P., and A. G. Fincham. “Molecular Mechanisms of Dental Enamel Formation.” Critical Reviews in Oral Biology and Medicine, vol. 6, no. 2, Apr. 1995, pp. 84-108. 43. Featherstone, J. D.B., and Adrian Lussi. “Understanding the Chemistry of Dental Erosion.” Monographs in Oral Science, vol. 20, 2006, pp. 66-76. 44. Bartlett, D. W., et al. “The Relationship between Gastro-Oesophageal Reflux Disease and Dental Erosion.” Journal of Oral Rehabilitation, vol. 23, no. 5, May 1996, pp. 289-297. 45. Schroeder, Patrick l., et al. “Dental Erosion and Acid Reflux Disease.” Annals of Internal Medicine, vol. 122, 1995, pp. 809-815. 46. Magalhaes, A. C., et al. “Fluoride in Dental Erosion.” Monographs in Oral Science, vol. 22, 2011, pp. 158-170. 47. Amaechia, B. T., et al. “The Influence of Xylitol and Fluoride on Dental Erosion.” Archives of Oral Biology, vol. 43, no. 2, Apr. 1998, pp. 157-161. 48. Lussi, A., et al. “Dental Erosion- An Overview with Emphasis on Chemical and Histopathological Aspects.” Caries Research, vol. 45, May 2011, pp. 2-12. 49. 
Aires, C. P., and C. P.M. Tabchoury. “Effect of Sucrose Concentration on Dental Biofilm Formed in Situ and on Enamel Demineralisation.” Caries Research, vol. 40, 2006, pp. 28-32. 50. Dowd, F. J. “Saliva and Dental Caries.” Dental Clinics of North America, vol. 43, no. 4, Oct. 1999, pp. 579-597. 51. Edwards, Paul C, and Preetha Kanjirath. “Recognition and Management of Common Acute Conditions of the Oral Cavity Resulting From Tooth Decay, Periodontal Disease and Trauma: An Update for a Family Physician.” Journal of the American Board of Family Medicine, vol. 23, no. 3, 2010, pp. 285-294. 52. Siqueira, Jose F., and Isabela N. Rocas. “Microbiology and Treatment of Acute Apical Abscesses.” Clinical Microbiology Reviews, vol. 26, no. 2, Apr. 2013, pp. 255-273. 53. Marinho, Valeria C.C., and Helen V. Worthington. “Fluoride Mouth Rinses for Preventing Dental Caries in Children and Adolescents.” Cochrane Database of Systematic Reviews, no. 7, 2016. 54. Burt, B. A., and S. A. Eklund. “The Effects of Sugar Intake and Frequency of Ingestion on Dental Caries Increment in a Three-Year Longitudinal Study.” Journal of Dental Research, vol. 67, no. 11, Nov. 1988, pp. 1422-1429. 55. Sheiham, A., and W. P.T. James. “Diet and Dental Caries: The Pivotal Role of Free Sugars Reemphasized.” Journal of
Dental Research, vol. 94, no. 10, Aug. 2015, pp. 1341-1347. 56. Kanduti, Domen, et al. “Fluoride: A Review of Use and Effects on Health.” Materia Socio-Medica, vol. 28, no. 2, Mar. 2016, pp. 133-137. 57. Cate, J. M., and J. D.B. Featherstone. “Mechanistic Aspects of the Interactions between Fluoride and Dental Enamel.” Critical Reviews in Oral Biology and Medicine, vol. 2, no. 2, July 1991, pp. 283-296. 58. Featherstone, J. D.B. “The Science and Practice of Caries Prevention.” The Journal of the American Dental Association, vol. 131, no. 7, July 2000, pp. 887-899. 59. Addy, Martin, and John M. Moran. “Clinical Indications for the Use of Chemical Adjuncts to Plaque Control: Chlorhexidine Formulations.” Periodontology 2000, vol. 15, 1997, pp. 52-54. 60. Van Strydonck, Danielle A.C., and Dagmar E. Slot. “Effect of a Chlorhexidine Mouthrinse on Plaque, Gingival Inflammation and Staining in Gingivitis Patients: a Systematic Review.” Journal of Clinical Periodontology, vol. 39, no. 11, Nov. 2012, pp. 1042-1055. 61. Ogaard, B., and L. Seppa. “Professional Topical Fluoride Applications-Clinical Efficacy and Mechanism of Action.” Advances in Dental Research, vol. 8, no. 2, July 1994, pp. 190-201. 62. Fejerskov, O., and M. J. Larsen. “Dental Tissue Effects of Fluoride.” Advances in Dental Research, vol. 8, no. 1, June 1994, pp. 15-31. 63. Lussi, Adrian, and Elmar Hellwig. “Fluorides-Mode of Action and Recommendations for Use.” Schweizer Monatsschrift Fur Zahnmedizin, vol. 122, no. 11, 2012, pp. 1030-1042. 64. Marquis, Robert E., and Sarah A. Clock. “Fluoride and Organic Weak Acids as Modulators of Microbial Physiology.” FEMS Microbiology Reviews, vol. 26, no. 5, Jan. 2003, pp. 493-510. 65. Buzalaf, Camila P., and Aline D.L. Leite. “Fluoride Metabolism.” Fluorine: Chemistry, Analysis, Function and Effects, Royal Society of Chemistry, 2015, pp. 54-74. 66. Buzalaf, M. A., and G. M. Whitford. “Fluoride Metabolism.” Monographs in Oral Science, vol. 
22, June 2011, pp. 20-36. 67. Whitford, G. M. “Intake and Metabolism of Fluoride.” Advances in Dental Research, vol. 8, no. 1, June 1994, pp. 5-14. 68. McIvor, Michael E. “Acute Fluoride Toxicity.” Drug Safety, vol. 5, no. 2, Mar. 1990, pp. 79-85. 69. Whitford, G. M. “Acute Fluoride Toxicity.” Monographs in Oral Science, vol. 16, 1996, pp. 112-136. 70. Whitford, G. M. “Acute and Chronic Fluoride Toxicity.” Journal of Dental Research, vol. 71, no. 5, May 1992, pp. 1249-1254. 71. McIvor, Michael E., and Charles E. Cummings. “Sudden Cardiac Death from Acute Fluoride Intoxication: The Role of Potassium.” Annals of Emergency Medicine, vol. 16, no. 7, July 1987, pp. 777-781. 72. Cummings, Charles C., and Michael E. McIvor. “Fluoride-Induced Hyperkalemia: The Role of Ca2+-Dependent K+ Channels.” The American Journal of Emergency Medicine, vol. 6, no. 2, Jan. 1988, pp. 1-3. 73. Baltazar, Romulo F., and Morton M. Mower. “Acute Fluoride Poisoning Leading to Fatal Hyperkalemia.” Chest Journal, vol. 78, no. 4, Oct. 1980, pp. 660-663. 74. Barbier, Olivier, et al. “Molecular Mechanisms of Fluoride Toxicity.” Chemico-Biological Interactions, vol. 188, no. 2, Nov. 2010, pp. 319-333. 75. Smith, Frank A. “Fluoride Toxicity.” Handbook of Hazardous Materials, edited by Morton Corn, Academic Press Inc, 1993, pp. 277-283. 76. Levy, Steven M. “An Update on Fluorides and Fluorosis.” Journal of the Canadian Dental Association, vol. 69, no. 5, 2003, pp. 286-291. 77. Alvarez, Jenny A., and Karla M.P.C. Rezende. “Dental Fluorosis: Exposure, Prevention and Management.” Journal of Clinical and Experimental Dentistry, vol. 1, no. 1, 2009, pp. 14-18. 78. Cavalheiro, Jessica P., et al. “Clinical Aspects of Dental Fluorosis According to Histological Features: a Thylstrup Fejerskov Index Review.” CES Odontologia, vol. 30, no. 1, June 2017, pp. 41-50. 79. Moller, I. J. “Fluorides and Dental Fluorosis.” International Dental Journal, vol. 32, no. 2, June 1982, pp. 135-147. 80. Browne, Deirdre, et al. 
“Fluoride Metabolism and Fluorosis.” Journal of Dentistry, vol. 33, no. 3, Mar. 2005, pp. 177-186.
Figures and Tables 1. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 17. 2. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 17. 3. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 17. 4. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 17. 5. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 13.
6. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 13. 7. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 11. 8. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 11. 9. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 11. 10. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 14. 11. Fejerskov, Ole, and Edwina Kidd. “Defining the Disease: an Introduction.” Dental Caries: The Disease and Its Clinical Management, 2nd ed., Blackwell, 2008, pp. 10. 12. Smith, Frank A. “Fluoride Toxicity.” Handbook of Hazardous Materials, edited by Morton Corn, Academic Press Inc, 1993, pp. 279. 13. Cavalheiro, Jessica P., et al. “Clinical Aspects of Dental Fluorosis According to Histological Features: a Thylstrup Fejerskov Index Review.” CES Odontologia, vol. 30, no. 1, June 2017, pp. 43. 14. Cavalheiro, Jessica P., et al. “Clinical Aspects of Dental Fluorosis According to Histological Features: a Thylstrup Fejerskov Index Review.” CES Odontologia, vol. 30, no. 1, June 2017, pp. 44. 15. Cavalheiro, Jessica P., et al. “Clinical Aspects of Dental Fluorosis According to Histological Features: a Thylstrup Fejerskov Index Review.” CES Odontologia, vol. 30, no. 1, June 2017, pp. 44. 16. Cavalheiro, Jessica P., et al. “Clinical Aspects of Dental Fluorosis According to Histological Features: a Thylstrup Fejerskov Index Review.” CES Odontologia, vol. 30, no. 1, June 2017, pp. 44. 17. 
Cavalheiro, Jessica P. et al. “Clinical Aspects of Dental Fluorosis According to Histological Features: a Thylstrup Fejerskov Index Review.” CES Odontologia, vol. 30, no. 1, June 2017, pp. 44. 18. Cavalheiro, Jessica P. et al. “Clinical Aspects of Dental Fluorosis According to Histological Features: a Thylstrup Fejerskov Index Review.” CES Odontologia, vol. 30, no. 1, June 2017, pp. 44. 19. Cavalheiro, Jessica P. et al. “Clinical Aspects of Dental Fluorosis According to Histological Features: a Thylstrup Fejerskov Index Review.” CES Odontologia, vol. 30, no. 1, June 2017, pp. 44.
Environmentally-Friendly Methods of Synthesising Azo Dyes An investigation into their effectiveness
Greg Chu (Y13), Supervised by Lotje Smith
Abstract
Azo dyes are the most commonly used dyes in modern industry. However, the synthesis of azo dyes such as para-red is not environmentally friendly: it requires low temperatures and creates copious amounts of waste, some of it potentially carcinogenic. Substantial research has been done to tackle this problem using different routes of synthesis, such as the use of supercritical carbon dioxide, nanoparticles, magnetic solids, salicylic acid derivatives, and yeast as solid catalysts. These methods have been shown to be feasible, but problems prevent them from being commercially viable. Recently, Manuel I. Velasco, Claudio O. Kinen, Rita Hoyos de Rossi and Laura I. Rossi (September 2011) [9] proposed a new method, modified from the traditional one mainly by using different solvents to synthesise azo dyes. The method was proven to be feasible, producing no waste in the process, but the azo dyes produced have not been compared with those formed by the traditional method. This investigation aims to create azo dyes with this new method and with the traditional method suggested by the Chinese University of Hong Kong (CUHK) and the University of New Brunswick (UNB). The azo dyes produced by the traditional method and the new methods will then be compared in three respects: colour saturation, percentage yield and degradability. The results show that the new method, using concentrated hydrochloric acid with ferric nitrate or concentrated nitric acid as solvents, leaves no waste material, i.e. it has 100% atom economy. It also showed excellent flexibility in combining all the tested reagents, notably 3-nitroaniline, which was not successful using the traditional approach. In addition, the degradability and colour saturation of dyes made by this method were only slightly worse than those of dyes made via the traditional method.
Although the cost of this method is higher, its environmental advantages give it high potential to become the standard commercial approach to azo dye synthesis in the future.
Introduction Dyes are natural or synthetic substances which give colour. Four main structural features cause a dye to exhibit colour. Firstly, it contains at least one chromophore, a colour-bearing group. Secondly, it has a conjugated system, in which single and double bonds alternate. Thirdly, it exhibits resonance of electrons. Lastly, it absorbs light in the visible spectrum through the π-delocalisation of the electrons in the benzene ring. Azo dyes are the most commonly used dyes, accounting for 70% of all dyes used around the world [1]. They are predominantly utilised in the textile and food industries [2]. They are popular commercial dyes because they are cheap and easy to manufacture in large quantities. They are also very long-lasting, with a usage time of up to 20 years [15], and very difficult to remove. However, azo dyes also have disadvantages. Firstly, some forms, such as dinitroaniline orange, are carcinogenic. Secondly, leftover solvents and waste materials produced during synthesis are deleterious to the environment. Since the industry is enormous, the harmful effects of waste products are magnified, especially in developing countries, which produce the most dyes but have a low level of awareness about the damage they cause to the environment. To protect the environment, interventions such as laws and education could regulate the disposal of waste products, though this is not the most effective approach. A better solution is to explore new ways of synthesising azo dyes that cause less harm than the traditional method, so as to make the
industry more sustainable. There have been studies exploring different eco-friendly methods of synthesising azo dyes. One method uses supercritical carbon dioxide, which reduces the mineral acid needed as a proton source. Synthesis of azo dyes in supercritical carbon dioxide depends on three factors: basicity, solubility and steric hindrance. However, supercritical carbon dioxide is difficult to produce and unfeasible in underdeveloped countries that lack suitable equipment for its production. Other methods include the use of nanoparticles [3][4], the use of a magnetic solid as a catalyst so that the production of azo dyes need not be carried out at low temperatures [5], or the use of salicylic acid, yeast and a solid acid as catalysts to speed up the reaction [6][7]. In addition, a green method using methyl nitrite [8] has also been shown to succeed. However, all of these materials are difficult to source. Velasco, Kinen, de Rossi and Rossi [9] modified the traditional method mainly by using different solvents to synthesise azo dyes. The method was proven to be feasible, resulting in no waste, but it has not been compared with the traditional method in terms of degradability and colour saturation. This investigation aims to create azo dyes with this new method and with the traditional method suggested by the Chinese University of Hong Kong (CUHK) [10] and the University of New Brunswick (UNB) [11]. The azo dyes produced by the traditional method and the new methods are then compared in three respects: colour saturation, percentage yield and degradability. Improvised changes to the new method are also explored to see whether they improve the product.
Methods Materials and Reagents For the first stage of the experiment, concentrated hydrochloric acid HCl, sulfanilic acid, concentrated nitric acid HNO3, ferric nitrate Fe(NO3)3(s), sodium nitrite NaNO2, sodium hydroxide NaOH, sodium chloride NaCl, 1-naphthol, 2-naphthol, salicylic acid, 2-nitroaniline, 3-nitroaniline, 4-nitroaniline and acetonitrile were used. These chemicals were all kindly provided by Bragg and Co., except for the sodium hydroxide solution, which was prepared in-house from NaOH pearls at a concentration of 4 M. Prior to each experiment, the chemicals were stored in a cupboard maintained at 25°C and 20% humidity, protected from sunlight. Preparation of the Experiment The experiment had to be carried out below 5°C because diazonium compounds are very unstable at room temperature. A thermally insulating carrier was placed around the ice to create an ice bath, and the temperature was monitored every 5 minutes. After several trials, it was noted that the temperature increased by 3-4°C over the course of one experiment, so ice was replenished throughout. The experiments were carried out in a fume cupboard as they all required concentrated hydrochloric acid and some required concentrated nitric acid. In addition, gloves were worn at all times, as azo dyes are easily absorbed through the skin. [12]
First Stage of Experiment - synthesising azo dyes The traditional method of synthesising azo dyes was designed according to the guidelines provided by the Chinese University of Hong Kong (CUHK) [10] and the University of New Brunswick (UNB) [11], with appropriate modifications for carrying it out in the school laboratory. The new method of synthesising azo dyes was designed according to Velasco, Kinen,
de Rossi and Rossi [9]. Each nitroaniline (2-nitroaniline, 3-nitroaniline, 4-nitroaniline) was combined with each naphthol derivative (1-naphthol, 2-naphthol, salicylic acid), giving 9 combinations of reagents for the traditional method (A1, A2, A3, B1, B2, B3, C1, C2, C3). The combinations and their codes are presented in Table 1 below.
Table 1: Combinations of the reagents and their codes
Apparatus used for the traditional method of synthesising azo dye:
Five boiling tubes
Two 25 ml measuring cylinders
Digital pipettes (0-5 ml)
Digital top pan balance and weighing boats
Boiling tube rack
Stirring rod
Thermometer
Digital stopwatch
Buchner funnel
Procedure of the traditional method of synthesising azo dye: First, 3 ml of concentrated HCl(aq) was poured into a boiling tube (Boiling Tube A) and cooled to 5°C in the ice bath. Then 1.4 g nitroaniline, 0.76 g NaNO2 and 3 ml water were added to another boiling tube (Boiling Tube B), which was heated in a water bath until a clear solution was obtained; a glass rod was used to stir the mixture for 5 minutes. In a third boiling tube (Boiling Tube C), a solution was made by dissolving 1.48 g of the naphthol derivative in 12.5 ml of 4 M NaOH. All three solutions were then mixed together in a boiling tube (Boiling Tube D) as shown in Figure 3.2a. The tube was placed in the ice bath for 20 minutes until a precipitate formed, and the precipitate was discarded. Then the 3 ml of concentrated HCl(aq) in Boiling Tube A was slowly added to the remaining liquid in Boiling Tube D, as shown in Figure 3.2b. The mixture was purified using NaCl(s) and vacuum filtration with a Buchner funnel. The experiment was repeated 9 times using the different forms of nitroaniline and naphthol derivatives.
Figure 3.2a (left): Mixture of three solutions Figure 3.2b (right): Azo dye sample after the coupling reaction
Modifications: Three modifications were made during the experimental process, as follows: Originally, test tubes and a round-bottom flask were used. However, their openings were too small; a larger opening was preferred for ease of inserting a thermometer or glass rod for stirring, so the test tubes were replaced with boiling tubes. The second problem was the warming of the ice bath because the apparatus was initially warm. Subsequently, the equipment was placed in an ice water bath 30 minutes prior to each experiment and dried with paper towel immediately before use. This ensured that the reaction was done in the coldest environment possible, to prevent decomposition of the diazonium ion. In the cold conditions, water from the atmosphere condensed around the boiling tubes, which could dilute the product. A drying agent, anhydrous calcium chloride, was considered and tested but to no effect; instead, cotton wool was used to absorb the moisture.
Apparatus used for the new methods of synthesising azo dye:
Four boiling tubes
Three 25 ml measuring cylinders
Digital pipettes (0-5 ml)
Digital top pan balance and weighing boats
Boiling tube rack
Thermometer
Digital stopwatch
Stirring rod
Cotton wool
Procedures for the new methods of synthesising azo dye: There are two different ways of mixing the solutions [9]: New Method 1: concentrated HCl(aq) and Fe(NO3)3(s) First, 0.8 mmol Fe(NO3)3(s) was dissolved in 2 ml of acetonitrile (ACN). Then 4 mmol nitroaniline was dissolved in 10 ml ACN. Both solutions were cooled in ice and then mixed together in Boiling Tube E. After that, a mixture of 9.2 ml of concentrated HCl(aq) and 10 ml of ACN was added dropwise over 20 minutes to Boiling Tube E, which was cooled and stirred in the ice bath as shown in Figure 3.5.2a.
After 2 minutes, a mixture consisting of 0.2 mmol naphthol derivative, 9 mmol NaOH and 10 ml ACN was prepared in Boiling Tube F. The contents of Boiling Tube F were then added to Boiling Tube E, and Boiling Tube E was shaken gently. New Method 2: concentrated HCl(aq) and concentrated HNO3(aq) First, 4 mmol of nitroaniline was dissolved in 10 ml of ACN and cooled in ice. Then a mixture of 12.4 ml of concentrated HCl(aq) in 10 ml of ACN was added dropwise over 20 minutes to
a boiling tube (Boiling Tube G). Then 0.4 ml of concentrated HNO3(aq) was added whilst stirring. After 2 minutes, a mixture consisting of 0.2 mmol naphthol derivative, 9 mmol sodium hydroxide and 10 ml ACN was prepared in Boiling Tube H. The contents of Boiling Tube H were then added to Boiling Tube G, and the tube was shaken gently, as shown in Figure 3.5.2b.
Figure 3.5.2a (left): Two mixtures prior to the coupling reaction - the mixture on the left contains the nitroaniline, whereas the one on the right is the mixture of acetonitrile and HCl. Figure 3.5.2b (right): Gas being evolved after the coupling reaction
These two methods were very similar. In the first method, Fe(NO3)3(s) was added to the ACN before the HCl, whereas in the second method, HNO3(aq) was added after the HCl. As the methods were not well described in the original study, ways of mixing the reagents were explored during the experimental process, as follows: Concentrated HCl(aq) was added to ACN rather than the other way around, to keep the concentrated HCl(aq) from reacting too vigorously. The original study described adding concentrated HCl(aq) into ACN dropwise at a constant rate. When a drop was added, there was slight effervescence, and the volume of the drop changed the amount of effervescence. Since each drop varied in size, the drops could not be added at a truly constant rate; instead, each drop was added once the effervescence from the previous drop had completely dissipated, which took between 0.5 and 1 second. The mixture of phenol, sodium hydroxide and ACN formed two layers, and had to be shaken so that the phenol would dissolve into the ACN.
Figure 3.5.2c: Boiling tubes containing azo dyes synthesised via the new methods. From left to right: B2, C1, B1, B3, C2, C3, as shown in Table 1. Addition of Ethanol When producing azo dye using the new methods, two layers were noted in Boiling Tubes F and H. The dark layer was the naphthol derivative in ACN, whereas the transparent layer was NaOH(aq). Ethanol is a known solvent for both NaOH(aq) and the naphthol derivative, so it should dissolve the two layers into one. In addition, ethanol should theoretically have no effect on the overall composition of the azo dyes, and thus should not alter the colour. Ethanol was therefore added to improve mixing: 1 ml of ethanol was added to Boiling Tubes F and H in the procedures above before mixing with Boiling Tubes E and G respectively. The difference in appearance before and after adding ethanol is depicted in Figures 3.6a and 3.6b.
Figure 3.6a (left): Mixture of 1-naphthol with NaOH and acetonitrile without ethanol Figure 3.6b (right): Mixture of 1-naphthol with NaOH and acetonitrile with ethanol
Experimental Research
Second stage of experiment - characterisation of azo dyes for comparison Colour Saturation Colour saturation was measured using a Mystrica colorimeter [13] (Figure 4.1a). A cuvette filled with water was first placed into the colorimeter for calibration, as water has a 100% transmission rate. Each azo dye was put into a transparent cuvette and placed into the colorimeter (Figure 4.1b). For every combination, three readings were taken and averaged. The lower the % transmission, the higher the colour saturation. Colour saturation was compared across combinations by averaging the colour saturation of each combination over all 5 methods, i.e. the traditional method, New Method 1, New Method 2, New Method 1 with ethanol, and New Method 2 with ethanol.
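The two-step averaging described above (three colorimeter readings per sample, then an average over the five methods for each combination) can be sketched as follows. All of the % transmission values here are hypothetical placeholders, not the study's data:

```python
# Hypothetical % transmission readings: three colorimeter readings per method
# for a single reagent combination (illustrative values only).
readings = {
    "traditional":          [5.7, 5.8, 5.9],
    "new_method_1":         [7.4, 7.5, 7.6],
    "new_method_2":         [7.5, 7.5, 7.6],
    "new_method_1_ethanol": [7.8, 7.9, 8.0],
    "new_method_2_ethanol": [8.9, 8.9, 9.0],
}

# Step 1: average the three readings taken for each method.
per_method = {m: sum(r) / len(r) for m, r in readings.items()}

# Step 2: average across all 5 methods for this combination.
combination_average = sum(per_method.values()) / len(per_method)

# Lower % transmission means higher colour saturation.
print(round(combination_average, 2))  # 7.53
```

The same two steps would be repeated for each of the nine reagent combinations before comparing them.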
Figure 4.1a (left): Mystrica colorimeter Figure 4.1b (right): Cuvette containing azo dye placed in the colorimeter Percentage Yield A standard measuring cylinder was used to measure the volume of dye produced. The percentage yield of azo dyes of each combination was compared across the 5 methods. Degradability The colour saturation of all the azo dyes produced was measured again 3 weeks after production. Degradability (%) was calculated as the difference between the percentage transmission after the 3-week period and the initial percentage transmission, divided by the initial percentage transmission. A positive value indicates a decrease in colour saturation. During these 3 weeks, the samples were kept in the same cupboard housing the reagents. The degradability of azo dyes of each combination was compared across the 5 methods.
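The degradability calculation defined above can be written as a short helper function; the transmission values in the example call are illustrative, not measured data:

```python
def degradability(initial_transmission: float, final_transmission: float) -> float:
    """Degradability (%) = (T_after_3_weeks - T_initial) / T_initial * 100.

    A positive value means % transmission rose, i.e. colour saturation fell.
    """
    return (final_transmission - initial_transmission) / initial_transmission * 100

# Hypothetical readings: 6.0% transmission initially, 6.9% after 3 weeks.
print(round(degradability(6.0, 6.9), 1))  # 15.0
```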
Results First stage - synthesising azo dyes Using the traditional method, combinations of 1-naphthol, 2-naphthol and salicylic acid with 2-nitroaniline and 4-nitroaniline resulted in the formation of an azo dye solution. However, combinations of 1-naphthol, 2-naphthol and salicylic acid with 3-nitroaniline resulted in precipitates instead of an azo dye solution (Figure 5 shows a failed preparation between 3-nitroaniline and 1-naphthol). The issue could not be solved, and these combinations were classified as unsuccessful. Azo dyes were successfully produced using New Methods 1 and 2, with or without ethanol, for all 9 combinations of naphthol derivative and nitroaniline. Thus, in the second stage, the comparisons exclude the combinations using 3-nitroaniline by the traditional method.
Figure 5 (top): Failed preparation of A2, due to the effects of isomerisation Second stage - comparison of characteristics of azo dyes produced by various methods Colour Saturation Chart 1 and Table 2 show the initial colour saturation of azo dyes produced by the various methods. The colour saturation of the azo dyes produced by the traditional method was the highest for every combination, averaging 5.8% transmission; this was statistically significant (p = 0.003). This was followed by New Methods 1 and 2, which averaged 7.51% and 7.52% respectively. The addition of ethanol in New Methods 1 and 2 resulted in even higher values of 7.89% and 8.92% respectively. Comparing the combinations, B2 had the highest transmission (lowest colour saturation), averaging 13.08%, followed by A2 at 9.73%, then C2 and B3 at 7.98% and 7.68% respectively, then B1, A3 and C3 at 7.03%, 6.90% and 6.85%. Finally, C1 and A1 had the lowest transmission (highest colour saturation), at 6.18% and 4.83% respectively.
The combinations of 3-nitroaniline with any naphthol derivative were associated with the highest transmission, while the combinations formed with 2-nitroaniline had the lowest. In other words, the compounds formed from 3-nitroaniline had the lightest colour, towards yellow, whereas the compounds formed from 2-nitroaniline had the darkest colour, towards red. The compounds formed from 4-nitroaniline were moderately coloured, from red to orange.
Chart 1. Initial colour saturation of different reagents and methods
Table 2. Percentage transmission of azo dyes measured by the colorimeter Percentage Yield Chart 2 and Table 3 show the percentage yield of azo dyes of various combinations produced by the different methods. Generally, the degradability for the traditional method and New Method 1 was very low, at 5.17% and 6.15% respectively. This result, however, excludes the combinations using 3-nitroaniline for the traditional method, as that method failed to produce an azo compound with 3-nitroaniline. New Method 2 had a degradability of 24.62%, which is significantly
higher than with the other methods. The addition of ethanol brought an 8.25% change in degradability for New Method 1, whereas it produced no change for New Method 2.
Chart 2. Percentage yield by different reagents and methods
Table 3. Percentage yield for each azo compound Degradability Table 4 shows the degradability of the azo dyes over a 3-week period. Generally, the degradability for all combinations of all the methods was relatively high, ranging from 5.15% for the traditional method to 24.6% for New Method 2. The degradability of azo dyes produced by New Method 1 was similar to that of the traditional method, at 6.15%, but the addition of ethanol increased it to 14.4%. On the other hand, the addition of ethanol to New Method 2 resulted in only a 0.1% change.
Table 4. Degradability of azo dyes
Discussion During the experiment, the synthesis of azo dyes using the traditional method was only partially successful, creating 6 of the 9 proposed combinations; the combinations using 3-nitroaniline were unsuccessful, likely because the reactions did not occur due to steric hindrance [14]. Using the new methods, however, all 9 combinations were created successfully. When synthesising azo dye compounds with the new methods, it was observed that the mixture containing the naphthol derivative, sodium hydroxide and acetonitrile did not mix well. A shaking step was therefore tried, although this had not been described in the original study. Ethanol was also added in an attempt to dissolve the two layers and improve the mixing, but the outcome was unsatisfactory; eventually, the shaking step was adopted for the rest of the experiment for New Methods 1 and 2. Comparing colour saturation, the traditional method produced azo dyes with the highest colour saturation, yet the new methods produced azo dyes of only slightly lower saturation. Interestingly, certain combinations of reagents produced azo dyes of better colour saturation than others: the combination of 1-naphthol and 2-nitroaniline (A1) gave the highest colour saturation, whereas 3-nitroaniline with any naphthol derivative gave the weakest. It seems that the colour saturation of azo dyes depends more on the combination of reagents than on the method of production. The addition of ethanol did not improve the mixing of the reagents, nor did it improve the colour saturation of the azo dyes. Ethanol does not change the structure of the azo dye compound; the slight change in colour saturation with its addition is mainly due to the change in the optical properties of the overall mixture.
Comparing the percentage yield of azo dyes produced using the same amounts of reagents but different methods, the traditional method gave a much lower yield than the new methods. This is because the traditional method generated a large amount of waste
products, whereas the new methods produced no waste at all. However, there was a large difference between New Methods 1 and 2, because New Method 2 required concentrated nitric acid, which evaporates quickly in air before the coupling reaction, resulting in a drop in % yield. It was also noted that the addition of ethanol caused only a minimal change in % yield. As with colour saturation, ethanol does not participate in the reaction and does not alter the chemical characteristics of the azo dye compounds, so it should not affect the yield. Concerning degradability, the colour change in the azo dyes was rapid in the first few days after formation and then gradually slowed. The traditional method was most advantageous, closely followed by New Method 1. However, New Method 2 had a much larger degradability of 24.6%, which makes it undesirable if the dye is to be used in industry. Furthermore, the addition of ethanol increased the degradability of New Method 1 by a significant amount, whereas it produced no change for New Method 2. Thus, of all the new methods investigated, only New Method 1 without the addition of ethanol is practical. Statistical analysis of the colour saturation and degradability results was performed; their significance was tested through general linear modelling and multivariate tests. This report uses a significance level of 0.05: if the p-value is larger than 0.05, the result is considered insignificant, whereas if it is less than 0.05, it is considered significant. For colour saturation, the suggested hypothesis was statistically significant, with a p-value of 0.003, whereas the results for degradability proved insignificant, with a p-value of 0.62.
However, this only implies that more results are needed to establish significance, rather than invalidating the results. The limitations of the study should also be addressed. For the new methods, the concentrated HCl(aq) and ACN mixture was to be added dropwise at a constant rate over 20 minutes to a mixture of Fe(NO3)3(s), ACN and nitroaniline. It is not easy to keep a constant rate by watching a stopwatch; instead, each drop was added once the effervescence from the previous drop had completely dissipated, which took between 0.5 and 1 second. As a result, the addition could not be timed to exactly 20 minutes, and it is not certain whether this had any effect on the azo dye produced. Additionally, it was discovered that the different wavelengths absorbed by the azo dyes caused some variation in the percentage transmission readings when using the colorimeter to measure colour saturation. Visual confirmation had to be carried out for some readings, which could be subject to observer bias. Furthermore, statistical analysis showed the results for degradability to be statistically insignificant; additional repeats would have to be carried out in the future to confirm the validity of the results.
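The significance testing above was done in SPSS with general linear modelling; as a rough stand-in for the idea, the sketch below computes a one-way ANOVA F statistic by hand and compares it to the tabulated critical value at the report's 0.05 level. The % transmission readings are hypothetical, not the study's data:

```python
def f_statistic(groups):
    """One-way ANOVA F statistic: ratio of between-group to within-group variance."""
    k = len(groups)                               # number of groups (methods)
    n = sum(len(g) for g in groups)               # total number of readings
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical % transmission readings grouped by method.
traditional  = [5.5, 5.9, 6.0]
new_method_1 = [7.3, 7.6, 7.6]
new_method_2 = [7.4, 7.5, 7.7]

F = f_statistic([traditional, new_method_1, new_method_2])
# The tabulated critical value of F(2, 6) at the 0.05 level is about 5.14;
# F above this value means the difference between methods is significant.
print(F > 5.14)  # True
```

In practice the full analysis would use the exact p-value from the F distribution, as the SPSS output does, rather than a table lookup.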
Conclusion The new method proposed by Velasco, Kinen, de Rossi and Rossi (2011) [9], using concentrated hydrochloric acid and ferric nitrate or nitric acid as solvents to create azo dyes, leaves no waste material, which means it has 100% atom economy. It also has excellent flexibility in combining all the tested reagents, specifically those involving 3-nitroaniline, which was not successful using the traditional approach. In addition, the degradability and colour saturation of dyes made by this method were only slightly worse than those of dyes made via the traditional method. Although the cost of this method is higher, its environmental advantages give it high potential to become the standard commercial approach to azo dye synthesis.
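The 100% atom-economy claim follows from the standard definition: the molar mass of the desired product divided by the total molar mass of all reactants. A minimal sketch, using illustrative placeholder masses rather than those of the actual reagents:

```python
def atom_economy(product_molar_mass: float, reactant_molar_masses: list) -> float:
    """Atom economy (%) = Mr(desired product) / sum of Mr(all reactants) * 100."""
    return product_molar_mass / sum(reactant_molar_masses) * 100

# If every atom of every reactant ends up in the product (no waste),
# the product's molar mass equals the reactants' total and atom economy is 100%.
print(atom_economy(300.0, [180.0, 120.0]))  # 100.0
```

A process that discards by-products, like the traditional route, has a product molar mass smaller than the reactant total and hence an atom economy below 100%.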
If the investigation is continued in the future, the synthesis of other colours will be attempted, including dianthrapyrimidine yellow, A-type delta indanthrone blue pigment and isoviolanthrone, given increased access to a larger variety of reagents and equipment. Other synthetic routes that are impractical in a school laboratory, such as using a magnetic solid as a catalyst or using supercritical carbon dioxide, may also be tested.
Acknowledgements The statistical analysis was performed using IBM SPSS Statistics 24 with the assistance of Mr Wilfred Wong. The author also extends his gratitude to Mr Geoffrey Li for providing all the necessary reagents, and many thanks go to the centre coordinator, Mrs Jo Morris, for providing the opportunity to complete this investigation. The author appreciates the detailed meetings and discussions with Mrs Lotje Smith, who acted as supervisor and examiner and provided guidance throughout the investigation.
References
1. Wageningen University (Ed.). (2017, July 2). Azo dyes. Retrieved January 13, 2018, from http://www.food-info.net/uk/colour/azo.htm
2. Ding, Y., & Freeman, H. S. (2014). Mordant dye application on cotton fabric. American Association of Textile Chemists and Colorists International Conference, AATCC 2014, 18-27.
3. Bamoniri, A., Mirjalili, B. B. F., & Moshtael-Arani, N. (2014). Environmentally green approach to synthesize azo dyes based on 1-naphthol using nano BF3·SiO2 under solvent-free conditions. Green Chemistry Letters and Reviews, 7(4), 393-403. doi:10.1080/17518253.2014.969786
4. Aksoy, T., Erdemir, S., Yildiz, H. B., & Yilmaz, M. (2012). Novel water-soluble calix[4,6]arene appended magnetic nanoparticles for the removal of the carcinogenic aromatic amines. Water, Air, & Soil Pollution, 223(7), 4129.
5. Ahmed, Z., Welch, C., & Mehl, G. H. (2015). The design and investigation of the self-assembly of dimers with two nematic phases. RSC Advances, 5(113), 93513-93521. doi:10.1039/C5RA18118F
6. Gup, R., Giziroglu, E., & Kırkan, B. (2007). Synthesis and spectroscopic properties of new azo-dyes and azo-metal complexes derived from barbituric acid and aminoquinoline. Dyes and Pigments, 73(1), 40-46. doi:10.1016/j.dyepig.2005.10.005
7. Teimouri, A., Chermahini, A. N., & Ghorbani, M. H. (2013). The green synthesis of new azo dyes derived from salicylic acid derivatives catalyzed via baker's yeast and solid acid catalysis. Chemija, 24(1), 59-66.
8. Cai, K., He, H., Chang, Y., & Xu, W. (2014). An efficient and green route to synthesize azo compounds through methyl nitrite. Green and Sustainable Chemistry, 4(3), 111-119. doi:10.4236/gsc.2014.43016
9. Velasco, M. I., Kinen, C. O., de Rossi, R. H., & Rossi, L. I. (2011). A green alternative to synthetize azo compounds. Dyes and Pigments, 90(3), 259-264. doi:10.1016/j.dyepig.2010.12.009
10. Chinese University of Hong Kong. (n.d.). Experiment 8: Synthesis of an azo dye the coupling ... Retrieved September 17, 2017, from https://www.cuhk.edu.hk/chem/doc/s6_resourcebk/en-s_expt_08.pdf
11. University of New Brunswick. (n.d.). The synthesis of azo dyes. Retrieved August 31, 2017, from https://www.unb.ca/fredericton/science/depts/chemistry/_resources/pdf/axodye.pdf
12. Swedish Chemicals Agency. (2015, November 30). Azo dyes. Retrieved November 21, 2017, from https://www.kemi.se/en/prio-start/chemicals-in-practical-use/substance-groups/azo-dyes
13. Mystrica. (n.d.). Colorimeter. Retrieved March 26, 2018, from http://www.mystrica.com/Colorimeter
14. Yilmaz Ozmen, E., Erdemir, S., Yilmaz, M., & Bahadir, M. (2007). Removal of carcinogenic direct azo dyes from aqueous solutions using calix[n]arene derivatives. Clean Soil Air Water, 35, 612-616. doi:10.1002/clen.200700033
15. O'Neill, C., Lopez, A., Esteves, S., et al. (2000). Applied Microbiology and Biotechnology, 53, 249. https://doi.org/10.1007/s00253005001