CO-CHAIRS: Maaira, Tia
CHIEF EDITOR & GRAPHIC DESIGNER: Lina
HEAD OF BIOLOGY: Avni
HEAD OF CHEMISTRY: Elizabeth
HEAD OF PHYSICS: Amélie
Biology
The Evolution of Cancer – Ghanage
The theories behind contralateral control – Nyneisha
The theory of consciousness – Hannah

Chemistry
The use of quantum friction as an explanation for the strange behaviour of flowing water in carbon nanotubes – Masha
Development of the atomic theory: orbits to orbitals – Lydia
MDMA Therapy – Liyah
Chemistry Discovery Could Remove Micropollutants from Environment – Myuri

Physics
Muons – the possible key to revealing undiscovered forces and particles – Tanya
How was our solar system formed? – Ishani
Quantum Electrodynamics – Anantaa
Schrödinger’s Cat – the upgrade? – Kat
Theoretics in Action: Geometry of the Universe’s Hidden Dimensions and String Theory – Trina
Ghanage
Evolution, one of Darwin’s most well-known theories, lies behind the key explanation of how cancer develops and how it becomes resistant to drugs. The process of evolution has four main components: variation in the population of a species (as a result of random genetic mutations), competition for resources, selection (due to environmental pressures), and adaptation to the selective pressure. This can be applied directly to the evolution of cancer. The ongoing acquisition of mutations produces variation among cancerous cells. These cells then compete for the nutrients and oxygen they need to grow. Selection kills some of the cancer cells but allows others to thrive: for example, some cells rely less on oxygen for metabolism, whereas others can evade the immune system. Eventually, the surviving cells adapt to push the immune cells away, allowing them to survive.
A diagram illustrating the evolution of cancer as time progresses (see above)
Advances in computational science mean that AI (Artificial Intelligence) can be used to model a tumour’s history and help predict its future course, known as its ‘evolutionary trajectory’. Applying the theory of evolution to cancer can allow doctors to tailor a specific plan for each patient according to the type of tumour. Analysing cancer genomes can help us understand the dynamics of cancers: which cancers grow faster, the events that cause them to grow faster, whether subclones are growing faster than their parental clones, what happens pre- and post-treatment, and more.
Cancers can have either exponential growth or logistic growth (where they reach a particular size and subsequently stop growing). The genomic features are different for each, allowing us to predict which tumours will keep growing and which will stop after a certain point. Mathematical models are also being developed in which mutational patterns are input, allowing us to predict the probability that the cancer will evolve. A small biopsy taken during a colonoscopy, alongside genomics, can help spot which cells are about to become cancerous and so help prevent cancer from developing early on. It is usually impossible to kill all cancer cells with one drug, because cells in different parts of the tumour carry different variations in the genetic code. This can leave some cells behind after treatment, which can grow back and become resistant. Evolution is apparent here too.
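For reference, the two growth patterns mentioned above can be written out with the standard growth laws (these formulas are textbook results, not taken from the article; N_0 is the starting number of cells, r the growth rate, and K the limiting size):

N(t) = N_0 \, e^{rt} \quad \text{(exponential, grows without limit)}, \qquad N(t) = \frac{K}{1 + \frac{K - N_0}{N_0} e^{-rt}} \quad \text{(logistic, levels off at } K\text{)}.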
The development of treatment resistance of cancerous cells
New treatments are being developed to help combat these problems. Professor Graham of the Institute of Cancer Research is investigating a technique in which, instead of trying to wipe out all the cancer cells at once, treatment is added or removed depending on the different populations of cancer cells in the tumour. Titrating drugs according to the sensitivity of cells is gaining traction, because the relationship between the different cell types in a tumour can be used to our advantage. Resistant cells are also at a disadvantage: they carry pumps on their surface that help them expel the drug, but running these pumps uses up energy. When the drug is present, the resistant cells can expand; when the drug is absent, the drug-sensitive cells can repopulate the tumour. Researchers are now trying to track the populations of these cell types with a blood test that measures how much of the tumour’s DNA is shed into the blood, adapting treatment to strike a balance. An ongoing trial is testing the effect of changing drug doses in patients with ovarian cancer.

In conclusion, the theory of evolution has greatly shaped how we view cancer today, from the way it develops to the treatments we can provide to remove cancer cells. As modelling and genome analysis develop, scientists can gain greater insight into the complex structure of tumours and find ways to stop cancers growing from an early stage.
References: Science Talk - Applying Charles Darwin’s theories to expose cancer’s secrets - The Institute of Cancer Research, London (icr.ac.uk) Darwin's Theory Of Evolution (darwins-theory-of-evolution.com) Evolution & mathematical modelling of cancer genomes - Bing video
Nyneisha
Brain lateralization, wherein specific tasks are executed by distinct regions within the brain, is a common feature of vertebrates and has more recently been found in invertebrates. There are two different types of control: ipsilateral and contralateral. Ipsilateral control refers to the idea that the left and right sides of the brain each control the side of the body on which they are located. The contralateral theory, on the other hand, suggests that through the crossing over of human motor and sensory fibers, each side of the brain controls the opposite side of the body, as well as receiving input from the opposite eye. [1]

As humans have evolved, contralateral control has become much more common in our bodies, even though it may seem to pose a clear evolutionary disadvantage. If one side of the body experiences damage, ipsilateral control would allow the other side of the body to operate as normal. Under contralateral control, however, brain damage on one side and damage to the body on the other would leave very little remaining function. [1]

Over the years there have been numerous attempts to theorise how and why our brains use contralateral control, the earliest being the ‘Visual Map Theory’ of Ramón y Cajal. He suggested that within the mammalian visual cortex there is a partial decussation (crossing) of the optic nerves which allows ‘continuity of image’. This theory has been refuted by several researchers, most notably in the case of a person who had only ipsilateral visual control, whose vision was no different from that of someone with contralateral control. Despite this, there is no solid evidence to completely discard the theory, though it has been replaced by more widely accepted ones. [2]

Figure 1: An image demonstrating the twisting within the optic tract, as drawn by Cajal in 1898 [4]
The somatic twist theory, proposed by Marcel Kinsbourne, suggests that decussations arose as a result of switching from a ventral (front of the body) nerve cord, a tube of nervous tissue, to a dorsal (back of the body) nerve cord. This process is thought to have taken place in the most primitive vertebrates during their evolution from invertebrates. Supporting evidence has been found in the form of similar signaling molecules, which oversee dorsal-to-ventral development of the nervous system, in both vertebrates and invertebrates. [3]

The last and most accepted theory is the axial twist hypothesis, in which the embryo turns towards the left during early embryonic development, resulting in a loss of bilateral symmetry. In order to correct this, the rostral region of the head makes a 90 degree turn anti-clockwise, whilst the caudal regions turn 90 degrees clockwise, producing the decussations during further development. [4]

Of the three theories, the axial twist is currently the most widely recognized, as a great deal of supporting evidence has been collected during embryonic growth and development. This understanding is subject to change as scientific methods and technologies improve, allowing us to produce new theories to explain evolutionary concepts.
References: [1] https://www.researchgate.net/publication/255732556_The_evolution_of_contralateral_control_of_the_body_by_the_brain_Is_it_a_protective_mechanism [2] https://thejns.org/focus/view/journals/neurosurg-focus/47/3/article-pE10.xml?tab_body=fulltext#b21 [3] https://neuroscience.stanford.edu/news/ask-neuroscientist-why-does-nervous-systemdecussate#:~:text=It%20is%20called%20the%20%E2%80%9Csomatic,back%2Dside)%20nerve%20cord.&text=Decussations%20are%20unique%20to%20vertebrates. [4] https://www.researchgate.net/publication/224805676_An_ancestral_axial_twist_explains_the_contralateral_forebrain_and_the_optic_chiasm_in_vertebrates
Hannah
Consciousness, in the Oxford Dictionary, is defined as “the state of being aware of and responsive to one’s surroundings.” It remains one of the most disputed scientific and philosophical controversies. Determining whether someone is ‘in’ or ‘out’ of consciousness is highly subjective and therefore difficult to prove.
A case of a 23-year-old woman who had sustained a severe brain injury sparked new scientific breakthroughs in the study of consciousness. The woman was in a non-responsive state, meaning she could not respond to commands or move her body; however, she was able to open her eyes and could go to sleep and wake up. Neuroscientists at the University of Cambridge observed the patient using fMRI, an imaging technique that detects changes in blood flow to measure brain activity. After the woman was given a verbal command, such as to imagine herself playing tennis, activity was observed in the supplementary motor area of her brain, showing that despite being unresponsive she was having a conscious experience.

In a 2017 sleep study, researchers found that people who reported no conscious experience during their sleep had high levels of low-frequency activity in the posterior cortical region of their brains just before being woken, whereas people who had been dreaming had low levels of low-frequency activity and higher levels of high-frequency activity. This allowed researchers to focus on the posterior cortical region of the brain to detect consciousness during sleep, and even the contents of dreams.

However, this does not fully explain the mechanisms behind being in a state of consciousness. The ‘integrated information theory’ of Giulio Tononi could help provide an explanation of what consciousness is. Tononi states that a conscious experience is structured, meaning one can distinguish the relative distances of objects from each other, for example when looking at the space around you. A conscious experience is also differentiated, meaning there are an infinite number of possible experiences that could occur in a particular circumstance, and integrated, whereby all the information from the senses is combined into a single sense of consciousness. According to his theory, the more information processed to contribute to one experience, the higher the level of consciousness. For example, memories are a combination of all the senses to form a narrative that is part of the conscious experience.

Tononi also provides backing for his theory using the example of people with different forms of brain damage. The cerebellum is the region of the brain which functions to maintain balance and posture. It contains four times as many neurons as the cortex (often associated with consciousness, thought and reasoning), yet some people who lack a cerebellum are still capable of having normal conscious experiences, showing that it is not one single part of the brain that forms consciousness. Although his theory does not yet have enough evidence to be established as the answer to the puzzle that is consciousness, it provides many of the pieces required for other scientists to build upon.

One specific issue with testing this theory is that the information integration of the brain needs to be measured, and as the number of nodes being measured increases, the time taken to measure them increases exponentially. Daniel Toker, a neuroscientist at the University of California, Berkeley, has proposed a shortcut calculation that could bring this time down to a matter of minutes. If the integrated information theory is proven correct, it would have ground-breaking implications for many aspects of life as we know it. If proof of consciousness were provided by this theory, it could even allow investigation into whether artificial intelligence could gain consciousness. Tononi states: “if integrated information theory is correct, computers could behave exactly like you and me … Again, it comes down to that question of whether intelligent behaviour has to arise from consciousness.”
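As a toy illustration of why the brute-force approach scales so badly (this sketch is mine, not Toker’s method, and assumes integration is assessed over simple two-way splits of the network), a few lines of Python show how quickly the number of bipartitions to check explodes:

def bipartition_count(n: int) -> int:
    # Number of distinct ways to split n nodes into two non-empty groups.
    return 2 ** (n - 1) - 1

for n in (4, 8, 16, 32, 64):
    print(f"{n:>3} nodes -> {bipartition_count(n):,} bipartitions to check")

Even at 64 nodes the count is already astronomically large, which is why a shortcut calculation matters.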
References: How FMRI works. 2022. How FMRI works. [online] Available at: <https://www.open.edu/openlearn/body-mind/health/healthsciences/how-fmri-works> [Accessed 20 February 2022]. Dr. Sanchari Sinha Dutta, P., 2022. Detecting Hidden Consciousness with EEG. [online] News-Medical.net. Available at: <https://www.news-medical.net/health/Detecting-Hidden-Consciousness-with-EEG.aspx> [Accessed 20 February 2022]. Nature.com. 2022. Decoding the neuroscience of consciousness. [online] Available at: <https://www.nature.com/articles/d41586-019-02207-1> [Accessed 20 February 2022]. BBC. (n.d.). Are we close to solving the puzzle of consciousness? BBC Future. Retrieved February 20, 2022, from https://www.bbc.com/future/article/20190326-are-we-close-to-solving-the-puzzle-of-consciousness
Masha
Introduction
Quantum friction is a theory that can be used to explain certain peculiarities in the flow of water. It is based on the idea that water molecules interact with certain surfaces on a quantum level, giving rise to a resistive force. (3) It is a relatively new discovery that could be used to improve water desalination processes and other applications.
The discovery
In the macroscopic world, the prediction of water flow, whilst complicated to understand fully, appears to follow general common sense. The more irregular a surface, and the more surface the water is in contact with, the less smoothly it flows. In larger pipes, there are fewer areas of contact between the water and the pipe relative to the volume of water flowing, and the friction the water experiences is reduced. (1) However, in an experiment conducted around two decades ago, which looked at the flow of water through membranes made of carbon nanotubes, an interesting result was found. Water flowed more quickly through membranes composed of narrower nanotubes and seemed to experience less friction. (1,2) Carbon nanotubes are cylindrical molecules made of a single layer of graphene, a two-dimensional allotrope of carbon in which each carbon atom is bonded to three others. (4)
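To make the classical expectation above concrete (this small calculation is my own, not from the cited articles): for a cylindrical pipe of radius r and length L, the wall area in contact with the water is 2\pi r L while the volume of water is \pi r^2 L, so

\frac{A_{\text{contact}}}{V_{\text{water}}} = \frac{2\pi r L}{\pi r^2 L} = \frac{2}{r},

which grows as the tube narrows. Classically, then, narrower tubes should mean relatively more friction, which is exactly what the nanotube experiments failed to show.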
Figure 1 graphic of a carbon nanotube (4)
Since they are composed of a single uniform layer of carbon atoms, one would expect that they would provide little hindrance to the flow of water. It was therefore hard to explain the phenomenon observed. However, a new study (5) published in February this year was able to use the idea of quantum friction to shed light on this long-unexplained matter. Instead of only looking at the interactions between the water molecules and carbon atoms, they modelled the effects of subatomic interactions between their electrons. In this way they were able to consider the nanotube walls as a fluctuating pool of electrons which could ripple when disturbed, instead of as a smooth and inert plane. Since the charge on water molecules is not evenly spread, the electrons in the nanotube move as a group in response to the water’s flow, producing a sort of quantum texture and hindering its movement. (2)
Figure 2 movement of electrons with flowing water (5)
The obstruction of flow is reduced in narrower nanotubes because there are fewer electrons to contribute to this effect. The primary difficulty in making this discovery was the exact mathematical modelling of these effects, as detailed equations had to be derived which use mathematics different from that of ordinary fluid dynamics. (2)
Implications of the discovery
Whilst this discovery is very much in the field of theoretical chemistry and physics, it should not be reduced to a mere explanation for a quirk in the scientific field and could certainly have real-world applications. The research will probably be expanded to consider other liquids and nanotube materials and could be used to construct surfaces designed to control how much friction liquids experience. It could also have applications in water desalination, as water experiences friction in its movement through filters, which could be reduced by using nanotube membranes instead. (2)

References:
1. Chen Ly (2022). Quantum friction explains strange way water flows through nanotubes. [online] New Scientist. Available at: https://www.newscientist.com/article/2306900-quantum-friction-explains-strange-way-water-flows-throughnanotubes/ [Accessed 5 Mar. 2022].
2. Padavic-Callaghan, K. (2022). Quantum Friction Explains Water’s Freaky Flow. [online] Scientific American. Available at: https://www.scientificamerican.com/article/quantum-friction-explains-waters-freaky-flow/ [Accessed 5 Mar. 2022].
3. Silveirinha, M.G. (2014). Theory of quantum friction. New Journal of Physics, 16(6), p.063011.
4. Ren, G. (n.d.). Carbon nanotube | Properties & Uses. [online] Encyclopedia Britannica. Available at: https://www.britannica.com/science/carbon-nanotube [Accessed 5 Mar. 2022].
5. Kavokine, N., Bocquet, M.-L. and Bocquet, L. (2022). Fluctuation-induced quantum friction in nanoscale water flows. Nature, [online] 602(7895), pp.84–90. Available at: https://www.nature.com/articles/s41586-021-04284-7 [Accessed 5 Mar. 2022].
Lydia
Rutherford
In Rutherford’s nuclear model, the atom contains mostly empty space with a tiny, positively charged nucleus in the centre and electrons orbiting the nucleus some distance away, similar to planets orbiting the Sun in the solar system (1). Bohr realised Rutherford’s model was not possible because, as the electrons revolve around the nucleus, they emit electromagnetic radiation and would therefore lose energy and eventually spiral into the nucleus. As a result, the atom would be very unstable (2).
Bohr’s atomic model
To fix the Rutherford model’s stability issue, in 1913 Niels Bohr proposed a new model based on Rutherford’s, with some modifications (1). In this new model, electrons revolve around the nucleus in circular orbits of fixed size and energy, at fixed distances from the nucleus. In these orbits, electrons do not release energy as they revolve but instead exist in states of constant energy (2); the further an orbit is from the nucleus, the higher its energy (2). Bohr suggested that electrons can jump from orbit to orbit when energy is absorbed or released in fixed quanta (1): when an electron jumps to an orbit closer to the nucleus, the energy emitted must equal the difference in energy between the two orbits, and similarly, when an electron jumps to an orbit further from the nucleus, the energy it absorbs must equal the difference in energy between the two orbits (1). Electrons can never exist in the spaces between orbits (2). Electrons occupying the orbit closest to the nucleus cannot lose any more energy, so their orbit, and the atom, is stable (1).
Bohr developed his theory of the atom with the help of Planck’s constant (1), which appears in Planck’s law describing the intensity of blackbody radiation as a function of its frequency. Using Planck’s work, Bohr was able to obtain a formula for hydrogen’s energy levels (1), which led him to his conclusion. Unfortunately, Bohr’s theory of the atom only applies to hydrogen atoms, ionised helium and doubly ionised lithium (all three have only one electron orbiting the nucleus) and does not work for atoms with more than one electron (3). Bohr’s model ignores the intense electrostatic repulsion between electrons; when multiple electrons are clustered together, they would send each other flying, as like charges repel (3). Furthermore, his model incorrectly assumes that electrons behave like solid particles (4). In 1924, Louis de Broglie proposed that all matter and radiation possess wave-particle duality (displaying both particle-like and wave-like behaviour) (1). De Broglie’s hypothesis was later confirmed for electrons in the famous Davisson-Germer experiment (5), and this was the end of Bohr’s model.
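For reference (the article does not quote it), the standard result of Bohr’s model for the hydrogen atom is the set of energy levels

E_n = -\frac{13.6~\text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots

and the photon emitted or absorbed in a jump between orbits carries exactly the energy difference, \Delta E = E_{n_2} - E_{n_1} = hf, where h is Planck’s constant and f is the photon’s frequency.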
Improving Bohr’s theory
In 1926, Erwin Schrödinger developed a new model of the atom. Influenced by de Broglie’s work, Schrödinger thought of electrons as waves (9). He substituted the properties (e.g. mass and charge) of hydrogen into a wave equation, the Schrödinger equation. When he solved the equation, he found the shapes of the hydrogen atom’s orbitals (6):
[Image: shapes of the hydrogen atom’s orbitals (7)]
Each shape represents the electron in hydrogen in a different orbital (6) (the name ‘orbital’ replaced ‘orbit’ because these shapes demonstrated that electrons do not revolve around the nucleus in circular orbits). In each shape, the glowing blobs make up one orbital and house one electron (6); an orbital indicates where the electron is likely to be 90% of the time (it is important to note that orbitals can contain up to two electrons, where these particles have opposite spin) (8). Schrödinger’s model is in accordance with Heisenberg’s uncertainty principle because, in this model, the precise location of the electron cannot be determined if its velocity and direction are known (9). Different regions of the orbitals in the image have different colours; the colour of a region corresponds to the likelihood of finding the electron there (6). Scientists who came after Schrödinger used his hydrogen model as a basis and were able to work out the orbital shapes of other atoms (6).
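The wave equation itself is not written out in the article; in its time-independent form, the one usually quoted for atomic orbitals, it reads

-\frac{\hbar^2}{2m}\nabla^2\psi + V(\mathbf{r})\,\psi = E\,\psi,

where \psi is the electron’s wave function, V is the potential energy (for hydrogen, the electrostatic attraction of the nucleus), E is the energy of the state, and |\psi|^2 gives the probability of finding the electron at each point, which is what the coloured orbital pictures represent.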
References:
(1) https://www.britannica.com/science/atom/Development-of-atomic-theory 28/2/22
(2) https://chem.libretexts.org/Bookshelves/Introductory_Chemistry/Introductory_Chemistry_(CK12)/05%3A_Electrons_in_Atoms/5.06%3A_Bohr%27s_Atomic_Model 24/2/22
(3) https://www.scienceabc.com/pure-sciences/what-is-bohrs-atomic-theory.html 24/2/22
(4) https://kaiserscience.wordpress.com/physics/modern-physics/schrodinger-model-of-the-atom/ 27/2/22
(5) https://en.wikipedia.org/wiki/Wave–particle_duality 27/2/22
(6) https://youtu.be/BMIvWz-7GmU 28/2/22
(7) https://en.wikipedia.org/wiki/Atomic_orbital 28/2/22
(8) https://www.khanacademy.org/science/physics/quantum-physics/quantum-numbers-and-orbitals/a/the-quantummechanical-model-of-the-atom 28/2/22
(9) https://www.chemicals.co.uk/blog/a-level-chemistry-revision-physical-chemistry-atomic-structure 28/2/22
Liyah
Introduction:
MDMA (3,4-methylenedioxymethamphetamine), commonly known as Ecstasy or Molly, is being used in medical trials to investigate its potential for treating PTSD and depression. This could be extremely useful, since there are few treatments available for these patients. However, there is debate over whether the disadvantages of neurotoxicity and the risk of misuse outweigh the benefits of MDMA as a viable treatment option for PTSD (David Newcombe, 2018). “In the 2014 Adult Psychiatric Morbidity Survey, 3.7% of men and 5.1% of women screened positive for PTSD” in the UK (Baker, 2021). Around 15 million adults have PTSD during a given year, so a viable treatment is crucial. (Ben Sessa, 2019)

Research into using MDMA as a form of treatment was started in the late 1960s by Leo Zeff, after LSD (lysergic acid diethylamide) had been banned and psychedelic therapists were attempting to find another drug to enhance psychotherapy. Psychotherapy, also known as talk therapy, can help with a wide range of mental illnesses by eliminating or controlling symptoms and improving general wellbeing. Using MDMA gives the patient a greater feeling of empathy, which allows the therapist and patient to recall and process traumatic memories more easily. MDMA produced a calmer state in the patient than LSD, making it better suited to psychotherapy. (Ben Sessa, 2019)
What is MDMA and how does it work?
MDMA is manufactured from safrole, isosafrole, piperonal and 3,4-methylenedioxyphenyl-2-propanone (PMK); safrole is the starting material from which the others can be synthesised. Merck’s 1914 MDMA patent synthesised MDMA by reacting safrole with hydrobromic acid to form bromosafrole, which was then converted to MDMA using methylamine. Usually either the “Leuckart route” is used (which converts aldehydes or ketones to amines by reductive amination with heat) or reductive amination (a process in which an oxygen atom is removed and an amino group added) is carried out from PMK, to form racemic MDMA.

MDMA belongs to the group of ring-substituted phenethylamines and its molecular formula is C₁₁H₁₅NO₂. It has a similar structure to methamphetamine and the hallucinogen mescaline because of its phenethylamine backbone. Normally, phenethylamines without ring substitutions act as stimulants, but ring substitution can change the pharmacological properties. The drug increases the release of three neurotransmitters (chemical messengers in the brain), the ‘feel good’ hormones serotonin, norepinephrine and dopamine, or blocks their reabsorption; either way, more ‘feel good’ hormones are present in the brain at the same time. As a result, the drug creates a sense of empathy, self-awareness, sensory pleasure, more energy, less anxiety, the ability to openly discuss your feelings, and differences in how you perceive space and time (Smith, 2021). Unfortunately, when large amounts of serotonin are released, the brain is left with a decrease in other important neurotransmitters, which can cause the negative after-effects of taking MDMA. (NIDA, 2017)
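As a quick check on the molecular formula quoted above, a short Python sketch (my addition, not part of the original article) computes MDMA’s molar mass from standard atomic weights:

# Molar mass of MDMA from its molecular formula C11H15NO2.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol
FORMULA = {"C": 11, "H": 15, "N": 1, "O": 2}

molar_mass = sum(ATOMIC_MASS[el] * count for el, count in FORMULA.items())
print(f"MDMA molar mass ~ {molar_mass:.2f} g/mol")  # roughly 193.25 g/mol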
MDMA-assisted therapy seems promising because it helps eliminate most of the crucial symptoms of the condition, such as overwhelming anxiety. Moreover, if it helps a person face traumatic memories, communicate exactly what they are feeling and get support for their trauma, this could be a breakthrough for treating these conditions. Current results from trials show that PTSD symptoms can be controlled or reduced after 2 to 3 sessions, and the benefits of the treatment appear to be long-term: one study found that 67% of participants no longer met the criteria for PTSD a year after they finished MDMA therapy. (Smith, 2021)
Advantages and disadvantages of using MDMA therapy:
The short-term side effects of taking MDMA include nausea, muscle cramping, involuntary teeth clenching, blurred vision, chills and sweating/hyperthermia. One long-term side effect is liver damage, although this arises when the drug is used continually, since it is metabolised by the liver. Another longer-term effect is neurotoxicity. Neurotoxicity is when exposure to toxic substances changes the normal activity of the nervous system, which can eventually cause neurons to be damaged or even die. This is harmful because neurons are essential for transmitting and processing signals in the brain and the rest of the nervous system (NIH, 2019). MDMA-induced neurotoxicity can be seen in anatomical changes to neuron axon structure and a consistent reduction in brain serotonin levels. There are worries that damage to brain serotonin levels could be quite harmful, as serotonin helps with sleep, appetite, memory and mood regulation. However, studies also show that long-term depression and untreated PTSD can themselves cause brain damage by reducing grey matter volume in the brain and in the hippocampus (which is important for learning and memory) (Wiginton, 2020) (Thatcher, 2019). Toxicity depends on multiple factors, such as the specific susceptibility of the patient as well as the circumstances in which the MDMA is used.

A questionnaire sent a year after the final MDMA session showed a large increase in patients’ wellbeing, demonstrating its advantages: 84% reported improved feelings of wellbeing, 71% had fewer nightmares, 69% had less anxiety and 66% had improved sleep. (C. Michael White, 2021)
Conclusion:
In conclusion, although MDMA therapy seems to be useful in managing the symptoms of PTSD as well as treating the cause of the stress, there are some disadvantages, such as liver damage and neurotoxicity, which could make it more harmful to the patient.
Structure of MDMA
References: Baker, C. (2021). Mental health statistics (England). House of Commons library Ben Sessa, L. H. (2019, March 20). A Review of 3,4-methylenedioxymethamphetamine (MDMA)-Assisted Psychotherapy. Retrieved March 2, 2022, from https://doi.org/10.3389/fpsyt.2019.00138 C. Michael White, A. V. (2021, December 16). Latest trials confirm the benefits of MDMA – the drug in ecstasy – for treating PTSD. Retrieved March 2, 2022, from https://theconversation.com/latest-trials-confirm-the-benefits-of-mdma-the-drug-inecstasy-for-treating-ptsd-173070 David Newcombe, S. S. (2018). Methylenedioxymethamphetamine (MDMA) in Psychiatry: Pros, Cons, and Suggestions. J Clin Psychopharmacol , 38(6), 632-638. NIDA. (2017, September). National Institute on Drug Abuse. Retrieved March 2, 2022, from https://nida.nih.gov/publications/research-reports/mdma-ecstasy-abuse/what-are-mdmas-effects-on-brain NIH. (2019). National Institute of Neurological Disorders and Stroke. Retrieved March 2, 2022, from https://www.ninds.nih.gov/Disorders/All-Disorders/Neurotoxicity-InformationPage#:~:text=Definition,parts%20of%20the%20nervous%20system Smith, M. W. (2021, May 13). WebMD. Retrieved March 2, 2022, from https://www.webmd.com/mental-health/what-is-mdmaassisted-therapy-ptsd Thatcher, T. (2019, February 4). Highland Springs Specialty Clinic. Retrieved March 2, 2022, from https://highlandspringsclinic.org/blog/can-emotional-trauma-cause-braindamage/#:~:text=According%20to%20recent%20studies%2C%20Emotional,emotional%20trauma%20upon%20the%20brain Wiginton, K. (2020, July 28). WebMD. Retrieved March 2, 2022, from https://www.webmd.com/depression/depressionphysical-effects-brain
Myuri
The rapid rise of environmentalism has increased much-needed awareness and recognition of the importance of protecting water resources. Across the globe, freshwater scarcity (defined as the lack of sufficient available water resources to meet the demands of water usage within a region) is evident (1); on account of this, we must collectively and actively turn to more sustainable methods of water use, including reusing processed water following treatment, which reduces environmental impacts whilst also proving cost effective.

Micropollutants are biological or chemical contaminants, consisting primarily of anthropogenic substances such as pesticides, antibiotics and industrial chemicals, alongside natural substances, which make their way into ground and surface waters in trace quantities (2). Because these contaminants can quickly bioaccumulate and have far-reaching effects depending on their chemical properties, they raise considerable toxicological concerns, with health, economic and environmental impacts that become an issue of national and global concern. New research has identified an innovative chemical approach to combat this problem: the use of nanoparticles. This technique could practicably revolutionise strategies for removing micropollutants from the environment. (4)

Micropollutants can be combated with molecules and with nanoparticles; new evidence has come from studying the interactions of ligands (ions or neutral molecules that bond to a central metal atom or ion) with the surfaces of nanoparticles (3). The surface structure, facets and size of a nanoparticle are intrinsically tied to the particle’s potential applications (4). A larger particle has more interior space than a smaller one and therefore contains more atoms, whilst smaller particles have a greater surface area to volume ratio, leaving more atoms sitting at the surface where they can be used in processes such as adsorption. For that reason, a particle’s shape is directly related to the types of structures that atoms and molecules can form on its surfaces (8). In nanoscale materials, the adsorption of molecules on the surface also protects the surface and makes it more stable, which is especially useful for controlling how nanoscale particles grow into their eventual shape. However, studying the interactions of ligands with nanoparticle surfaces has proved particularly challenging for researchers, because nanoparticle surfaces are uneven and multifaceted and other molecules are present, so an incredibly high spatial resolution is required to scrutinise them (4).
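To put a number on the surface-area-to-volume argument above (an illustrative sketch of my own, treating the particles as simple spheres), a few lines of Python show how shrinking a particle increases the fraction of material exposed at its surface:

# Surface-area-to-volume ratio of a spherical particle: 3 / radius.
import math

def surface_to_volume(radius_nm: float) -> float:
    # Surface area divided by volume for a sphere, in 1/nm.
    area = 4 * math.pi * radius_nm ** 2
    volume = (4 / 3) * math.pi * radius_nm ** 3
    return area / volume  # simplifies to 3 / radius_nm

for r in (100, 50, 10, 5):
    print(f"radius {r:>3} nm -> SA:V = {surface_to_volume(r):.2f} per nm")

Halving the radius doubles the ratio, which is why nanoscale particles are such effective adsorbents.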
An example: this diagram is an industrial representation of using nanoparticles to remove micropollutants by adsorption. Here, the micropollutants pyrene and benzo(a)pyrene are adsorbed by green-synthesised iron oxide nanoparticles; in this way, the micropollutants can be removed from contaminated water.
Through the application of a pioneering imaging technique to obtain high-resolution images of such interactions, new understanding has been gained of the affinity of ligand adsorption, as well as of how multiple ligands cooperate with one another. Altering the concentration of an individual ligand can change the shape of the particle it is attached to, potentially creating a strategy for removing micropollutants (by changing the shapes of nanoparticles so that they become complementary to the target micropollutant). Moreover, it could be used in a plethora of everyday applications, such as developing chemical sensors that are sensitive at extremely low levels to a specific chemical in the environment, enabling scientists to target particular micropollutants (5).

The method, called COMPetition Enabled Imaging Technique with Super-Resolution (COMPEITS), achieves nanometre resolution, allowing researchers to explore the nooks and crannies of the multiple surface facets and to quantify the affinity (or strength) of a ligand’s adsorption (5). On this subject, Professor Peng Chen said: “For us, this has opened more possibilities. For example, one way to remove micropollutants, such as pesticides, from the environment is to adsorb micro-portions on the surface of some adsorbent particle. After it is adsorbed on the surface of the particle, if the particle is a catalyst, it can catalyse the destruction of the micropollutants.” (6)

In summary, micropollutants need to be removed during the wastewater treatment process because of the risk they pose to human health and wildlife. One option currently being developed to remove micropollutants from wastewater is the use of nanoparticles, and in particular the use of ligands in conjunction with nanoparticles: changing the concentration of a particular ligand influences the shape of the nanoparticle it is bound to, and thus that particle’s ability to adsorb micropollutants. In the words of Dr James Parker, programme manager of the US Army Research Laboratory, “Professor Peng Chen’s work [regarding adsorption of ligands and nanoparticles] allows for deep insights into molecular adsorption processes, which is important to understand for designing molecular sensors, catalysts, and schemes to clean up micropollutants in the environment” (8).
References: (1)https://www.sciencedaily.com/terms/water_scarcity.htm#:~:text=Water%20scarcity%20is%20the%20lack,month%20out%20of %20every%20year. (2)https://www.frontiersin.org/research-topics/20684/micropollutants-in-the-aquaticenvironment#:~:text=Micropollutants%20(MPs)%20consist%20of%20natural,their%20potentially%20damaging%20environmental %20impacts. (3)https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/24%3A_Comple x_Ions_and_Coordination_Compounds/24.02%3A_Ligands#:~:text=Ligands%20are%20ions%20or%20neutral,bonds%20with%20t he%20central%20atom (4)https://www.sciencedaily.com/releases/2021/07/210714110538.htm (5)https://www.innovationnewsnetwork.com/removing-micropollutants-from-the-environment-with-chemistrydiscovery/13260/ (6)https://www.indianaenvironmentalreporter.org/posts/new-chemistry-finding-could-remove-micropollutants-fromenvironment (7)https://arviatechnology.com/micropollutant-removal/ (8)https://www.eurekalert.org/news-releases/466544 (All accessed March 2022) (9) Hassan, Saad S M, et al. “Removal of Pyrene and Benzo(A)Pyrene Micropollutant from Water via Adsorption by Green Synthesized Iron Oxide Nanoparticles.” Advances in Natural Sciences: Nanoscience and Nanotechnology, vol. 9, no. 1, 29 Jan. 2018, p. 015006, 10.1088/2043-6254/aaa6f0. Accessed 1 Apr. 2022.
Tanya
The Standard Model – the theory that classifies all known elementary particles and the forces associated with them – has demonstrated its success by providing accurate predictions. However, most scientists believe that it is incomplete, as it leaves some phenomena unexplained; in particular, the muon appears to be an anomaly that physicists find exciting. [1]

Muons are particles similar to electrons but with a much greater mass; they therefore have more energy and should interact more strongly with other heavy particles. They occur in nature when cosmic rays strike Earth’s atmosphere, and they are relatively common: a person has around 30 muons passing through them at any moment. However, unlike electrons, muons do not have an infinite lifetime; they live for about one 500,000th of a second. Like electrons, muons behave as if they have a tiny internal magnet; in a strong magnetic field, the direction of the muon’s magnet wobbles, a motion known as precession. The strength of this internal magnet determines the rate at which the muon precesses in a magnetic field, and this is described by a number known as the g-factor.

In April 2021 the results of the Muon g-2 experiment were released, confirming previous findings that the muon’s properties differ slightly from predictions. The Fermilab g-2 experiment reuses the 50-foot superconducting magnetic storage ring used in the 2001 experiment. The experiment works by sending a beam of muons into the storage ring, where they circulate at nearly the speed of light thousands of times. The muons interact with subatomic particles popping in and out of existence. These interactions affect the value of the g-factor, causing the muon’s precession to change its speed slightly. This is predicted by the Standard Model; however, if the muons were to interact with particles or forces not accounted for by the Standard Model, the g-factor would change even more. Renee Fatemi, the simulations manager for the Muon g-2 experiment, believes “This is strong evidence that the muon is sensitive to something that is not in our best theory.” [2]

Photo: Reidar Hahn, Fermilab – the 50-foot-diameter superconducting magnetic storage ring at Fermilab

A predecessor experiment in 2001 hinted that the muon’s behaviour disagreed with the Standard Model, and the new results from the g-2 experiment at Fermilab agree with this. A new experiment at Fermilab is trying to measure these results to a much higher level of precision, to find out whether they can be classified as a discovery rather than put down to chance.
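For reference (the article quotes only the name “g-2”), the quantity the experiment actually reports is the muon’s magnetic anomaly,

a_\mu = \frac{g - 2}{2},

and in the storage ring this shows up through the anomalous precession frequency, approximately \omega_a \approx a_\mu \, eB/m_\mu, where B is the magnetic field, e the muon’s charge and m_\mu its mass; a larger-than-predicted a_\mu is what would signal new particles or forces.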
Muons seem to be more magnetic than the Standard Model predicts, and this is thought to be a sign of the possible existence of undiscovered particles or forces. ‘Muons act as a window into the subatomic world and could be interacting with yet undiscovered particles or forces.’ These new forces or particles may be able to explain one of the unknowns of physics, such as dark matter, for which there must be a particle that does not interact with light. Technology was revolutionised in the 1900s once scientists were able to understand and manipulate the electron; the muon is another particle that could provide useful results once we learn to manipulate it.
The two results from the Fermilab and Brookhaven experiments show strong evidence that muons diverge from the Standard Model prediction. Image: Ryan Postel, Fermilab/Muon g-2 collaboration
References:
[1] https://www.newscientist.com/article/mg25333731-000-alex-keshavarzi-interview-how-muons-could-reveal-exotic-newphysics/
[2] https://news.fnal.gov/2021/04/first-results-from-fermilabs-muon-g-2-experiment-strengthen-evidence-of-new-physics/
https://www.nature.com/articles/d41586-021-010338#:~:text=According%20to%20the%20standard%20model,rates%20that%20are%20nearly%20equal.
https://physicsworld.com/a/the-muons-theory-defying-magnetism-is-confirmed-by-new-experiment/
https://science.howstuffworks.com/muon.htm
Ishani
Our solar system is approximately 4.56 billion years old and, as we all know, contains 8 planets orbiting the Sun, along with several dwarf planets and smaller bodies. Humans have been speculating about the formation of the solar system for thousands of years, going back to the creation stories of ancient civilisations. Scientists now mostly accept the standard model of planetary formation, which was first proposed by Otto Schmidt in 1948 and has since been researched and refined.

The most widely accepted theory of planetary formation is known as ‘core accretion’, and it states that planets form in the protoplanetary disc of gas and dust around a newly formed star[1]. Within this disc, material collides and sticks together until planetesimals are formed. Collisions between these planetesimals then go on to form planetary embryos, which in turn form planets like the ones we can see clearly in our sky. Protoplanetary discs have been imaged by astronomers, and in 2018 two teams published the discovery of forming planets in the protoplanetary disc around the star HD 163296 in the constellation Sagittarius. The teams analysed the movement of carbon monoxide in the disc, using data from the Atacama Large Millimetre/submillimetre Array, to determine that large objects (planets) must be disturbing the otherwise orderly and predictable motion of the gas[2]. This is key evidence for the core accretion theory, as it shows that planets do form in the discs around newly formed stars (the star HD 163296 is only approximately 4 million years old).

You may be wondering how, when minuscule objects (smaller than the thickness of a human hair) collide, they stick together, because their gravity would not be strong enough to overcome the natural rebound of the objects. This is one problem for the core accretion theory, because it limits planet formation to objects that manage to overcome this and grow to centimetre size, at which point it becomes easier for them to grow further. However, experiments have shown that when two particles of sufficiently different size collide, the smaller one will rebound, leaving up to half its mass behind. This means that it is possible for small grains to collide and grow enough in size to form larger objects.[3] Furthermore, in June 2010 the Japanese spacecraft Hayabusa brought home images and samples from the asteroid Itokawa. The images showed a peanut-shaped object of rubble and pebbles bound together by its own gravity. This is an example of what a small planetesimal may have looked like before it gained enough mass to coalesce into a dense ball.

The two main problems with the core accretion theory are the timescales of planetary formation and the migration of planets. According to models of the theory, it would take longer for the planets to form than for the protoplanetary disc to be ‘consumed’ by the star. This is, obviously, a major problem for the theory. Type 1 migration of early planets is when, due to gravitational interactions, protoplanets spiral into the star in as little as 100,000 years. Astronomers Paul Cresswell and Richard Nelson developed a computer model to test
whether gravitational interactions between multiple protoplanets would slow Type 1 migration sufficiently to allow them time to grow into gas giants. Unfortunately, the model showed that it would not work like that.

The problem of migration has led Alan Boss, a planet formation expert at the Carnegie Institution of Washington, to propose a new theory known as ‘disc instability’. According to disc instability, the protoplanetary disc is not smooth, and areas of higher density collapse in on themselves to form planets. In this way a Jupiter-sized clump could form in as little as 1,000 years, solving the timescale problem of core accretion. In 2019 a planet with 0.46 times the mass of Jupiter, GJ 3512b, was found orbiting a red dwarf star. For core accretion to work, the mass of the protoplanetary disc has to be proportional to the mass of the star, so in this case the planet is too big for its star, yet it exists. Here, disc instability may have been at work: it would mean the disc formed spiral arms which then collapsed to form the planets, which would allow for such a massive planet as it does not rely on collisions.[4]

Overall, the core accretion theory is currently the most widely accepted theory of planetary formation, although it poses problems. Alan Boss is convinced that disc instability is at work in at least some systems which core accretion cannot account for, such as the GJ 3512 system. Some astronomers now believe that both theories operate, but in different systems, whilst others are working on hybrid theories of the two. The discovery of more exoplanets and the imaging of more forming solar systems will certainly drive the development of these theories.
ALMA image of the protoplanetary disk surrounding the young star HD 163296 as seen in dust. ALMA: ESO/NAOJ/NRAO; A. ISELLA; B. SAXTON NRAO/AUI/NSF.
Asteroid Itokawa. JAXA
References
[1] Than, K. (2006, March 28). Death Spiral: Why Theorists Can’t Make Solar Systems. Retrieved from Space.com: https://www.space.com/2206-death-spiral-theorists-cant-solar-systems.html
[2] Kaufmann, M. (2018, June 22). Planets Still Forming Detected in a Protoplanetary Disk. Retrieved from Astrobiology at NASA: https://astrobiology.nasa.gov/news/planets-still-forming-detected-in-a-protoplanetary-disk/
[3] Tasker, E. (2017). The Record Breaking Building Project. In E. Tasker, The Planet Factory: Exoplanets and the Search for a Second Earth (p. 44). London: Bloomsbury.
[4] Anderson, P. S. (2019, October 8). ‘Impossible’ exoplanet and an alternate planet-formation theory. Retrieved from EarthSky: https://earthsky.org/space/theories-planet-formation-gj-3512b-favors-disk-instability/
Anantaa
There are 3 types of people in this world: those who understand quantum electrodynamics, those who do not understand quantum electrodynamics, and those who both simultaneously do and do not understand quantum electrodynamics… Quantum electrodynamics has been described as one of the most accurate theories in all of physics and is incredibly important to many other developing theories. Though its name gives the impression of an incredibly complicated concept, the theoretical side of it is fairly simple (or so I hope). The basic idea behind quantum electrodynamics (QED for short) is looking at electricity through the perspective of small particles. It has been contributed to by many famous physicists, such as Paul Dirac and Richard Feynman (more on that later), and there is an abundance of difficult maths associated with it. As I am no Einstein, I can only explain the theoretical side, but I hope that it will be just as interesting.
What happens usually?
QED deals with the interaction of particles with what we have come to know as an electric force field. However, I cannot begin to delve into the quantum realm without first dealing with what happens on a macro scale. You may recall some of this from school physics lessons: like charges repel and opposite charges attract. In physics, the force behind these interactions is electromagnetism, which generates an electromagnetic force field, causing two objects to accelerate either towards or away from each other. So, if you had two positively charged rubber balls, for instance, an electromagnetic force field would be created between them, forcing them to accelerate in opposite directions. At a quantum level, unfortunately, the matter becomes more complicated.
Coming up with the theory
The path towards QED began with Einstein’s theory of special relativity, which preceded general relativity. This involves the equation that has made Albert Einstein quite so famous: E = mc². This stands for ‘energy = mass x the speed of light squared’ and essentially tells us how speed affects mass, space and time. As an object approaches the speed of light, its mass becomes infinite and therefore the energy required to move it becomes infinite, which is impossible; nothing that has mass can reach the speed of light. Furthermore, the equation tells us that energy and mass are interchangeable, different forms of the same thing, with the ability to turn into each other. These concepts were taken on by Paul Dirac in 1928, when he came up with an equation that merged quantum mechanics with special relativity, resulting in a formula that describes both the motion and spin of electrons. Without Dirac’s work, Richard Feynman, Sin-Itiro Tomonaga and Julian Schwinger would not have theorised QED. Though these three were the ones who won a Nobel Prize for their work in this field (no pun intended), it is thought that many others contributed greatly to it.
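For reference, the relations mentioned here can be written out explicitly (the article itself only quotes E = mc²). For a particle at rest the energy is E = mc^2; more generally, special relativity gives

E^2 = (pc)^2 + (mc^2)^2,

where p is the particle’s momentum, and Dirac’s 1928 equation, which combines this with quantum mechanics for the electron, is usually written (in natural units, with \psi the electron’s wave function and \gamma^\mu the Dirac matrices) as

(i\gamma^\mu \partial_\mu - m)\,\psi = 0.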
Quantum Electrodynamics
I have come thus far in the article without yet taking you through the theory, but I am afraid I must take one last tangent before introducing the wonders of QED. A central part of it is virtual photons, which I predict may be an alien concept to all of you. Photons are described as packets of light. You may be familiar with the concept that light is simply electromagnetic waves, which is true on some occasions. However, light possesses a quality named particle-wave duality, a phenomenon reflected in much of modern-day physics. It is rather confusing that light is constantly switching between behaving like a particle and a wave, but for our purposes, we must think of it as photons, packets of light, not dissimilar to particles.

[Image: possible appearance of a photon]

Now, you will note that I specifically said virtual photons. Virtual particles are another strange phenomenon within quantum physics. They are particles that can pop in and out of existence from nothing, due to the probabilities associated with quantum particles. So a virtual photon is a packet of light, or energy, that can pop in and out of existence with no clear origin. I regret to inform you that it does indeed get stranger. You will remember that right at the beginning, I said the orchestrator of electromagnetic interactions was the force electromagnetism. Now, QED describes this force as mediated by the exchange of virtual photons. Electromagnetism isn’t a force, simply the name for the effects of photon exchange. I told you things got stranger (not to be confused with the TV show… which I highly recommend, by the way).

Imagine you had two electrons about to collide with each other. On a macro scale the negative charges repel each other, and a force field forces them apart. However, when we think of it in quantum terms, the electrons are exchanging a virtual photon. Since energy and mass are interchangeable, part of the mass of one electron becomes energy (a virtual photon), which is then transferred to the other electron. Both the emission and absorption of the virtual photon create acceleration in the two electrons, so they are forced apart. So that’s it: that’s the central concept of quantum electrodynamics.
Feynman diagrams
There are many tricky maths equations that were used to express the theory. However, the simplest representation of them was Richard Feynman’s diagrammatic portrayal. These clearly depict two electrons exchanging a massless, chargeless photon and recoiling.

[Image: the simplest Feynman diagram]

Feynman diagrams are not necessarily limited to the exchange of only one photon, but generally, the more meeting points there are between a photon and an electron, and therefore the more photons exchanged, the less common such an exchange becomes in quantum physics. Each vertex on a Feynman diagram reduces the likelihood of the diagram occurring by roughly a factor of 1/100. This is crucial to remember when interpreting these diagrams. Additionally, there are many possible ways in which photon exchange can become more complicated than what Feynman shows. For example, a matter-antimatter particle pair can be created from the photon as its energy is converted to mass. Alternatively, a particle could exchange a virtual photon with itself, or emit one in the form of coloured light. Feynman’s diagrams do not describe what happens during every photon exchange, but they are very useful in explaining the most common ones.

There are a lot of different concepts to understand before we can truly delve into QED, but I hope that once all the puzzle pieces were revealed, it turned out to be far less complicated than its name suggests. It is the first of a series of theories which describe how forces can be explained on a quantum level and is incredibly useful for accurately analysing some quantum phenomena. Though theories such as this involve other-worldly factors, such as virtual photons and particle-wave duality, much of the quantum realm will forever remain an alien concept to the human brain.
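The rough “1/100 per vertex” figure quoted above is essentially the fine-structure constant, the dimensionless number that sets the strength of the electromagnetic coupling in QED (this connection is my addition, not spelled out in the article):

\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137},

and each vertex in a diagram contributes a factor of order \alpha to the probability of the process, which is why diagrams with more vertices are increasingly rare.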
References: Feynman, R.P. (2001). Six easy pieces and Six not-so-easy pieces : essentials of physics explained by its most brilliant teacher. Cambridge, Mass.: Perseus Book Group. Howell, E. (2021). Einstein’s Theory of Special Relativity. [online] Space.com. Available at: https://www.space.com/36273theory-special-relativity.html [Accessed 14 Feb. 2022]. www.youtube.com. (2016). Quantum electrodynamics: theory. [online] Available at: https://www.youtube.com/watch?v=hHTWBc14-mk [Accessed 15 Feb. 2022]. www.youtube.com. (2017). Quantum Electrodynamics (QED). [online] Available at: https://www.youtube.com/watch?v=crfY2vzVMbI [Accessed 14 Feb. 2022].
Kat
In a “classical” reality (i.e. non-quantum), actions and experiences can be predicted or calculated. For example, if a ball is thrown up, the way in which it will come down can be predicted. The quantum realm, however, is based upon probabilities, so actions are less predictable. [1]

One of the most famous explanations of quantum theory is the Copenhagen interpretation. One of the oldest interpretations of quantum mechanics (devised in the 1930s), it was first introduced by the physicists Niels Bohr and Werner Heisenberg. It holds that a measurement of, say, an electron’s location within an atom must actually be taken for that location to be meaningful; before a measurement, the object is in a state of “superposition”, existing in multiple quantum states at the same time. [2] [3]

An example of quantum superposition involves electron spin. Spin is the intrinsic angular momentum of an electron, and is influenced by magnetic fields: it can be in one of two states, called spin up and spin down. Before an electron is measured, it could be in either state, and we cannot be certain which one until it has been measured. [4]

On the other hand, a flaw in the Copenhagen interpretation is that if physicists were to use real-world techniques (classical devices) to measure these electrons, the electron would gain physical characteristics and would therefore have to be part of reality: it would not be quantum anymore. A criticism of this interpretation was developed by the Austrian physicist Erwin Schrödinger, with the infamous “Schrödinger’s Cat” thought experiment. It is based upon the idea of a cat trapped inside a box along with a vial of poison and a radioactive atom. As the box follows the rules of quantum theory, at any given moment the radioactive atom can either be in a state of decay or remain undecayed. As radioactive decay is unpredictable, this state of decay cannot be forecast. When the atom does decay, it breaks the hypothetical vial of poison in this experiment, so the cat is killed. [3]
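In the usual notation (not used in the article, but standard), the spin example above would be written as a superposition state

|\psi\rangle = \alpha\,|\uparrow\rangle + \beta\,|\downarrow\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,

where |\alpha|^2 and |\beta|^2 are the probabilities of finding the electron spin up or spin down once a measurement is made.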
An image of the set up of the Schrödinger’s Cat experiment: https://www.askamathematician.com/2 013/04/q-why-is-schrodingers-cat-bothdead-and-alive-is-this-not-a-paradox/
However, if the Copenhagen interpretation is applied to this experiment, a flaw is highlighted. Since no measurements have been taken yet, the atom must be in a superposition of being decayed and undecayed. However, this rule must be applied to all entities in the box, including the cat: the cat must also be in a superposition of being dead and alive, which is illogical, as these states cannot exist concurrently. [3] This thought experiment sparked interest in quantum theory and made it gain much more attention. More interpretations arose and became popular, such as the Many-Worlds Interpretation (MWI), proposed by the American physicist Hugh Everett III in 1957. The MWI suggests that there are multiple worlds which exist in parallel with each other, branching off from each other by the nanosecond but never linking or interacting with each other. [5]
Hugh Everett III, the physicist behind the MWI: https://space.mit.edu/home/tegmark/everett/
The MWI has gained supporters because of its treatment of the wave function, the mathematical expression that encodes information about a particle in all of its possible states and locations. The MWI states that there is only one wave function for the whole Universe, and that when something happens in our world, the other possible outcomes of that event do not simply vanish: they are played out in newly created worlds. In some world or another, every possibility therefore becomes a reality. This is arguably a simpler idea to grasp than earlier interpretations such as the Copenhagen interpretation; its directness and clarity, together with the logic behind it, have made it rather popular. [5]
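A rough way to picture this (an illustration added here, not a formal statement of the MWI) is that a measurement never collapses the superposition; instead the observer joins it, and each term of the resulting state is read as a separate branch, or “world”:

$$\alpha\,|\uparrow\rangle + \beta\,|\downarrow\rangle \;\longrightarrow\; \alpha\,|\uparrow\rangle\,|\text{observer sees up}\rangle + \beta\,|\downarrow\rangle\,|\text{observer sees down}\rangle$$

The universal wave function simply keeps evolving; no outcome is ever discarded.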
A simple diagram of how the MWI works: https://www.unrevealedfiles.com/what-is-many-worlds-interpretation/
A diagram of the setup of Wigner’s Friend Paradox: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1033.1400&rep=rep1&type=pdf
Further thought experiments built on Schrödinger’s Cat. In 1961 Eugene Wigner, a physicist and mathematician, added an extra person and an extra room to the setup. [6] [7] A friend stands inside the room watching the Schrödinger’s Cat experiment, while a second person waits outside. Until the outside person enters, they cannot know what to expect: to them, both the state of the cat and the reaction of the friend are in superposition. To the friend inside, however, the cat is definitely either dead or alive. There are therefore two different realities for two different people, which contradicts the Copenhagen interpretation and is something quantum theory must explain. [6] This was taken further in 2016, when two theorists, Daniela Frauchiger and Renato Renner of the Swiss Federal Institute of Technology, extended the experiment again. [6] They doubled the setup: two rooms, each with one person inside and one person outside. One room contains the Schrödinger’s Cat experiment, while the other has controls that select the property of a quantum particle. In Room 1, the room with the controls, the person inside flips a biased coin and uses the outcome to set the property of the quantum particle. This setting is then communicated to Room 2, which contains the Schrödinger’s Cat experiment, and it determines whether the cat ends up alive or dead – something the person inside Room 2 sees directly in front of them. Once all this has happened, the people outside each room are allowed to go in and see what is inside their own room (but not the other one). [6]
A diagram of the setup of the Frauchiger-Renner paradox, with Alice and Bob as the people outside the rooms and their friends inside: https://www.newscientist.com/article/mg24132220-100-schrodingers-kittens-new-thought-experiment-breaks-quantum-theory/
The people who were outside each room now know for certain what happened inside their own room, because they have been in, but using logic they can also make an educated guess about what happened in the other room (for instance, if the cat died, the settings must have reflected that). Their guess is not completely random – there is reasoning behind it – but it is not certain either. [6] The argument does rest on several assumptions, which become delicate once several different people are involved in the same experiment: measurements must be consistent, with everyone agreeing on their outcomes; the participants must not lie about what they have seen; and quantum theory is assumed to apply everywhere and to everything, which is not necessarily true. Under these assumptions, in a small proportion of runs the observers walking into the rooms end up with contradictory accounts of reality, which again poses a problem that quantum theory needs to explain. [6] On the other hand, technology has not yet advanced far enough for this quantum thought experiment to be set up for real. To simulate it, Renner and his colleagues are working on programming a classical computer to behave like a quantum computer, so that the experiment can effectively be run inside the machine and the reasoning put to the test. [6] Overall, there is a great deal of disagreement over which interpretations and notions to follow when faced with the complex problems and mysteries of quantum theory. Which ones do you believe are right?
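Because the real experiment cannot yet be built, it has to be simulated. The toy program below is a rough sketch written for this article, not taken from Renner’s work: it captures only the classical chain described above (biased coin, particle setting, cat, and an outside observer reasoning backwards from what they see), and the bias of the coin and the rule linking coin to cat are illustrative choices. It deliberately ignores the quantum superpositions that produce the actual paradox.

import random

def run_trial(p_heads=1/3):
    """One purely classical caricature of the two-room setup."""
    # Room 1: flip the biased coin and set the quantum particle accordingly.
    coin = "heads" if random.random() < p_heads else "tails"

    # Room 2: the particle setting decides the cat's fate. Here "heads" fixes
    # the outcome, while "tails" leaves it 50/50 (a stand-in for a particle
    # prepared in a superposition).
    if coin == "heads":
        cat = "dead"
    else:
        cat = "dead" if random.random() < 0.5 else "alive"

    # The observer outside Room 2 sees only the cat, then reasons backwards:
    # a living cat can only have come from a "tails" setting.
    inferred_coin = "tails" if cat == "alive" else "unsure"

    return coin, cat, inferred_coin

# Quick check: whenever the backwards inference is made, it is never wrong.
trials = [run_trial() for _ in range(100_000)]
alive_trials = [t for t in trials if t[1] == "alive"]
print("inference made on", len(alive_trials), "of", len(trials), "runs")
print("inference correct every time:", all(t[0] == "tails" for t in alive_trials))

Running the sketch shows the flavour of the reasoning: the observer can sometimes deduce the other room’s result with certainty, and otherwise remains unsure, whereas the genuine quantum version leads the observers into outright contradiction.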
References
[1] https://www.newscientist.com/article/mg20927960-200-quantum-reality-the-many-meanings-of-life/
[2] https://science.howstuffworks.com/innovation/science-questions/quantum-suicide4.htm
[3] https://www.newscientist.com/definition/schrodingers-cat/
[4] https://jqi.umd.edu/glossary/quantum-superposition
[5] https://www.nature.com/articles/d41586-019-02602-8
[6] https://www.newscientist.com/article/mg24132220-100-schrodingers-kittens-new-thought-experiment-breaks-quantum-theory/
[7] https://www.technologyreview.com/2019/03/12/136684/a-quantum-experiment-suggests-theres-no-such-thing-as-objective-reality/
Trina
Mathematics of Curved Space
Imagine you are an acrobat balancing on a tightrope. Alongside you there is a flea, also moving on the rope. You can move only forwards and backwards along the rope, but because of its tiny size the flea can move forwards and backwards as well as around the rope from side to side. If the flea keeps moving around the rope in the same direction, it will eventually come back to its starting point. The flea can therefore move in two dimensions, while the acrobat, constrained by size, is effectively restricted to one. As human beings we can only visualise three dimensions, even though there may be more dimensions that cannot be perceived by the human eye.
The Flea and the Acrobat5
Riemannian geometry is a branch of geometry designed to describe curved surfaces and curved spaces, including dimensions that the creatures moving within them cannot perceive. Geometry was once confined to the flat, linear space of Euclid, until Bernhard Riemann proposed a newer and more abstract concept of space that could describe curved surfaces and spaces of any number of dimensions6.
5. TV series Stranger Things – The Flea and the Acrobat
6. Riemannian geometry; https://www.britannica.com/science/Riemannian-geometry
Many years later, Einstein was able to use this special branch of geometry to go beyond Newtonian gravity, and it ultimately underpins his general theory of relativity. On everyday scales the universe can be described as a three-dimensional Euclidean space; however, close to black holes and dense objects such as neutron stars, the surrounding space becomes curved and bent. In curved space the shortest possible path between two points is called a minimal geodesic, and a pair of points in the Universe can be joined by more than one such path. The Hubble Space Telescope has observed several distinct paths taken by light between the same pair of points, an effect referred to as gravitational lensing7. To estimate how strongly a region of space is curved, predictions from the theorems of Riemannian geometry are compared with measurements taken by space-based instruments, and many physicists use such theorems as the starting point for their own theories. The curvature of space is related to the gravitational field of a star through a partial differential equation known as Einstein’s field equation8. Thus, even without immediate practical applications, these differential-geometry theorems have paved the way for physicists to postulate theories, and with the advancement of science and technology some of them have eventually been confirmed experimentally.
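For reference, this field equation can be written in its standard form, relating the curvature of spacetime on the left to the matter and energy content on the right:

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}$$

Here $G_{\mu\nu}$ encodes the curvature, $g_{\mu\nu}$ is the metric describing distances in spacetime, $\Lambda$ is the cosmological constant and $T_{\mu\nu}$ describes the matter and energy present.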
Gravitational Lensing in Action9
7. Gravitational lensing, ESA Hubble Telescope; https://esahubble.org/images/heic1106c/
8. Einstein’s Field Equations; https://encyclopediaofmath.org/index.php?title=Einstein_equations
9. Gravitational lensing; JPL/NASA; https://www.jpl.nasa.gov/images/pia23641-gravitational-lensing-graphic
String Theory and Hidden Dimensions
String theory and related theories require more than three dimensions, the extra ones being so small that we cannot see or otherwise perceive them. String theory, known in its various forms as superstring theory and M-theory, is the idea that the ordinary point particles of a quantum field, such as electrons, are replaced by one-dimensional objects called strings. These strings, which can be either open or closed, have a characteristic tension and therefore a spectrum of possible vibrations. Each level of vibration corresponds to a different particle, one of them being the graviton10, a hypothetical particle responsible for the gravitational interaction. String theory can therefore be described as a theory of quantum gravity – but what does quantum gravity, another instance of theoretics in physics, actually mean?
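As a loose classical analogy (added here for illustration, not a result from string theory itself), an ordinary stretched string of length $L$, tension $T$ and mass per unit length $\mu$ can only vibrate at a discrete set of frequencies,

$$f_{n} = \frac{n}{2L}\sqrt{\frac{T}{\mu}}, \qquad n = 1, 2, 3, \dots$$

In a similar spirit, each allowed vibrational state of a fundamental string is identified with a different particle.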
Gravitons in Action – Long Distance Force
Quantum Gravity
Quantum gravity is a field of theoretical physics that aims to combine the principles of general relativity with those of quantum theory, describing how gravity works according to the principles of quantum mechanics and capturing the microstructure of spacetime at the Planck scale. Three of the four fundamental forces are already described within the framework of quantum physics; the fourth, gravity, is instead described by Einstein’s general theory of relativity. However, if general relativity alone is used to describe the gravitational field of a black hole, the spacetime curvature and other physical quantities diverge at the centre of the black hole. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum-mechanical particle that carries the gravitational force, so string theory provides a framework for building models of quantum gravity. As such, string theory is a candidate for a theory of everything – a single self-contained mathematical model describing all the fundamental forces and forms of matter.
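The Planck scale mentioned above is set by combining the constants of quantum mechanics and gravity. The corresponding Planck length,

$$\ell_{P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \text{m},$$

is roughly the scale at which a quantum description of spacetime itself is expected to become unavoidable.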
10. Graviton; https://www.bbc.co.uk/programmes/p003k9ks
References:
1. Sci-Fi TV series Stranger Things – The Flea and the Acrobat
2. Riemannian geometry; https://www.britannica.com/science/Riemannian-geometry
3. Gravitational lensing, ESA Hubble Telescope; https://esahubble.org/images/heic1106c/
4. Einstein’s Field Equations; https://encyclopediaofmath.org/index.php?title=Einstein_equations
5. Gravitational lensing; JPL/NASA; https://www.jpl.nasa.gov/images/pia23641-gravitational-lensing-graphic
6. Graviton; https://www.bbc.co.uk/programmes/p003k9ks
7. Planck Scale; Symmetry Magazine; https://www.symmetrymagazine.org/article/the-planck-scale
8. Quantum gravity; https://www.bbc.co.uk/programmes/p00547c4
SCIENTIA: ED.3 THEORETICS