The Franklin - Notting Hill & Ealing High School's Science Dept Newsletter - Issue 3 (Spring 2021)


How and why do animals experience similar mental illnesses to humans? By Nirupama Krishnakumaran, 9T

The topic of mental health, and the variety of illnesses we can develop, has recently been spotlighted on many social media platforms and news outlets. The idea that humankind is not alone in its suffering has been talked about a great deal. But does this idea only apply to humans? The short answer is no; animals can suffer from mental disorders as well. With this being said, can animals become our saviours in the complex area of psychopathology? Animal psychopathology is the study of mental or behavioural disorders in animals. Multiple mental disorders apparent in animals are also prevalent in humans, and these can be split into two categories: eating disorders and behavioural disorders. Eating disorders, in most cases, seem to have spared wild animals. However, the same cannot be said for domestic animals.

Activity anorexia is a disorder specifically affecting rats. It occurs when they exercise excessively while simultaneously lowering their food intake. The condition is similar to human conditions like anorexia nervosa or hypergymnasia. According to the National Eating Disorders Association, “anorexia nervosa is an eating disorder characterised by weight loss; difficulties maintaining an appropriate body weight for height, age, and stature; and, in many individuals, distorted body image”. People with anorexia generally restrict the number of calories and types of foods they eat, with a number also compulsively exercising. Similarly, anorexia athletica (sports anorexia), or hypergymnasia, is characterised by an obsession with exercise to lose weight or prevent weight gain. Individuals will exercise in extreme conditions for excessively long hours, to the point where it becomes a compulsive obligation without enjoyment or benefit.

Disordered eating is also apparent in rats. When given free access to food and an exercise wheel, they develop a balanced routine between exercise and food intake. They also seem to suffer no significant change when given unrestricted food access and restricted wheel access, or restricted access to both food and exercise, in which case they adjust to the circumstances. However, rats begin to suffer when their food intake is restricted but their wheel access is not. They start to exercise more and eat less, resulting in extreme weight loss and eventually death. A study published in 2003 by Johannes Hebebrand suggests that the running impedes adjustment to the feeding schedule; in this case, activity anorexia could be avoided by training the rats to adjust to the feeding schedule before giving them unlimited access to a running wheel. The semi-starvation effect has also been studied in primates, with results revealing that male Rhesus macaques become hyperactive in response to long-term chronic food restriction.

Thin sow syndrome is another eating disorder, present in stalled sows (in farming, sows kept in stalls in order to fatten them), and is similar to activity anorexia. After early pregnancy some sows become overly active, eating less and wasting away, often resulting in death. During this period they suffer from emaciation (extreme thinness and weakness), hypothermia, a depraved appetite, restlessness and hyperactivity. The cause of the syndrome is thought to be mainly social and environmental stress. Stress in this particular animal is understood to be the consequence of restraint in intensive production units, with the most restrained sows being those that are pregnant or lactating. They have little room to move around, as they are kept in bolted gestation crates or tethered for 16 weeks (the length of the pregnancy), preventing them from carrying out natural and social behaviours. Restraint is not the only cause of the syndrome, as too much freedom and movement can also cause stress for adult sows. This happens most commonly after weaning, when violent behaviour is displayed as the sows are placed into groups, with the superior sow emerging to eat ravenously. According to the book “Animal Models - Disorders of Eating Behaviour and Body Composition”, the superior sow is also suspected of bullying inferior sows, members of the group who regularly and actively avoid competitive feeding. These sows tend to have a poor appetite but, surprisingly, show pica (a disorder in which an animal or human eats objects not considered food), polydipsia (excessive water intake) and anaemia.

Obsessive compulsive disorder (OCD) is not only present in humans; like the majority of these disorders, it is also present in animals, where it is referred to as “stereotypy” or “stereotypical behaviour”. It is defined as “a specific, unnecessary action (or series of actions) repeated more often than would normally be expected”. The question of whether animals can obsess in the same way as humans has not yet been answered, and the reason or motivation for such compulsive acts in animals other than humans is unknown. Some behaviours that can be considered signs of OCD are often beneficial and part of the animal’s life; for example, grooming in rats, which is how rats keep themselves clean (domestic rats may also be bathed by their owners). Adverse obsessive-compulsive behaviours, however, are thought to be normal behaviours that have become extreme. Cat owners often suspect their cats of having OCD, and it seems their suspicions can be true. As stated by PetMD, signs that a cat has OCD include excessive grooming, pacing and meowing, which indicate that the cat might be bored, anxious or in pain.

Addiction can also be seen in animals, though with different substances to drugs. In particular, sugar is considered incredibly addictive, as eating sugary foods causes the brain to release opioids (natural chemicals) and dopamine in the limbic system (a set of structures in the brain that deal with emotions and memory). The brain recognises the intense pleasure derived from the dopamine and opioids released, and it learns to crave more sugar. Dependence is therefore created through these natural rewards, with the sugars stimulating opioids and dopamine to be released into the synapses of the mesolimbic system, otherwise known as the reward pathway. This is particularly apparent in tests on rats, in which the hippocampus, the insula and the caudate are activated when they crave sugar. These are the same areas of the brain that activate when drug addicts crave drugs. Once the body becomes dependent on the sugar, and the nervous system goes through a change, somatic signs of withdrawal start to appear when sugar is not ingested: chattering teeth, forepaw tremors and head shakes.

In the 1940s, studies were conducted on the consequences of overcrowding, in which pregnant Norway rats were placed in a room with more than enough food and water. Their rate of population growth was observed: it initially increased but abruptly stopped once the population reached a certain number. The overcrowding produced stress and psychopathologies and, although they were provided with food and water, the rats stopped eating and reproducing. Overcrowding seems to have a similar effect on dense populations of beetles. Female beetles destroy their eggs and turn cannibalistic, while male beetles lose interest in the females, so even when there is plenty of food and water there is no population growth. The biggest factor in this occurrence is environmental stress.

Depression is sadly another mental disorder that has not spared animals. During the 1960s, the American psychologist Martin Seligman carried out an experiment on dogs. He and his colleagues had pioneered the study of depression and learned helplessness. The dogs were separated into three groups, one of which had no control over when they were shocked. The dogs were then tested to see if they could escape a shock by jumping over a partition, and all the groups other than the no-control group did so. The dogs with no control believed that their efforts would not affect the result: whatever they did, they would still be shocked. One theory that emerged was that the animals’ behaviour was a physical result of the shocks, a stressor so extreme that it depleted a neurochemical needed by the animals for movement; this was tested by immobilising dogs with the paralysing drug curare during the initial shocks. Similar to humans, when the animals became accustomed to believing that nothing they could do would change their circumstances, they developed depression.

PTSD (post-traumatic stress disorder) has affected many animals. For example, some chimpanzees who lost their mothers in zoos were noted to starve themselves while yearning for their guardian, which resulted in death. One of them lay down in the spot where their mother had died and, due to the trauma they experienced, died there as well. Captive orcas are well known for their tendency to kill humans, and PTSD is said to be the cause. They suffer from the disorder as a result of severe training methods intended to break them of their normal habits and, after these inhumane practices, they are left in tiny, confined spaces. They have far fewer companions around them than they would in the wild, and they have lost the freedom they once had. These extreme limitations consequently affect them, with the stress and depression turning to aggression and other concerning habits. Anxiety can affect animals such as captive chimps and birds. Lab monkeys have more than occasionally shown strange and compulsive behaviours, and zoo monkeys show such behaviour too. Research has connected the anxiety that comes with living in captivity to mental illness in chimpanzees. Common signs of anxiety in chimps are hitting their own bodies, constant pacing, rocking back and forth, and drinking urine. Stress, overexcitement and boredom can also cause captive birds to fall prey to the demons of anxiety and depression. Signs in birds include over-preening and feather plucking, similar to trichotillomania in humans. According to the NHS website, “trichotillomania, also known as ‘trich’, is when someone cannot resist the urge to pull out their hair.”

Those were just a few of the mental disorders animals can have. The causes of disorders like PTSD, depression and anxiety are said to be loss of family (or companions) and freedom, stress, abuse or trauma, which are extremely similar to the causes in humans. The majority of animals that suffer from these illnesses are in captivity. The loss of a family member or a companion significantly affects these animals, while the loss of freedom can take away who they are. Without the ability to make decisions for themselves and live like others, they can feel trapped. Stress, abuse and trauma are consequences that come with captivity.

Due to the long-standing communication barrier, it is hard for researchers to monitor animals’ behaviour, making it difficult to find out more about behavioural disorders. Discovering more about certain disorders such as OCD can open a gateway to cures for both animals and humans. Many conditions animals have, humans have too, so inquiring into both can rapidly progress our understanding of behavioural disorders.

In terms of scientific research (setting aside the ethics), animals are extremely useful test subjects. Learning about their brain patterns, why they get these syndromes, and whether their conditions are identical to the ones humans are susceptible to, like depression or anxiety, can open an infinite number of possibilities for treatments. They can help us understand why we are so susceptible to these ordeals. But it is important to remember that we should be helping these animals as well. As much as these experiments can get us closer to the answers to many questions, they may be harming the animals, as putting them through distressing events can permanently traumatise them. I hope to see successful treatments in the future for as many species as possible, not just ours. The anthropocentric attitudes we have now must be dropped, and it must be clear: we are not alone in our suffering.


AI and the Future it Holds by Nanthana Lathan, 9G

Artificial intelligence, commonly known as AI, is intelligence demonstrated by machines. It differs from human intelligence in that machines do not feel emotions or have a subconscious. AI has many purposes, such as improving decision-making in healthcare, where it can be more accurate because it is able to recognise patterns and make predictive analytics. This means AI can use its database to identify whether a person is in danger and can suggest treatments that take in the individual’s needs. There are many examples of AI, such as web search, email communication and online shopping. It is also used in parts of cars and in public administration and services. Many people do not realise that they are interacting with AI on a daily basis, since AI is usually pictured as robots. As a result, many people have misconceptions about the idea and the future it holds.

AI as a field began with Herbert Simon and Allen Newell, who in December 1955 developed the Logic Theorist, known to be the first program deliberately engineered to perform automated reasoning. The Logic Theorist was a computer program that could prove theorems in symbolic logic from Whitehead and Russell’s Principia Mathematica, and it was one of the first working programs to simulate some aspect of a human’s ability to solve complex problems.

The development of AI has led to robots being used in many areas of medicine over the past 30 years, ranging from laboratories to surgical robots that aid humans in performing surgery or execute operations by themselves. AI has been useful in the healthcare sector in many ways, such as helping to diagnose a patient’s diseases with minimal mistakes in the process. It ensures an accurate description of a patient’s data so that appropriate treatments, such as chemotherapy, can be administered as quickly as possible. In addition, AI has the ability to analyse large quantities of data, which can help discover patterns and trends, leading to new discoveries. IBM Watson, for example, is used to assist in diagnosing disease: it is based on a questionnaire system that analyses the patient’s history and symptoms to suggest a possible cause or treatment, and it can also respond to imaging tasks such as X-ray reading. Furthermore, AI allows those in training to go through naturalistic simulations and can use other trainees’ results to constantly improve these programs. Beyond scanning health records to help providers identify chronically ill individuals who may be at risk of adverse episodes, AI can help clinicians take a comprehensive approach to disease management, create better care plans, and help patients manage and comply with their long-term treatment programmes. Another example is its use in airports for thermal imaging; this is more common now due to COVID-19, as abnormal temperatures can be spotted within a crowd, helping monitoring systems protect people from catching diseases like COVID-19.

AI is also heavily used in space for tasks like mission planning. Planning a mission to Mars is not simple, so scientists utilise AI to examine data they have already gathered and prepare for missions. Technology allows quick access to information and creates data spreads to spot trends that may be useful for future missions. AI is also used in astronauts’ assistants: while robots cannot yet replace humans in space, intelligent assistants can support the crew. This is beneficial when exploring space because AI can potentially detect dangers such as changes in the spacecraft, for example increased carbon dioxide, or sense harmful malfunctions, allowing the crew to correct these dangers. These few examples demonstrate the importance of AI, as it can identify problems in situations and alert us to act upon them.

Advantages and disadvantages of AI: Advantages of AI systems combined with robotics include a very low error rate compared to humans when interacting with complex systems that require high skill, such as quickly solving problems. If programmed correctly, AI-aided robots have great accuracy and can speed up many processes. AI is also not affected by its environment, so it can complete dangerous tasks that would be harmful to humans. Moreover, robots can perform repetitive tasks more quickly than humans, as they do not require sleep, breaks, food or human interaction. Furthermore, AI has not been programmed to feel emotions, which would otherwise affect its decision-making. This can be seen as an advantage as well as a disadvantage, since it means AI cannot analyse the ethical side of certain situations.

Regarding the disadvantages, AI can be very expensive and can break down, which creates ethical and financial dilemmas. As previously stated, AI is not programmed with emotions, so it does not have the creative capability of humans. This could lead to many ethical dilemmas, including AI gaining access to personal information without the host knowing, or lacking the emotional understanding needed to process situations fairly. AI therefore has some disadvantages, mostly on the moral and ethical side of this topic.

Having mentioned the pros and cons of AI, many dilemmas around this topic remain. Do workplace robots pose a threat to the employment rate? This is one of the most common questions asked among people keeping up with AI. Some experts fear that robots will destroy more jobs than they create, causing problems for future generations. Research shows that currently 71% of total task hours are completed by humans, while the remaining 29% are done by machines (Choudhry, S., 2018, CNBC). By analysing the advancement of AI, scientists predict that if current trends continue, in just four years the average will shift to 58% completed by humans and 42% by machines. This shows how, at the rate AI is advancing, machines could simply take over many jobs, such as marketing managers, software developers, human resources managers and sales managers, alongside many other jobs that involve heavy computer work. An increase of AI in most workplaces could cause an increase in the unemployment rate, which could affect the economy due to fewer people being able to find jobs. On the other hand, by eliminating tedious jobs AI can free humans to pursue their aspirations, opening opportunities in the process; AI can also take over jobs that involve a high amount of organisation, like business development manager, data detective and ethical sourcing officer. One study shows that in the UK 63% of the adult population are uncomfortable with allowing personal data to be used to improve healthcare, and are unfavourable towards AI systems replacing doctors and nurses.

What do scientists think of the advancement of AI? Stephen Hawking stated that ‘the development of full artificial intelligence could spell the end of the human race’ (Hawking, 2014). He believed that the new advances are amazing and helpful, but feared the consequences of creating something that can match or surpass human intelligence. AI has the potential to outperform humans in most tasks, which could create reliance, consequently causing unemployment or other harms to humanity. He also believed AI could become independent and overpower humans in many sectors, such as marketing and HRM (human resource management). Stephen Hawking was not the only person to fear the advancement of AI: Elon Musk has also warned that AI is our ‘biggest existential threat’ (Musk, 2014).

On the other hand, Rollo Carpenter, the creator of Cleverbot (a program in which the computer talks to you as if it were human), stated that we are a long way from computers achieving full artificial intelligence, though he believes we could reach this stage within a few decades. His opinion suggests that AI will be a positive force for the future. While these are some explicit examples, many scientists have split views on AI, due to its power to be exploited or used beneficially in different situations.

Concerns are also appearing over the easy access to people’s data, as AI could access a person’s interests, credit history and more. AI and robots could exploit this information on their own and accidentally distribute confidential information. However, these issues could be addressed by enforcing tighter security and eliminating the possibility of security breaches, as having AI access certain information (like medical records) always carries a drawback when it comes to security and access.

AI has huge potential and is unbelievably smart, so if it falls into the wrong hands it could be used to threaten countries or worse. The biggest concern involves AI being used to carry out physical attacks on humans, such as hacking into self-driving cars and causing collisions. AI can be programmed in different ways, which can be used immorally, for example manipulating the public by delivering fake news. This can have a huge impact on society, as it can disrupt layers of society from politics to the media and cause disputes between people. This can be avoided by trusting AI to the right hands to be used beneficially, although this is difficult to guarantee in the long term, as some humans may use their understanding of AI to their advantage. There are many thoughts on what the future of AI will look like, and it could end up impacting every industry and every human being. Every company might use some sort of industrialised AI to aid research and work, as well as allowing more accurate and precise trends to be produced from data. I believe the use of AI could enhance our society and will open new chapters in science itself, although we should be careful of the exploitation of knowledge obtained by AI. Despite privacy concerns, I believe AI is an incredible way to limit repetitive work and mistakes, and to help advance a wealth of different areas of knowledge.


Memory: An Introduction to Memory-Related Diseases and How to Prevent Them

By Katherine Marriott, 9G

Where is memory stored and what is it? Memory is a record of personal experiences, and its study falls under what scientists call “cognitive psychology”. Roughly 100 years ago Karl Lashley, an American psychologist, began exploring the idea that multiple parts of your brain store memory. People generally reacted positively, as the topic was not widely researched. He investigated it by creating lesions in rodents’ and monkeys’ brains and putting them in mazes to find the way out. First, he taught the rats how to get through the maze without the lesions, and then one by one started making incisions in the prefrontal cortex with a soldering iron, as it was the only available tool at the time. To Lashley’s surprise, the rats were still able to make their way out of the maze and, even though he did not find the engram, he was able to conclude that the principle of “mass action” held. This implied that memory is not controlled by one particular part of the brain but by multiple parts: the prefrontal cortex, the amygdala, the hippocampus and the cerebellum all play different roles. The prefrontal cortex and the cerebellum play a big part in body movements, as they are always busy planning adjustments to your limbs and eyes. The amygdala controls emotions, emotional responses and motivation, while the hippocampus helps with memory recall and certain behaviours. Memory itself is divided into a number of categories.

Conscious memory is made up of episodic memory (events that have happened to you) and semantic memory (general knowledge about the world). Subconscious memory is made up of procedural memory, motor memory, and priming.

All these factors affect our day-to-day life in different ways; for example, motor memory influences your movement and reactions, like removing your hand after touching a hot object, while semantic memory tells you not to touch the hot item in the first place. These decisions occur at a subconscious level, and each one helps make us who we are and keeps us safe.

Even though memory is amazing, some problems have developed over time. Some examples of these are:

● Alzheimer’s disease - a progressive disease caused by damaged or tangled nerves which gradually lose connection over time.
● Vascular dementia - caused by a shortage of blood flow in the brain; the second most common form of dementia.
● Lewy body dementia - caused by the breakdown of brain tissue, which causes abnormal protein deposits to form, consequently giving symptoms of dementia.
● Frontotemporal dementia - death of the nerve cells near the front of the brain, resulting in shrinking nerves.
● Amnesia - generally associated with short- and long-term memory; typically caused by trauma, brain tumours, and even a collection of fluid in the brain (hydrocephalus).
● Stroke - a blood clot in the brain.

Despite there being no cure, certain treatments have been proven to help. For milder cases of Alzheimer’s, doses of rivastigmine (Exelon) can be taken (as capsules or liquid) or applied as a patch on the skin. This drug boosts levels of a brain chemical called acetylcholine, which is responsible for allowing nerve cells to communicate. For more severe cases of Alzheimer’s, and for Lewy body dementia, a medicine called donepezil is used. It has a similar effect, but works by preventing the action of acetylcholinesterase, the compound that normally breaks acetylcholine down.

How to improve your memory: there are many things that could help you improve your memory; some examples are:

● Cardio - exercise, particularly cardiovascular exercise, improves memory as it increases blood flow to the hippocampus, which is crucial for learning and memory recall.
● A good snack - if there is enough glucose in your snack, it is proven to help consolidation and turn short-term memory into long-term memory, affecting the hippocampus.
● Socialising - a 2017 study showed that adults around the age of 80 who had good relationships had cognitive function and ability similar to people in their 50s or 60s. Some scientists even go as far as saying that friends create “social pressure” which forces us to take care of ourselves, and means we would be told if there is a problem.
● Reducing stress - a degree of anxiety can actually promote cortisol, which helps with the formation of long-term memories. Cortisol is also linked to the fight-or-flight response, as it curbs non-essential functions to better your performance. At the same time, be wary, as too much stress can impair memory.
● Vitamin E - a study done in 2004 showed that it has antioxidant properties, and a study in 1997 showed that it delayed dementia milestones.

In conclusion, memory is a difficult and complicated yet interesting topic, with many things left to be discovered. Personally, I think not enough research is put into the general topic of memory, as there are many incurable diseases affecting people’s daily lives.

working self-sufficiently, independent of Earth. Although the idea of finding life on Mars is intriguing, there are other factors which propel governments and individuals to heavily invest into the possibility of the colonization of Mars. Also, such advances in space discovery can be applied in other fields of study on Earth. One example is a computer algorithm, developed by NASA in 1993, with the goal to better extract information from low-resolution images from the Hubble Space Telescope, which was successfully used to accurately detect early stages of cancer. However, political and economic ramifications of Mars colonisation are significant and may cause countries to compete for control in space. Irrespective of similarities between Mars and Earth such as moons, polar ice caps, liquid water under the surface and a day only 37 minutes longer than that on Earth, there are many significant differences. As the atmosphere of Mars is only 1% as dense as Earth’s, comprising 95% carbon dioxide and less than 1% oxygen, not only can people not breathe, but there is an average temperature of -60oC and little protection from cosmic rays which drastically increases the health risks. Hence, in order to combat this problem scientists on Mars would have to live in capsules covered in a layer of frozen CO2, which can be sourced from the atmosphere, and a mound of rocks on top to reduce the effects of radiation on crew members. The cabins would have to be pressurised and filled with an artificial atmosphere, being supplied with oxygen and nitrogen, as well as having circular capsules in order to reduce the stress of differences in pressure from the inside and outside of the cabin. Due to the large amounts of radiation from the sun, for most of the astronauts' time on Mars, they would have to remain inside, using robots outside to complete various tasks. However, prolonged periods of time on Mars may cause malfunctions in the

Mars - our new Home? By Daria Gal, 9G The universe has always been a source of fascination for humans, who, over the centuries, have tried to learn more about who or what lies beyond the Earth. However, in the past century, space explorations were mostly focused on short-term projects as opposed to long-lasting programs that are being developed nowadays. Some countries are currently considering the colonization of Mars, where there would be a permanent living base for mankind, 7


The ambitious plans to colonize Mars by 2050 still pose a plethora of challenges, as many features would be essential for the survival of astronauts. One problem society would have to combat is the supply of food on Mars, since Martian soil contains high levels of chlorine, is alkaline and lacks vital nitrogen. Crops grown on such soil suffer a significant decline in the chlorophyll content of their leaves, reducing the size of the plant both above and below ground, and the accumulation of concentrated perchlorates in the leaves makes them harmful for humans to ingest. Therefore, many scientists are considering first decontaminating the soil: although this is a tedious, energy-intensive and expensive process, the soil, once fertilised with biological waste, could be used in self-sufficient greenhouses on Mars to provide astronauts with fresh food. Water is also a significant problem for astronauts on Mars; on a short-term basis it could be collected from the ice caps at the poles and filtered, making it safe to drink, although this is not a sustainable, long-term solution. Furthermore, the amount of energy needed to carry out many of the tasks on Mars is significant. While solar energy may sometimes be efficient on Mars, the frequent dust storms, which can obscure sunlight for several days, reduce the effectiveness of this energy source. Instead, many organisations, such as NASA, are considering nuclear power as an optimal source of energy and aim to implement radioisotope thermoelectric generators as a source of electricity, since these convert the heat emitted from radioactive decay into electricity.

In addition, there are many measures that the astronauts will have to take in order to remain as healthy as possible while on Mars. Regular exercise would become a priority, because the low gravity on Mars, roughly a third of that on Earth, can cause muscle wastage, bone loss, osteoporosis and cardiovascular problems. As a result of the undue psychological strain of confined spaces for extended periods of time, and the possible trauma, astronauts will have to undergo intense psychological screening before accepting the mission.

Dust is a further hazard, to machinery as well as to people and robots, as Mars dust is much finer than dust on Earth and may become lodged in gears, reducing the effectiveness of the machinery. Moreover, the dust is electrostatically charged and dry, so it can easily stick to space suits if people were to venture outside; since the soil is toxic (containing high concentrations of perchlorate compounds), the dust could cause lethal exposure if introduced into the cabin.

Despite the gleaming opportunities and discoveries that surround Mars exploration and colonization, many feel that we should not begin to disregard the problems on Earth: increased investment in space exploration limits the finances that could be directed to preserving and improving life on Earth. However, I believe that mankind should work together, combining resources and expertise to overcome challenges and extend the frontiers of our understanding of space.



Hypnosis: can we control someone’s mind? By Aisha Mahmood, 9G We have all heard about people using hypnosis, and undoubtedly seen it many times in the media. But can we control someone’s mind, and how does hypnosis work? First of all, we need to understand what hypnosis is: hypnosis is a condition (a state of mind) which makes people more susceptible to suggestion. It focuses a person’s attention so intensely, into a trance, that they ignore everything around them.

Hypnosis in Nature: Hypnosis can be found naturally. Some animals “hypnotise” their prey (in order to eventually consume them). One species of cuttlefish, the broadclub cuttlefish, is well known for hypnotising its prey 4. It flashes colours at its target, and this induces a trance in the victim. It is an incredibly unique and complex process.

Hypnosis in humans: Can a human be hypnotised? Technically, yes, but not in the way most would think. Humans can be influenced to do certain things but it is not always 100% effective and the participant being hypnotised has to be willing. Essentially, a hypnotherapist can confuse someone's brain into being more likely to do an action.

Gene editing technology (CRISPR) and hypnosis: Could a genome editor take an organism's hypnosis gene and put it into a human? In the future this may become possible. With recent advances in gene editing technology such as CRISPR 5, the possibility seems closer. As well as this, only 5% of the ocean has been explored, leaving plenty of opportunity to find an effective hypnotism gene.

What happens to the brain during hypnosis? When someone is hypnotised, activity levels change in several areas of the brain: there is a decrease in activity in part of the salience network called the dorsal anterior cingulate 1; an increase in connections between the dorsolateral prefrontal cortex and the insula 2; and a decrease in connections between the dorsolateral prefrontal cortex and the default mode network, which includes the medial prefrontal and posterior cingulate cortex 3. In summary, this change represents a disconnect between someone’s actions and their awareness of those actions; it focuses the brain on what the hypnotherapist is saying.

Alternative ways to change a human’s brain patterns Electroconvulsive therapy Electroconvulsive therapy is a treatment that involves sending an electric current through a person’s brain, causing a brief surge of electrical activity within it. The aim of the treatment is to relieve the symptoms of certain mental health problems; it is more commonly known as electroshock therapy. Usually, someone resorts to this treatment when no other type of therapy has worked. Side effects include nausea, headaches, low blood pressure and memory loss (ranging from minutes to hours). According to a BMJ review of 35 studies, 20 reported memory loss as a consequence of electroconvulsive therapy. There is no conclusive evidence, however, that electroconvulsive therapy leads to structural brain damage.

Where is hypnosis used? Hypnosis can be used as a tool in therapy, or simply for enjoyment. In therapy, it clears the mind for deeper processing and acceptance. Hypnosis can also encourage people to change their habits and behavioural patterns, and some use it as a form of entertainment, e.g. in public performances. How does a hypnotherapist hypnotise someone? A hypnotherapist usually takes their patient into a quiet and comfortable room and asks them questions about their background and what they want to achieve from being hypnotised. Then, using repetition and a low, soothing voice, they ask the patient to take deep breaths and focus their gaze on an object. There are many other methods for hypnotising a person; one is called a ‘hypnosis staircase’, where with each step someone goes down, they inhale and exhale. Ultimately, with these techniques combined, the patient becomes relaxed and clears their mind.

Microchips Artificial-intelligence-based microchips could be used to supplement neural activity. Another type of microchip is the work of scientists at the Defence Advanced Research Projects Agency, a branch of the US Department of Defence which develops new technologies for the military; this chip is said to change the mood of the person it is implanted in.

4 This link will take you to a video of the cuttlefish hypnotising its prey: https://www.youtube.com/watch?v=l1T4ZgkCuiM
5 CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats. It is a gene editor able to find specific parts of DNA inside a cell, which scientists can then edit. CRISPR has also been adapted to do other things, like turning genes on or off without altering their sequence. It is not currently able to put an animal's gene into a human, but this could become plausible in the future.

Researchers from the University of California (UC) and



Massachusetts General Hospital (MGH) designed the chips to use artificial intelligence algorithms that detect patterns of activity associated with mood disorders. Once such a pattern is detected, the chip automatically shocks the patient's brain back into a healthy state. However, the reliability of these microchips may be an issue in the future.

In conclusion, the human brain is an extremely complex organ and, in the future, more research will go into studying how it works and developing an understanding of one’s mind. There are many possible technological advances, and no one knows what will be achieved. Although the technology is still somewhat primitive, we can technically control someone’s brain; however, it will take time and more advanced technology to develop and utilise this skill.

How is Quantum Mechanics used in Quantum Computing? Sophie Claxton, 12KO Computing is the manipulation of data. In classical computing, data is stored in bits (short for binary digits), the smallest unit of data, each of which takes one of two discrete values, 0 or 1, corresponding to the electrical states off and on respectively. In terms of computation, individual bits are often used to represent the Boolean values True (1) and False (0). This allows Boolean operations to be performed on bits by sending them through logic gates, such as AND, OR and NOT gates. The important fact about a bit in classical computing is that it has only one value; it is either 1 or 0, and therefore no matter how many times it is measured, there will only ever be one result: 1 or 0.

In quantum computing the smallest unit is the qubit (short for quantum bit). A qubit can be either 1 or 0, just like a classical bit. However, it can also be both 1 and 0 at the same time, due to a condition known as superposition. When a qubit is in superposition, the act of measuring it makes it change state to either 1 or 0, which fundamentally changes the qubit. Furthermore, if you measured 100 qubits prepared in the same superposition, one after the other, you would observe one of the two results for each: some qubits in one state and the rest in the other. The state a qubit changes to when measured is governed by a fixed probability, which in turn is defined by the qubit's superposition. The final difference between classical bits and qubits is that qubits can be linked to one another through a phenomenon known as entanglement.

These properties allow for different methods of computation using quantum logic gates, resulting in what are referred to as quantum algorithms. Quantum algorithms can solve some difficult problems much faster than classical algorithms, and even make some previously unsolvable problems solvable. Although not every problem can be solved faster by quantum methods, a qubit is able to take the form of a classical bit, which allows a quantum computer to solve all the problems that a classical computer can. As a result, quantum computing is often referred to as the true nature of computing.

Measurements: One interesting analogy for taking quantum measurements, and how they defy the expectations set by classical mechanics, is the quantum clock. Imagine a clock you cannot see; you are allowed to ask the clock whether the hour hand is pointing to a specific time, e.g. 3 o’clock. If the clock were normal, it would most likely tell you that the hand is not pointing to 3 o’clock: there are many other directions the hand could be pointing in, so the likelihood that it is pointing exactly to 3 o’clock is very low. The strange thing about the quantum clock, however, is that it will answer either yes, the hand is pointing to 3 o’clock, or no, it is pointing opposite 3 o’clock, at 9 o’clock. Taking a measurement of a quantum particle will always result in one of two answers, and these answers depend on the measurement being taken. If you ask the clock whether the hand is pointing to 12 o’clock, then the answers are yes it is, or no, it is pointing to 6 o’clock.

One experiment that demonstrates this is the Stern-Gerlach experiment, conducted in 1922 using silver atoms. Silver atoms behave like small bar magnets with a north and south pole (as they have an odd number of electrons). The experiment involves sending a stream of silver atoms through a pair of magnets (one above the other) and observing whether the atoms are deflected upwards or downwards by looking at where they land on a screen. What was expected was a continuous line of silver atoms on the screen from a maximum to a minimum point (from the idea that atoms with magnetic axes in different directions would be deflected by different amounts). However, this was not the case. Instead, atoms were deflected to only two points on the screen, the maximum and the minimum. This means that when the atoms are measured, they act as if their magnetic axes are vertically aligned. A similar result is obtained if the apparatus is rotated 90°; the atoms then act as if their magnetic axes are horizontally aligned.
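The probabilistic collapse of a qubit can be sketched in a few lines of Python. This is only an illustration, not a real quantum simulation: it assumes we can model the collapse by drawing a random number against the qubit's fixed probability of landing on 1.

```python
import random

def measure(p1):
    """Measure a qubit whose probability of collapsing to 1 is p1.
    The outcome is always a definite 0 or 1, never 'both'."""
    return 1 if random.random() < p1 else 0

# Prepare 10,000 qubits in the same 50/50 superposition and measure each.
random.seed(0)  # fixed seed so the run is repeatable
results = [measure(0.5) for _ in range(10_000)]
ones = sum(results)
print(f"measured 1: {ones} times, 0: {10_000 - ones} times")
# Each individual measurement gives a definite answer; only the
# statistics over many measurements reveal the 50/50 superposition.
```

This mirrors the point in the text: a single measurement never shows the superposition directly; it is the distribution over many identically prepared qubits that does.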

An important aspect of measuring quantum particles is that the act of measuring changes the state of the particle. Returning to the quantum clock analogy, by asking if the hand is pointing to 3 o’clock, the hand changes so that it is pointing to either 3 o’clock or 9 o’clock. We know this is true because if we ask the clock repeatedly whether the hand is pointing to 3 o’clock, the answer will always be yes, or will always be no, it is pointing to 9 o’clock.

Quantum gates are applied so that when the calculation is performed and the qubit is measured, it will give a useful result 100% of the time. Entanglement: Quantum entanglement is a phenomenon where the state of one particle is intrinsically linked to the state of other particles; that is to say, the states of the particles cannot be described independently of each other. Entanglement is a strange concept, to the point where Einstein himself referred to it as “spooky action at a distance”.

Furthermore, if we had 10 clocks and asked each if its hand was pointing to 3 o’clock, and each one answered yes, and we then asked each clock if its hand was pointing to 12 o’clock, around 5 of the 10 clocks would answer yes and the other 5 would answer no, the hand is pointing to 6 o’clock. In other words, there is a 50% chance that a clock observed to be pointing at 3 o’clock, when then asked if it is pointing at 12 o’clock, will say yes.

The act of measuring an entangled particle destroys the entanglement. Take the example of Alice and Bob, who have each been given a qubit. If Alice and Bob’s qubits are not entangled, they can be described separately, and there is no effect on Bob’s qubit if Alice measures hers. However, if their qubits are entangled, their states are described together and, if Alice measures her qubit, Bob’s qubit is affected: it changes to one of two new superpositions. The strangeness comes from the fact that Alice’s actions affect Bob’s qubit instantaneously, no matter how far apart Alice and Bob and their qubits are.
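The perfect correlation between Alice's and Bob's outcomes can be sketched classically. To be clear, this toy model only reproduces the anti-correlation of a maximally entangled pair; it does not capture the genuinely quantum statistics that distinguish entanglement from a shared hidden variable:

```python
import random

def measure_entangled_pair():
    """Toy model of a maximally entangled (anti-correlated) pair:
    each outcome is individually random, but the two are always
    opposite, however far apart Alice and Bob are."""
    alice = random.choice([0, 1])
    bob = 1 - alice   # Bob's outcome is fixed the instant Alice measures
    return alice, bob

random.seed(1)
pairs = [measure_entangled_pair() for _ in range(1000)]
print(all(a != b for a, b in pairs))       # True: always opposite
print(sum(a for a, _ in pairs))            # Alice alone sees ~50/50 noise
```

Alice's results on their own look like coin flips; only comparing the two lists reveals the link, which is why entanglement cannot be used on its own to send a message.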

Writing Qubits and Superposition: Qubits, by definition, are unit vectors, written mathematically as a superposition of two basis vectors in the form a₁𝑏₁̂ + a₂𝑏₂̂, where 𝑏₁̂ and 𝑏₂̂ are the basis vectors and a₁ and a₂ are the probability amplitudes 6. The basis vectors define how the qubit will be measured, and the square of each probability amplitude gives the probability that, when measured, the qubit will jump into the state of that basis vector. As (a₁)² and (a₂)² are probabilities, they must total 1, which is why the qubit is referred to as a unit vector.
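The normalisation rule above is easy to check numerically. As a simplification, the sketch below uses real amplitudes only (in general probability amplitudes are complex, in which case the squared magnitude |a|² is used instead):

```python
import math

# A qubit a1*b1 + a2*b2: the squared amplitudes are the measurement
# probabilities and must sum to 1 (the qubit is a unit vector).
a1, a2 = 1 / math.sqrt(2), 1 / math.sqrt(2)
p1, p2 = a1**2, a2**2
print(p1, p2, p1 + p2)   # 50/50 superposition, probabilities sum to 1

# An unequal superposition is fine too, as long as it is normalised:
c1, c2 = math.sqrt(0.9), math.sqrt(0.1)
print(math.isclose(c1**2 + c2**2, 1.0))   # 90%/10% outcomes, still a unit vector
```

Any pair of amplitudes whose squares do not sum to 1 does not describe a valid qubit state, which is why quantum gates must be length-preserving (unitary) transformations.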

However, some physicists, including Albert Einstein, were unsatisfied with this model (the Copenhagen interpretation). Einstein’s view centred on local realism: the idea that a particle can only be influenced by something in its vicinity. Consequently, he believed, as many others did, that there must be hidden variables to explain the apparently instantaneous sharing of information between entangled particles, which contradicted his theory of relativity. In 1964, John Bell wrote a paper describing his famous inequality, showing that a theory of hidden variables could not reproduce all the predictions of quantum mechanics in particular circumstances. Not only was Bell’s inequality ground-breaking, it was testable: the first experiment was conducted in 1972, but it wasn’t until 2015 that results were produced with high statistical significance and no loopholes.

Returning once more to the clock analogy, we could describe the clock as being in the state a₁𝑏₃̂ + a₂𝑏₉̂, where 𝑏₃̂ and 𝑏₉̂ are the basis vectors for the hand pointing to 3 o’clock and to 9 o’clock, meaning that if we were to ask the clock whether the hour hand was pointing to 3 o’clock, there would be a probability of (a₁)² that the clock would answer yes. If we were to describe the clock as being in the state (1/√2)𝑏₃̂ + (1/√2)𝑏₉̂, and again asked it whether the hour hand was pointing to 3 o’clock, this time there would be a probability of ½ that the clock would answer yes. Finally, if we were to describe the clock as being in the state 1𝑏₃̂ + 0𝑏₉̂, then when asked whether the hour hand was pointing to 3 o’clock we would get yes with 100% certainty.

The entanglement of qubits is an essential part of quantum computing. As a result, many quantum algorithms use certain quantum gates to entangle qubits together before other operations are performed. This allows for more complex computations, as operations are carried out on the entangled pair instead of on individual qubits. However, before the qubits are measured they must be unentangled and sent through logic gates that manipulate each qubit’s superposition so that the measured results are certain.

Quantum mechanics is fundamentally very different from classical mechanics and in many ways counterintuitive. Interacting with quantum systems is also completely unlike classical mechanics: the act of taking a measurement has a huge effect on the subject of the measurement, and the results are probabilistic, a big departure from classical mechanics, where everything is deterministic. The idea that a particle is in a superposition of two states and jumps into one of them when measured has no classical equivalent.

This last statement is important because when qubits are measured, it is necessary that they are in a state whereby the result of the measurement is certain. Doing lots of calculations is pointless if the result received is partly down to chance. Therefore, in quantum computing, quantum gates are used to manipulate the superposition of qubits through linear transformations and entanglement.

6 A probability amplitude is a complex number used to describe the behaviour of quantum systems.



Newton’s law of universal gravitation, despite having been displaced and disproved, is one such example. The law was created during the 17th century and stated that every particle attracts every other particle in the universe with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. However, the theory of universal gravity has since been superseded by Einstein’s theory of general relativity and disproved by the study of black holes. The star S0-2 and its nearest black hole, Sagittarius A*, were observed by scientists in order to test Newton’s theory and produced results that disproved it (Tanzim Pardiwalla, 2019). Since then, Einstein’s improvement on this law has been largely accepted as the current description of gravity in physics; it states that the observed gravitational effect between masses results from their warping of spacetime (Wikipedia Contributors, 2021). However, further research into black holes suggests that Einstein’s theory could also be incomplete, as it does not explain how gravity works in and around a black hole, where light and matter cannot escape due to the strong gravitational fields (Tanzim Pardiwalla, 2019). Despite being an outdated theory, Newton’s law of gravity is still taught today before Einstein’s. Consequently, our understanding of gravity originates from a disproved theory, and the current description of it is still theoretical, creating the possibility that our understanding of gravity is completely incorrect.

Entanglement, likewise, has no equivalent in classical mechanics. The ability to utilise quantum mechanics in computing opens up many possibilities, but quantum computing is still very much in its infancy. Quantum computers are likely to be far more powerful than classical computers and as such could lead to breakthroughs in healthcare, financial strategy and security. Quantum computers are difficult to develop: there are many technical challenges in creating a system that preserves the quantum mechanical superposition of the qubits during computation and only collapses them into a classical state for measuring. It is also difficult to scale a quantum computer up to a greater number of qubits, and much harder to initialise the values of qubits than of classical bits. At the current time, quantum computers have not reached the stage where they can perform computations that a classical computer could not do given enough time. When quantum computers can compute things classical computers cannot, the age of quantum supremacy will truly begin.

Disagreements in quantum mechanics are another example of fact versus theory, with specific emphasis on Bohr’s and Einstein’s theories. Bohr suggested that ‘it was meaningless to assign reality to the universe in the absence of observation’ and that, when not being measured, quantum systems exist only as possibilities (PBS Space Time, 2016); this is known as the Copenhagen interpretation. Comparatively, Einstein proposed that ‘quantum systems had a reality independent to our observation of the systems’, known as an objective reality (PBS Space Time, 2016). Independently, both theories make sense of quantum mechanics, but they show obvious contradictions, prompting the question: can either theory be correct? In an attempt to disprove the Copenhagen interpretation, Einstein, with two other physicists, created a quantum scenario in their paper on the ‘Description of Physical Reality’. This scenario suggested that in order for Bohr’s theory to be correct, and realism to be abandoned, locality, the concept that each part of the universe only acts on its immediate surroundings, would also have to be abandoned. This would disprove Bohr’s theory, as abandoning locality creates the paradox known as quantum entanglement: Bohr’s theory would suggest that if one particle in an entangled pair was measured, the measurement outcomes of the other particle would be changed, as the system would collapse, no matter the distance between the particles.

Is our Understanding of Science based more on Facts or Theories? Imogen Laurence, 12OF Much of what we are taught in physics is theoretical and has the possibility of being incorrect, yet it is often taught as if it were absolute fact. In this essay I will discuss how much of physics we actually know versus how much is theoretical, and illustrate our dependence on theories to develop our understanding of the subject. I will look specifically at Newton’s law of gravity, quantum mechanics, the Big Bang theory and the SUVAT equations to help answer this question.

This abandonment of locality seemed preposterous to Einstein. In 1964 an experiment was proposed by the physicist John Stewart Bell, using entangled electron and positron pairs created from a photon.

One example of a theory, or law, that is taught in schools is Newton’s law of universal gravity, despite it having been superseded.


The direction of the spin 7 of one particle would be measured, then the other. Since they are in an entangled pair, the spins will be opposite, and the effect of observing the first particle's spin on the second particle would resolve the debate between Bohr and Einstein. In the 1980s Bell’s experiment, carried out by the French physicist Alain Aspect, observed the entangled polarisation of photon pairs instead of the spins of an electron and positron. The results showed there was a correlation between the polarisations, and therefore spins, of particles in an entangled pair, meaning Einstein’s theory wasn’t correct. By implication, therefore, Bohr’s theory was correct; however, it is still a theory and abandons widely accepted concepts like realism and locality, so may nonetheless be incorrect. Since quantum mechanics lies at the root of most modern physics, and the theories we use can be disproven and contradicted, the majority of our understanding of physics could be incorrect, yet it is taught as fact.
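The size of the quantum violation Bell-type experiments look for can be computed directly. The sketch below assumes the standard quantum-mechanical prediction for a spin-singlet pair, where the correlation between measurements along directions a and b is E(a, b) = -cos(a - b), and uses the usual CHSH combination of four angle settings; any local hidden-variable theory must keep |S| ≤ 2:

```python
import math

def E(a, b):
    """Quantum prediction for the singlet correlation between
    measurement directions a and b (angles in radians)."""
    return -math.cos(a - b)

# Standard angle choices that maximise the quantum violation:
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83, exceeding the classical bound of 2
```

The gap between 2.83 and 2 is exactly what Aspect-style experiments measure: observing |S| above 2 rules out local hidden variables, which is the result described above.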

The gamma radiation emitted in the early universe has since been stretched by cosmic expansion into microwaves (longer wavelengths and lower energy). Furthermore, before the discovery of CMB radiation there was another popular theory for the origin of the universe: the Steady State theory. This theory suggests matter is created continuously throughout the universe, making the universe infinite (Tate, 2014). Until the discovery of the CMB offered support to the Big Bang theory (and therefore provided evidence against the Steady State theory), the Steady State theory could have been correct, raising the point that a popular and seemingly sound theory may nonetheless be invalid. Whilst multiple pieces of evidence suggest that the Big Bang did occur, making it increasingly likely that it was the origin of the universe, it is as yet unproven. It remains a theory and, despite possibly being incorrect (as the Steady State theory seemingly is), it is taught in schools as fact and widely accepted as the start of the universe. On the other hand, not all topics in physics are theoretical, and some, like relationships and equations, are derived through experimental data. A common and simple example is the set of relationships between time (t), displacement (s), initial velocity (u), final velocity (v) and acceleration (a), commonly known as the SUVAT equations.
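The claim that the SUVAT equations can be verified by experiment can even be illustrated with a numerical 'experiment': step a constant-acceleration motion forward in tiny time slices and compare the result with the equations' predictions (the values of u, a and t below are arbitrary illustrative choices):

```python
# Check v = u + a*t and s = u*t + 0.5*a*t^2 against a step-by-step
# simulation of constant acceleration.
u, a, t = 2.0, 9.8, 3.0    # initial velocity (m/s), acceleration (m/s^2), time (s)

# SUVAT predictions:
v = u + a * t               # final velocity
s = u * t + 0.5 * a * t**2  # displacement

# Tiny-timestep simulation of the same motion:
dt, pos, vel = 1e-4, 0.0, u
for _ in range(int(round(t / dt))):
    pos += vel * dt + 0.5 * a * dt**2   # exact update for constant a
    vel += a * dt

print(v, vel)   # both ~31.4 m/s
print(s, pos)   # both ~50.1 m
```

The simulated values agree with the algebraic predictions to within floating-point error, which is the computational analogue of the classroom trolley-and-ticker-tape experiment.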

Some concepts in physics are widely agreed upon within the scientific community but are unlikely ever to be proven. An example is the Big Bang theory, which is largely agreed to be the best theory for the origin of the universe. However, the Big Bang theory is still only a theory, as there is no existing proof that it happened; evidence only suggests so. Cosmic microwave background radiation (CMBR) and redshift (the Hubble-Lemaître law) are both proven concepts that act as evidence for the Big Bang theory, yet neither can prove it happened 8. Cosmological redshift, which shows that galaxies are drifting apart, and drift faster the farther they are from Earth, was first discovered in 1929 and supports the Big Bang theory; this movement of galaxies can be explained as a result of the explosion of the Big Bang and the conservation of momentum. Later, in 1964, microwave radiation was observed throughout the observable universe. This supports the Big Bang theory, as the radiation can be seen as the result of gamma radiation emitted by the explosion, which has shorter wavelengths and higher energy, being ‘stretched’ by the expansion of the universe.

The idea that there is a relation between these measurements was first discovered and used by ancient Greek philosophers and mathematicians, but wasn’t developed into the modern equations until the sixteenth and seventeenth centuries (although recorded experiments have been carried out since the 14th century) (Wikipedia Contributors, 2021). The equations can be demonstrated using simple experiments. These have been carried out, in order to teach and observe the relationships, for decades, possibly centuries, and were performed when our understanding of physics was much smaller than it is now, yet all valid experiments give the same results. Even in classroom conditions, where control variables are hard to keep constant, the relationships can still be shown to hold, demonstrating that the SUVAT equations, and some other concepts in physics, are demonstrably true. In conclusion, the study and development of physics largely depends on theories, as so little of the subject has been, or can be, undoubtedly proven. Whilst this does suggest the subject might be unreliable, our current understanding gives a fundamental picture of the universe and how it works, even if what is taught is largely theoretical. On top of this, since much of physics is theory,


it is crucial that these theories be taught, to provide possible answers to big questions (like how the universe was created) and to teach people how to develop theories. Moreover, it teaches us that it is okay to guess and to make mistakes, as without this process our understanding of the universe would be much smaller. Whilst it might seem counterproductive, teaching theories that might not be true is crucial to our understanding of physics.

In the third scenario, the ball is thrown faster than the escape velocity and completely escapes the Earth, approaching a constant speed the further it gets from Earth's gravity and drifting away forever. This symbolises a universe in which the gravity of all matter is smaller than the initial force (the Big Bang), so it keeps expanding forever; the amount of matter in the universe is only large enough to slow the expansion slightly. This is known as an open universe; parallel light beams diverge over time. These three shapes of the cosmos have different fates and, by calculating the amount of matter in the universe and measuring the expansion speed, we can estimate whether the force of gravity is large enough to reverse the expansion. Big Crunch: If the results of our observations point towards a closed universe, then we are fated for a Big Crunch. Despite how immediate this sounds, the first stages of the Big Crunch would not be noticeable. Initially, distant objects in the universe would still seem to be receding, because the finite speed of light does not allow us to see what is happening everywhere in the universe simultaneously. Following this, the galaxies closest to us would appear momentarily stationary and then start accelerating towards us, causing a domino effect on the surrounding objects; galaxies further away from us would begin to be affected, causing the radius of this circle to grow. While this is occurring, galaxies will start to collide more frequently, producing new stars, galaxies and planets, as well as by-products such as intergalactic gas and radiation. Simultaneously, the afterglow of the Big Bang, known as Cosmic Microwave Background Radiation (CMBR) 9, will become more concentrated, as the contracting universe increases the intensity of energy, which in turn increases the level of radiation; the Big Crunch is essentially the reversal of the Big Bang. Furthermore, the radiation produced by stars and black holes since they were formed will suddenly condense and blueshift 10.
The Big Crunch will cause intense energy levels that will destroy the surfaces of stars before they are even close to colliding. Once all matter has been reduced to its component particles, it is unclear what will happen at such high temperatures and densities, but it will be unpleasant.

How will the Universe End? Mia Mutaditch, 12OF Introduction to the cosmos: One approach cosmologists took to finding how the universe could end was to look at its shape. When astronomers explain the different shapes of the cosmos, they often use the analogy of throwing a ball in the air (where air resistance is negligible) to model the relationship between the initial force and gravity: the throw represents the Big Bang, and gravity in the analogy represents the gravity of all the matter in the universe. In the first scenario we have a ball thrown in the air at an ordinary speed: an initial force is exerted on the ball and gravity acts in the opposite direction to slow it to a stop and bring it back down again. This symbolises a universe in which the force of gravity from all matter is bigger than the force of the Big Bang, causing the universe eventually to stop expanding and re-collapse; this models a closed universe, in which parallel light beams converge over time. The second scenario is a ball thrown up at exactly 11.2 km/s (the escape velocity of Earth), which just escapes, always slowing, coming to rest only infinitely far in the future. This symbolises a universe in which the gravity of the matter in the universe and the initial force (the Big Bang) are balanced, resulting in a flat universe; light beams remain parallel over time. The third and final scenario is a ball thrown faster than 11.2 km/s.
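The 11.2 km/s figure used in this analogy is Earth's escape velocity, and it can be derived in a couple of lines from v = √(2GM/R). The sketch below uses standard textbook values for Earth's mass and radius:

```python
import math

# Earth's escape velocity, v = sqrt(2GM/R), from standard values.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m

v_escape = math.sqrt(2 * G * M / R)
print(f"{v_escape / 1000:.1f} km/s")   # ~11.2
```

The same formula applied to the universe as a whole is, loosely, what separates the closed, flat and open scenarios: whether the expansion speed is below, at, or above the cosmic equivalent of this escape velocity.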

9 Cosmic Microwave Background Radiation (CMBR) is the remainder of the electromagnetic radiation from the Big Bang that fills all space; it currently has microwave wavelengths, which are harmless. However, the Big Crunch could increase energy levels again, so waves with higher, more dangerous frequencies, like gamma, would be present.
10 Blueshift is a shift of electromagnetic radiation towards a higher energy level; it moves towards the blue side of the visible light spectrum, which has a shorter wavelength and higher energy. This is related to the Doppler effect, where the waves observed in front of a moving object are compressed, causing a higher frequency, a lower wavelength and an increased energy level, hence the shift towards higher-energy blue light.



As awful as this sounds, astronomers still need to find out whether or not it is a possibility. In regard to calculating the amount of matter in the universe, astronomers have come quite close, after taking into account not only visible matter but also the ‘invisible’ matter known as dark matter. Astronomers do have an estimate; however, the uncertainty it carries makes it difficult to establish whether the estimate is above or below the density needed to determine whether the universe will collapse. Moreover, regarding expansion speed, in the late 1960s the speed of expansion was measured by surveying a large number of galaxies and calculating both their physical distances and their speeds; astronomers produced a result suggesting the universe is fated to collapse. However, this result was inaccurate, having been produced using images from photographic plates, so in the 1990s astronomers recalculated the expansion speed of the universe by applying the ‘standard candle’ 11 method to supernovas; in the end, it indicated that the universe probably will not end in a Big Crunch, but in something worse.
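The density threshold the paragraph above refers to is the critical density, ρ_c = 3H²/(8πG): above it the universe is closed, below it open. As a rough illustration, taking a Hubble constant of about 70 km/s/Mpc (an assumed round value; measured estimates vary) gives:

```python
import math

# Critical density of the universe, rho_c = 3 H^2 / (8 pi G).
G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22   # ~70 km/s/Mpc converted to 1/s (1 Mpc = 3.086e22 m)

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"{rho_c:.2e} kg/m^3")   # ~9e-27, only a few hydrogen atoms per cubic metre
```

The striking smallness of this number, a few atoms per cubic metre, is why pinning down whether the real density sits above or below it is so hard: the measurement uncertainties discussed above are of the same order as the quantity itself.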

to virtual particles on their horizon until a final explosion and disappearance at the end. After this all particles of matter will start to decay as they are all unstable at some level, even protons will start to decay after 1033 years. By this point the universe is an exponentially expanding empty space with only a cosmological constant in it which is known as de Sitter space. This means it has become a maximum entropy universe as it only consists of radiation which is the product of decomposed matter with no restrictions on the flow of energy. From this point on, there is no way for the universe’s total entropy to increase and by interpreting the second law of thermodynamics (an isolated system’s total entropy can only increase as time goes on) the arrow of time is essentially gone as nothing can happen. As a result, nothing can exist as energy gradients (the moving of energy from one place to another) are the basis of life and they are no longer possible. However, another 101000 years will pass until this may happen so we have time to consider more abrupt endings like the Big Rip.

Heat Death: In 1998, when astronomers recalculated the speed of the universe, they came to the shocking conclusion that the universe is in fact accelerating; this makes no sense as this does not occur in any shapes hypothesised as they show the universe either decelerates or reaches a constant speed. Consequently, this discovery introduced the concept of dark energy which is the umbrella term for any hypothesized phenomenon that could make the universe accelerate. There are many theories about the form that dark energy takes in our universe and they lead to a different ending but the most likely theory based on observations is a cosmological constant. A cosmological constant is a property of spacetime that has a constant density. This means that the amount of dark energy increases as empty space expands to conserve the density. It appeared about 5 million years ago when the matter in the universe spread out due to cosmic expansion, it had ‘enough space’ to start working. This means the more space is available, the faster our expansion which creates an inescapable viscous cycle.

Big Rip: Although Heat Death is the most likely end to our universe, there is a possibility the universe could end sooner: dark matter is the deciding factor. In the last scenario we assumed that dark energy driving the expansion of the universe was a cosmological constant, however, like most of physics this is just a theory. In order to find out if dark energy behaves differently, scientists must calculate the equation of state of dark energy, also known as w. Astronomers know that dark energy has negative pressure and from a gravitational perspective pressure pulls (general relativity states pressure is another kind of energy like mass or radiation) so it warps space time to make it gravitationally attractive. But, as dark energy has negative pressure it can cancel out mass and because a cosmological constant has a constant density, which it maintains by increasing to fill empty space. If the presence of dark energy was a cosmological constant then it would be seen as the negative of density. Then, by using the equation of state parameter astronomers could find the ratio of pressure and density (in units in which comparison would make sense) to indicate whether dark energy is a cosmological constant as the value of w would have to equal -1. After this value was established, astronomers started calculating the size of w and came to the conclusion that it was in fact -1. However, in the early 2000s, astronomers realised they had made a false assumption.

The first stage of Heat Death is where more and more distant galaxies are dragged out of the Hubble radius (where the expansion speed is faster than the speed of light) so they disappear beyond our horizon. Behind this horizon, galaxies and other objects collide and merge to create clumps of stars that quickly die out. Black holes continue to grow for a period of time by engulfing any remnants that pass by them.

When plotting graphs people assumed that w could not have a value lower than -1 so physicist Robert Caldwell decided to explore this area and eventually created a theory for ‘phantom dark energy’: a form of dark energy with a value below -1 where its density is increasing over time. Caldwell and his colleagues set out to find w. However, it cannot be directly measured so astronomers compared the past expansion rate of the universe with models of different kinds of dark energy. This was successful and Caldwell concluded if w is infinitesimally lower than -1, then dark energy will tear the universe apart

Following this, new stars will also die out as nothing is there to fuel them so remaining stars will also burn out by either exploding as supernovae or cooling down over billions of years. Once all stars have faded and only darkness remains black holes will start to evaporate due 11 the comparison of a known luminosity (radiated electromagnetic power- light) with an object of observed luminosity and using this estimate along with the inverse-square law (when a physical quantity is inversely proportional to the square of the distance of the source from said physical quantity) to calculate the distance of an object in space

15


in a definable time. Unlike a cosmological constant which cannot break apart an existing coherent structure, phantom dark energy can and will in its first stage. By the time the speed of light has allowed us to see its effects, phantom dark energy will have separated giant clusters of galaxies and will continue doing so for the next 60 million years. Following this, we would notice stars on the edges of the Milky Way starting to drift away as they fail to round their expected orbits; this is the first sign our galaxy is evaporating and this process will continue for the next 140 million years. After this, the solar system will start to unbind as we notice planets breaking away from their orbits and drifting away from the Sun. After a few months our own planet eventually follows leaving us in complete darkness. An hour later, our atmosphere will begin to thin out and we would explode as our planet can no longer hold together. Subsequently, 10-19 seconds before the Big Rip the electromagnetic forces that hold together atoms and molecules can no longer withstand the ever-expanding space within all matter, causing them to crack open. Then the final stage occurs where the nuclei of these atoms break apart and the impossibly dense cores of black holes decay away before the fabric of space is ripped apart. Despite the grisly ending, unlike with Heat Death we can calculate the Big Rip using w using the value so -1 to determine how far into the future the Big Rip is; the closer to -1, the further into the future it is pushed. Fortunately, last time w was calculated it was predicted the earliest it could happen would be in 200 billion years, so we don’t need to worry just yet. However, there is another plausible scenario with the latest results from the most precise fundamental physics experiments ever performed, that could happen at any moment.

the bottom of the Higgs field potential, there could be another vacuum state at a lower part of the potential making our current vacuum only metastable. As seen in the diagram to the right, there could be another vacuum state at some even lower part of the potential called the true vacuum which could transition there via a high-energy event (fluctuations) or quantum tunnelling. A transition to the true vacuum would create a bubble: an infinitesimally small speck that has completely different physics and is the more stable state which the universe prefers. This would cause the bubble to quickly engulf the space surrounding its wall of extremely high energy. The unfortunate part, apart from the fact that we can't do anything to stop this almighty bubble, is that it moves at the speed of light so no one would be able to see it. If it approached you from underneath there would be a few nanoseconds during which your brain will think it is looking at your feet when they no longer exist; you wouldn't have noticed as the true vacuum is so large it's not even worth worrying about it. The other alternative is quantum tunnelling. Quantum mechanics states that you never really know where a particle is or, more relevantly, what path it's taking as it travels. This means when the math is done you have to calculate all paths it takes and everything about those paths. However, the data shows particles can possibly go through impassable barriers in the intervening time. This can also occur with fields like the Higgs field where the true vacuum is separated by ours with a potential barrier it can tunnel through. Luckily, the probability for a tunnelling event can be calculated based on the physical characteristics of the system. The chance of this event occurring is quite low, estimating its arrival in about 10100 years, at which point we would already be well into Heat Death or obliterated by a Big Rip. 
Our last hope is tiny black holes which could be nucleation sites for bubbles of true vacuum, however, they must be very small. This seems extremely unlikely as only stars bigger than 8 times the Sun can form black holes and the only way they can lose mass is by Hawking evaporation; a process that takes millions of years and even a black hole the size of the sun would take 1064 years to reach this size. Some cosmologists believe the extreme densities of the Hot Big Bang could have formed them, but the existence of life makes it implausible. Even if the LHC made black holes, which could only happen with extra dimensions of space, there is a threshold of mass that these black holes need to meet and the ones we could create are safe. So, even though this sounds awful it is probably the least likely to happen as we can't yet confirm if our vacuum is metastable. However, the possibility of vacuum decay would be incompatible with the theory of cosmic inflation as the quantum fluctuations during inflation would have been sufficient to trigger vacuum decay. As this didn't happen, we either don’t understand the early universe, or vacuum decay was never possible at all but don’t you worry, the next and final scenario has us questioning even more of our theories.

Vacuum Decay: The Large Hadron Collider (LHD) is the most powerful particle collider in the world which has enabled physicists to challenge their theories and alter the laws of physics by (attempting to) closely replicate the conditions of the early world. Its high energy collisions have allowed us to refine the Standard Model of particle physics and more significantly discover the Higgs boson. The Higgs boson is a localised excitation of the Higgs field which pervades all of space and determines the mass of a particle through the strength of the field's reaction with said particle. This energy field appeared somewhere during the Quark Era of the Big Bang and started interacting with most particles (apart from photons or gluons) once the electroweak force was separated into the electromagnetic and weak force through spontaneous symmetry breaking. Currently, however, it is in a ‘vacuum state’ which makes it harmless and allows molecules and structures to form and carry out the chemical processes of life as the masses and charges of particles are perfectly set; if the Higgs field had some other value, we may not be able to exist. To understand vacuum decay, we must first understand the concept of a potential. A potential was created when electroweak symmetry breaking first occurred and represents the value of a field that can change, dictating the Higgs field. Although we think we are safely settled at 16


Bounce: If you thought vacuum decay was hard to visualise, this scenario may be the most abstract of all, featuring extra dimensions of space, the growth of our cosmic map and the attempt to finally grasp the concept of gravity. Gravity is very different to the other forces. It's weak, yes. But when you get together large enough masses for a galaxy or planet it seems strong yet we overcome gravity everyday by picking something up. One idea that solves this problem is that gravity is leaking into another dimension (also known as the ‘large extra dimensions’ scenario). In relativity there are four dimensions: north-south, east-west, up-down and time, but in this scenario, there are others we can't access so, to make it easier to conceptualise, we limit our universe to a 3D 'brane'. This brane is like a large sheet of thin paper that sits in the ‘bulk’ (extra space surrounding it) with gravity being the only thing existing in both. The theory suggests gravity produced by massive objects in our own space loses a bit of its strength because it leaks into the bulk, however, the extra dimensions are so small compared to our ordinary ones the leakage isn't noticeable unless it is measured at millimetre distances. By measuring the amount of gravity expected from equations and how much is in that millimetre you can identify how much leakage is happening, if any. Although this theory is highly unlikely, the concept of ‘branes’ has been developed and altered and significantly affects an interesting and well-known model of our universe called the ekpyrotic scenario. This is an alternate scenario to cosmic inflation: the theory that there was a period between the singularity and the quark era where the universe expanded and essentially zoomed in on a part of the universe that came into equilibrium. 
Some part of the universe must have come into equilibrium at some point for the universe to be so uniform as there hasn't been enough time for the farthest points of the universe to interact with each other due to the speed of light. However, the ekpyrotic scenario suggests that the early universe was ignited by the collision of two adjacent 3D branes, one of which later became our universe as they collided causing them to fill with hot dense plasma followed by an unimaginable intense inferno. After this the branes went their separate ways slowly drifting apart across the bulk, expanding and cooling down in their own ways and then after a while coming back and bouncing off of each other to retrigger the cycle. This is a cyclical scenario where the creation and destruction of the universe infinitely repeats itself and the detection of gravitational wave signals from the hidden brane could be strong evidence to prove, at the least, that extra dimensions exist. On the other hand, the detection of primordial gravitational waves (gravitational waves from the Big Bang) would prove cosmic inflation. Although is it possible the signals produced are too weak to compete with cosmic dust so we don't really know.

contraction before it bounces away from the hidden brane. Although the ekpyrotic scenario explains the uniformity of the universe, we don't know if there is a singularity at the beginning of each cycle. There are many versions of this theory and the most recent has little contraction so no singularity occurs; the contraction of the branes is instead driven by a scalar field relating to the Higgs field or possibly to what we think might have caused inflation. Another version for this theory is centred around the fact the universe started with a shockingly low entropy. It explains that just before the bounce, the early universe took the entropy from a tiny patch of the hidden brane and set that as its initial entropy for our universe. This version also interprets the end of our universe as we are obliterated before the universe gets too crowded and hot, causing another Big Crunch to start a new Big Bang. This version can also be supported by Roger Penros’ solution which suggests our Big Bang was born from the previous universe’s Heat Death as the low entropy could be used as the new universe’s starting point. However, there are many theories including ones mentioning the possibility of gravitational waves passing from one cycle to the next as the cosmos never gets truly small during the bounce. Even Neil Turok has a new and different theory suggesting our universe and a time reversed version of the cosmos meet at the Big Bang like two cones touching tip to tip. But who really knows, these are just theories! Conclusion: To conclude, we have almost definitely eliminated the Big Crunch, at least a cosmic inflation inflicting one, due to the accelerated expansion of the Universe proving that the shape of the cosmos is not closed. The Big Rip is only a possibility if dark energy is not in the form of a cosmological constant and is instead ‘phantom dark energy’. But it is currently unlikely aside from a small chance of it happening in millions of years. 
Although Vacuum decay is the best sounding end, it is more likely to happen than a Big Rip as we still don’t know whether the current potential of the Higgs field is a false vacuum or not. In addition, we don’t know if a trigger could occur as a lot of the quantum theories contradict cosmic inflation so it is the least likely to happen. A bounce ending is also quite vague as there are so many competing theories building on the ekpyrotic model, proving the original concept is already hard enough, let alone finding the exact theory to explain it all. This leaves Heat Death, so far, the most likely and reliable end to the universe, however, there is still so much time until this happens that hopefully we will have formed a better understanding of the cosmos and the laws of physics in the extreme conditions of the early universe.

Whether the ekpyrotic scenario is real is still a mystery, however, the big difference between this theory and cosmic inflation is where inflation has the accelerated expansion of the universe to solve a number of cosmological problems, the ekpyrotic model has the slow 17

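The 11.2 km/s figure in the ball analogy follows from Newtonian gravity. A minimal sketch (constants are standard values; the function name is ours, not the article's):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """Speed at which kinetic energy equals gravitational binding energy:
    (1/2)mv^2 = GMm/R  =>  v = sqrt(2GM/R)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
print(f"Earth escape velocity is about {v / 1000:.1f} km/s")  # about 11.2 km/s
```

A ball thrown slower than this returns (the closed universe of scenario one); exactly at this speed it barely escapes (flat); faster and it escapes with speed to spare (open).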

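The claim that the closer w is to -1, the further off the Big Rip can be made concrete with the approximation published by Caldwell, Kamionkowski and Weinberg (2003): the time remaining is roughly (2/3)/(|1+w| H0 sqrt(1 - Omega_m)). The parameter values below (w = -1.5, H0 = 70 km/s/Mpc, Omega_m = 0.3) are illustrative assumptions, not figures from the article:

```python
import math

W = -1.5            # phantom equation-of-state parameter (assumed)
H0 = 70.0           # Hubble constant, km/s/Mpc (assumed)
OMEGA_M = 0.3       # matter density fraction (assumed)

# Convert H0 to inverse gigayears: 1 Mpc = 3.086e19 km, 1 Gyr = 3.156e16 s
H0_PER_GYR = H0 / 3.086e19 * 3.156e16

def time_to_big_rip(w: float) -> float:
    """Approximate Gyr until the Big Rip for phantom dark energy (w < -1):
    t_rip - t0 ~ (2/3) / (|1 + w| * H0 * sqrt(1 - Omega_m))."""
    return (2.0 / 3.0) / (abs(1 + w) * H0_PER_GYR * math.sqrt(1 - OMEGA_M))

print(f"w = {W}: roughly {time_to_big_rip(W):.0f} Gyr to the Big Rip")
```

Note how the formula diverges as w approaches -1 from below, matching the article's point: a w only infinitesimally past -1 pushes the Big Rip arbitrarily far into the future.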
What stops a Piece of Paper from Being a Mirror? Rahel Bahtemichael, 8S Most people know that when you look into a mirror you see a reflection of yourself, and if you look from different angles you'll see different images. This is because the surface of a mirror reflects light better than most objects, which is not the case for a plain piece of paper: looking at a piece of paper, you will only ever see one thing, a sheet of white. This may be obvious, but have you ever thought about what stops a piece of paper, or anything in general, from being a mirror? A piece of paper reflects around 70-85% of light, while a mirror can reflect 85-99.9%; the worst mirror could be equivalent to the best piece of paper. So what is the reason? Well, the simplest explanation is that there are two types of reflection: specular and diffuse. To understand this concept, remember that the law of reflection always holds: the angle of incidence is equal to the angle of reflection. Specular reflection happens in mirrors and other objects that are smooth even at the microscopic level. If you think of a group of light beams hitting the surface of a mirror, all the rays will reflect in the same direction. By contrast, when a group of light rays hits a microscopically rough object, the rays reflect in different directions, because the surface is imperfect: each point that the different rays hit sits at a different angle and position. But is that the full explanation?

Well, light splits in two at any surface. There is the reflected light, which obviously has been reflected, and then the transmitted (refracted) light. There is always transmitted light, but what separates, say, a mirror from a window is how far the light can travel through the material. This is called the penetration depth. The light doesn't disappear all at once; it drops off gradually, falling by a factor of e with each penetration depth. The penetration depth of a silver mirror would be around 1.4 nanometres (about 14 atoms), whereas the penetration depth of glass is 156.5 metres, a very big difference. Most electrical conductors have a low penetration depth, meaning light cannot travel far through them. There is another reason too, otherwise all insulators would be transparent. Atoms give off their own light, but they need energy to do this. So the incoming light is absorbed and used, either in resonant absorption or dissipative absorption. We will focus on resonant absorption: because the atom is made of charge, when a photon comes along it drives the atom's motion through its electromagnetic field, causing the atom to vibrate at the same frequency as the photon. The vibration is accelerated charge, and accelerated charges emit light. So every electron is emitting photons but, as most of these cancel out, you are left with the reflected and transmitted light. Furthermore, when an electron is in its lowest possible state it is said to be in the ground state, and when it is not, it is said to be in an excited state. When an electron absorbs energy from its surroundings (neighbouring atoms or photons) it jumps to a higher state. An atom can only absorb certain amounts of energy, so only certain wavelengths of light are absorbed; this is what gives an insulator its colour. If a photon is absorbed by an electron with the same vibrational frequency, the electron absorbs the energy of the photon and transforms it into vibrational motion, which is then transformed into thermal energy.

So, to recap, there are three main reasons that stop paper from being a mirror. Firstly, the surface of the material has to be microscopically smooth, so that specular rather than diffuse reflection occurs. Secondly, the material has to be a good electrical conductor, as conductors have low penetration depths. And thirdly, the light can't match an energy gap of the atoms, because then it becomes heat, not a reflection.

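The factor-of-e drop-off described above is ordinary exponential attenuation. A minimal sketch, reusing the article's quoted 1.4 nm penetration depth for silver (the function name is ours):

```python
import math

def transmitted_fraction(depth: float, penetration_depth: float) -> float:
    """Fraction of light intensity remaining after travelling `depth`
    into a material: I / I0 = exp(-depth / penetration_depth)."""
    return math.exp(-depth / penetration_depth)

SILVER_DELTA_NM = 1.4  # penetration depth quoted in the article

# Intensity surviving after 1, 3 and 5 penetration depths
for n in (1, 3, 5):
    frac = transmitted_fraction(n * SILVER_DELTA_NM, SILVER_DELTA_NM)
    print(f"after {n} penetration depth(s): {frac:.3%} of light remains")
```

After just five penetration depths, under 1% of the light survives, which is why almost everything hitting a good conductor comes back out as reflection.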

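The point that an atom absorbs only certain wavelengths, giving an insulator its colour, can be made concrete with E = hf = hc/lambda. The 2.3 eV gap below is a made-up illustrative value, not one from the article:

```python
# Photon wavelength matching an atomic energy gap, via E = hc / lambda
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def absorbed_wavelength_nm(gap_ev: float) -> float:
    """Wavelength (nm) of the photon whose energy matches an energy gap."""
    return HC_EV_NM / gap_ev

# A hypothetical insulator with a 2.3 eV gap absorbs around 539 nm (green),
# so the light it reflects and transmits is missing that colour.
print(f"absorbed wavelength: {absorbed_wavelength_nm(2.3):.0f} nm")
```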
An introduction to Stem Cells: How Stem Cells Sparked a Medical Revolution Selena Ali, 11G Stem cells are a revolutionary discovery for scientists and medics across the globe and have the potential to completely change our medical world. They are a type of cell that can proliferate and then differentiate into a specific cell type. The two main types of stem cells are adult cells and embryonic cells.

Embryonic stem cells are pluripotent stem cells derived from the undifferentiated inner cell mass of a human embryo. They are remarkable for two properties: the ability to carry out mitosis and divide indefinitely, and the ability to differentiate into any of the roughly 220 adult cell types. Adult stem cells (also referred to as somatic stem cells) are found in the body after development and are congenic (genetically identical apart from one fixed position on a chromosome where the genetic material differs). These can similarly differentiate into mature adult tissue. However, the list of adult tissues so far discovered to contain stem cells is limited: bone marrow, peripheral blood, brain, spinal cord, dental pulp, blood vessels, skeletal muscle, epithelia of the skin and digestive system, cornea, retina, liver and pancreas. Currently, the only routine treatment stem cells provide comes from haematopoietic stem cells. These form blood cells and are found in the bone marrow; they are used in bone marrow transplants, for leukaemia and Fanconi anaemia, and to help cancer patients regenerate blood cells after radiation and chemotherapy.

The reason behind such limited practical usage of stem cells is that stem cell research and studies come with many challenges. Firstly, it is nearly impossible, in an organism as complex as a human, to grow adult stem cells in the quantities needed for treatment (NIH stem cell information home page, 2001). Another issue is ethical: embryonic stem cells pose controversy surrounding the destruction of an embryo. Many argue this research destroys opportunities for an embryo that cannot defend itself, yet most embryonic stem cell research is carried out on donated embryos that would otherwise go to waste. One of the biggest concerns regarding the clinical use of embryonic stem cells is their potential tumorigenicity. For some, this is no surprise, given their innate capacity for rapid proliferation and the fact that pluripotency is assessed by teratoma formation; but considerable studies have noted they also accumulate mutations and chromosomal deviations in culture. Moreover, compared with adult stem cells, it is believed embryonic stem cells are in some respects more similar to cancer cells than to regenerative cells, posing a worrying hurdle (David A Prentice, 2019). Stem cell treatment also poses difficulties in itself: there is the technical problem of graft-versus-host disease associated with allogeneic stem cell transplantation, the risk that the body will reject embryonic stem cells, and the risk that incorrect growth or contamination of adult stem cells grown in the lab will pass on disease and cause further problems.

However, science and technology are only at the very beginning of stem cell discovery, and it is certain that in the future stem cell research will advance and transform the medical world. Researchers have already proposed many ways stem cells may create new treatments. For example, embryonic stem cell therapy may play a major role in regenerative medicine and tissue replacement for a number of conditions, such as genetic diseases, cancers and disorders of the blood and immune system; juvenile diabetes; Parkinson's; blindness; spinal cord injuries; Alzheimer's; rheumatoid arthritis; and heart transplants. Stem cells from amniotic fluid are being investigated to create tissue-engineered implants for babies with birth defects, while stem cells from the umbilical cord are used to build muscle and combat muscular dystrophy. Embryonic stem cells are also helpful for investigating early human development, studying genetic disease and toxicology testing. A deeper understanding of how the body develops and repairs damaged cells may allow scientists to replicate the process and hence solve many issues by creating new organs, tissues and cells.

In conclusion, despite being a new concept, stem cell research on both embryonic stem cells and somatic cells is at the forefront of the medical world and is certain to dramatically change treatment for a wide variety of human diseases.


The Problem with Breast Implants Annika Malhotra, 12MLB Breast implants are a type of plastic surgery involving the implanting of a material, typically silicone or saline, into a woman's breast in order to change its shape and (typically) increase its size. While this procedure has been very popular, many cases have arisen where breast implants have had severe ramifications. These are not only physical but also mental, a field which scientists and researchers have started to recognise and study due to its growing significance in our current society. It is important to note that while this essay's aim is to outline the problems of breast implants, there are also many positives, and it should not be taken as an indicator that the procedure is bad and should never be done.

Breast Implant Illness (BII) is a term used to describe a variety of problematic symptoms following breast augmentation. It may appear with all forms of breast implants: saline, silicone, smooth-surface, rough-surface or teardrop-shaped. The physical symptoms include 'joint and muscle pain, chronic fatigue, breathing problems, rashes, gastrointestinal problems and dry mouth and eyes' (breastcancer.org). Women may experience these symptoms directly after their surgery or for up to years post-surgery. BII is not an official diagnosis, owing to its poor recognisability and a lack of research, meaning women who experience it are usually uninformed about what solutions exist. It typically results in removing the implants and surrounding capsule entirely, a procedure called an 'en bloc capsulectomy'. This in turn brings further complications, as with any surgery, leaving the patient in a vicious cycle of problems.

Another problem with breast implants is the reasoning behind women getting the surgery. In our current day and age, society has certain expectations of women, specifically of the way they look. The thin waist, wide hips, slim thighs, large breasts and everything 'perfect', with no cellulite or stretch marks, has created the predisposition that women must always uphold a 'perfect' image. This is furthered by celebrities and influencers showing their followers that this is 'naturally' what they look like, feeding their fans' insecurities. These are huge factors behind the success of the plastic surgery market, which can exploit vulnerable women into believing they need to alter their appearance in order to feel content with themselves. While this is not always the reason for plastic surgery, it is certainly one large factor. Breast implants are a common surgery: from October 2016 to June 2018, '20,095 patients have had at least one operation', making it the most common cosmetic procedure performed on women (NHS, 2018).

Many women have pre-existing mental health problems which influence their decision to get breast implants; these can include body dysmorphia and eating and mood disorders. Body dysmorphic disorder (BDD) is a 'body-image disorder characterized by persistent and intrusive preoccupations with an imagined or slight defect in one's appearance' (adaa.org). It typically affects both female and male teenagers and adolescents; BDD affects 1 in 100 young people aged 5 to 19 in the UK alone (natcen.ac.uk). Many BDD sufferers turn to plastic surgery in order to 'fix' their illusory 'problems', leading them down a rabbit hole of crippling body issues, never able to achieve a perfect or desired outcome. Surgery is often preferred over therapeutic intervention by BDD sufferers, which can sometimes lead to surgery addiction and further mental health problems; 6-15% of all plastic surgery patients have also been diagnosed with BDD (Panagiotis Ziglinas, Dirk Jan Menger and Christos Georgalas, 2013). For breast implants specifically, getting the surgery, repeatedly in some cases, can lead to the onset of BII symptoms, meaning the patient now has not only mental health problems but physical ones too.

Studies have also suggested a correlation between having breast implants and an increased probability of a woman committing suicide: from '10 years after surgery, a woman is 4.5 times more likely to commit suicide and 6 times more likely 20 years post-surgery' (mentalhelp.net). While it is not physically the materials of the implants that are responsible, depression, body dysmorphia, low self-esteem, alcoholism and drug addiction, among other factors, can be held accountable for these figures, showing, in the most extreme of cases, the dire consequences breast implants can have on a woman.

While breast implants can have many positive outcomes, such as improved self-esteem and greater confidence, the ramifications can be very harmful to women. The problems range from internalised psychological issues to ones perpetually ingrained into modern society, but what they all have in common is their ability to target women and take advantage of their vulnerable states.


Why is there a Mistrust of Medical Professionals and Healthcare in Black Communities? By Maya Shah, 12MLB

This essay came second in the GDST’s #700STEM Scientific Writing Challenge.

Black communities have suffered years of enslavement, racial segregation, and structural racism in every aspect of their lives, whether in the criminal justice system or at their local GP. This history of persecution has led many Black people to feel as if they cannot trust state bodies to care for them: healthcare is no exception. There is a particular mistrust of healthcare, established through centuries of unequal and inadequate medical care and through exploitation under the guise of medical research.

Systemic racism permeates every societal institution, including healthcare, resulting in the sometimes-fatal mistreatment of Black patients and causing wariness of medical professionals. In 2021, Black women are still four times more likely to die in childbirth than White women in the UK. Although socio-economic factors could explain this disparity, a report using data from the United States Department of Health and Human Services concluded that Black, middle-class women were more likely to die in childbirth than White, working-class women. This undermines explanations based on socio-economic inequalities.

Much of the inadequate care Black people receive is due to centuries-old racist fallacies about the Black body which are still believed by many medical professionals. In a 2016 Princeton University study of 222 White medical students and residents, about 50% reported that at least one of the fallacies was possibly, probably, or definitely true. These included: nerve endings are less sensitive in Black people than in White people; White people, on average, have larger brains than Black people; Black skin is thicker than White skin. These widespread beliefs are the result of bigoted doctors who, in the 1800s, performed heinous, non-consensual experiments on Black slaves to substantiate the racist belief that the Black body is intrinsically different from the White body. Their beliefs were presented as fact and published in medical journals and, as the 2016 study showed, are still perceived as truth by many. This racist pseudoscience has had serious, adverse consequences on the treatment that Black people receive, as well as deepening Black communities’ distrust of healthcare.

Whilst structural racism is a significant cause of disparities in healthcare, racist ideology and violations of Black people have formed the foundation of modern medical and anatomical understanding. Intergenerational trauma, a result of medical exploitation, has led to a fear of public health services and officials. Throughout the 1800s, the bodies of Black slaves were utilized as ‘anatomical material’ in medical schools, and their dissected bodies were exhibited without consent across America. The lauded ‘father of gynaecology’, J. Marion Sims, performed extensive and painful operations on Black female slaves without anaesthesia; he also believed that Black people were impervious to pain. Even after the abolition of slavery, Black people were dehumanised and reduced to objects of medical research, as depicted by the notorious ‘Tuskegee Study of Untreated Syphilis in the Negro Male’. The study, conducted by the US Public Health Service, ran from 1932 to 1972. It consisted of 600 African American men, 399 of whom had syphilis and were told they were being treated for ‘bad blood’, a term used to refer to a variety of ailments. To trace syphilis’ progression, the researchers provided no care, despite penicillin becoming the recommended treatment for syphilis in 1947; they watched as the men went blind, developed mental disorders or died of their untreated syphilis. 128 participants died from syphilis or related complications. Taking into account this history of exploitation, Black communities’ lingering suspicions about healthcare and health officials are understandable.

When discussing this mistrust of medical professionals, it is important to acknowledge the events and inequalities that have led to its establishment, instead of regarding it as paranoia or a lack of education. Recently, this has been reflected in research showing that 72% of Black Britons are hesitant to get the Covid-19 vaccine, despite the Black community being one of the hardest hit by the pandemic. Ultimately, until medical bodies accept the existence of structural racism in healthcare and act to overcome these disparities, Black communities will continue to feel that the government does not have their wellbeing and health in their best interests, perpetuating the cycle of mistrust.


Contents

How and why do animals experience similar mental illnesses to humans? (p. 1)
AI and the Future it Holds (p. 4)
Memory: An Introduction to Memory-Related Diseases and How to Prevent Them (p. 6)
Mars - our new Home? (p. 7)
Hypnosis: can we control someone’s mind? (p. 9)
How is Quantum Mechanics used in Quantum Computing? (p. 10)
Is our Understanding of Science based more on Facts or Theories? (p. 12)
How will the Universe End? (p. 14)
What stops a Piece of Paper from Being a Mirror? (p. 18)
An Introduction to Stem Cells: How Stem Cells Sparked a Medical Revolution (p. 19)
The Problem with Breast Implants (p. 20)
Why is there a Mistrust of Medical Professionals and Healthcare in Black Communities? (p. 21)

