ISSUE 9
HILARY 2021
EMERGENCE
HAVE YOU THOUGHT ABOUT...
A CAREER AS A PATENT ATTORNEY?
An intellectually challenging and rewarding career option
What Does It Involve? Training as a Patent Attorney is a career path that will enable you to combine your understanding of science with legal expertise. You will leave the lab environment yet remain at the cutting edge of science and technology, applying your knowledge and skill in a commercial context. You will help to protect intellectual property assets and grow businesses.
Jenny Soderman MChem in Chemistry University of Oxford (2018)
Sound Interesting? J A Kemp is a leading firm of UK and European Patent and Trade Mark Attorneys with offices in London, Oxford, Cambridge, Paris and Munich. Deadline for our Autumn 2020 intake: 10 January 2020
Hiro Shimazaki MBiochem in Biochemistry University of Oxford (2018)
www.jakemp.com/careers
Editorial
The storm of Covid, doom, and gloom is passing by – and a ray of hope streams through the clouds of darkness. A rainbow appeareth in the sky, as a sign of hope – a kind of covenant that the flood and flurry of fear, social distancing, and masks will come to an end. Humanity, which has gone through the dark days before science, two world wars, and many an economic downturn, has been hit with yet another challenge. But if experience has taught us anything, we will undoubtedly emerge stronger, smarter, and more resilient than ever before: as Francis Bacon once said, “By far the best proof is experience”.

As the ants surface from their heap, humans seem to be slowly emerging from lockdown – slowly congregating, gradually populating the streets once more, and seeing the light of day. Our economies are emerging from the deep abyss, stocks are at highs, and technologies that were once overlooked, such as AI and Biotech, are dominating our newsfeeds and fascinations alike. Far from the fears of mass extinction, hope is in the spring air as the daffodils bloom and the bluebells chime to the tune of Easter. Science, after all, is the key to a harmonious society.

As the Oxford University student science magazine, we were happy to report on the Oxford-AstraZeneca COVID-19 Vaccine, which was not only developed in record time, but also made to be affordable, effortlessly stored, and distributed across the globe. On pages 18-20 we bring you an interview with Dr Sean Elias, a researcher from The Jenner Institute, the focal point of vaccine research in Oxford. In this extraordinary term, we are proud to say, Annuncio vobis gaudium magnum! Habemus Vaccine! (I announce to you a great joy! We have a Vaccine!)

This year has taken an extraordinary form for everyone – not least for the OxSci team. With articles ranging across the disciplines of the sciences and the humanities – from philosophy to epidemiology, from the history of science to biology, from music to science and religion – it is our hope that this interdisciplinary interplay will be a beacon of light for the emergence of AI, Big Data, and the fourth-industrial-revolution world we will soon inhabit.

As Editors-in-Chief, we would like to thank the senior editorial team, contributors, artists, and subeditors for their stellar effort, working ardently through socially distanced Zoom call after Zoom call to put together this magazine. The diversity of our team – chemists, psychologists, theologians, physicists, biologists, and all manner of other specialties – is a good reflection of the future of science emerging in our midst. We’re sure you’ve all heard before that science is like a puppy, growing and emerging in our midst. And this is the future of science in which we can play a part.

The Editors,
Sea Yun Joung
& Laura-Bianca Pasca
Cover Artwork by Creative Director, Anoop Dey
Contents
5 – Emergence of a New Field: AI in Biotech – Evan Turner
6 – A Sex-Y Story: The Emergence, Evolution and End of the Y Chromosome – Clarissa Pereira
8 – Rainbows and Refraction: Emergent Hope and Colours – Mason Wakley
10 – Searching Our Blood for Answers – Toscanie Hulett
12 – Extant Extinction: How Do We Emerge Alive? – Harry Savage
14 – Communicating the Covid-19 Vaccine – Athanasios Patsalias
16 – Schools Essay Competition Winner – Rohit Antonygnaneswaran
18 – The Oxford-AstraZeneca Vaccine: An Interview with Sean Elias – The OxSci Editorial Team
21 – The Key to a Harmonious Society – Rosa Parker
22 – Leave It to the Ants – Jake Burton
24 – From Reasoning to Revolution: The Emergence of the Scientific Method – Helen Collins
26 – The Constraints of Free Will – Bianca Pasca
28 – Transformation of Global Attitudes Emerging from the Covid-19 Pandemic – Sarya Fidan
30 – Emergence from Cholera: How Physics Saved London – Anna Lappin

Editors-in-Chief: Laura-Bianca Pasca & Sea Yun Joung
Print Editor: Laura-Bianca Pasca
Web Editor: Sea Yun Joung
Comment Editor: Gideon Bernstein
News Editor: Tom Leslie
Creative Director: Anoop Dey
Broadcast Director: Kunal Patel
Schools Coordinators: Gavin Man & Natasha Harper
Subeditors: Anezka Macey-Dare, Anna Lappin, Connor Forsyth, Dulcie Havers, Emma Hedley, Enrico Nardelli, Katie Zhang, Jia Jhing Sia, Mason Wakley, Susanna Pahl, Yexuan Zhu
Artists: Alex Kahn, Andrea Vale, Anoop Dey, Clarissa Pereira, Daffodil Dhayaa, Emeric Claudiu, Eve Dickie, Juliet Shapiro, Lauren Louise Keiller, Sofia Thomas
OSPL Members: Felix O’Mahony, Stefan Kalpachev, Joseff Howells, Justin Lim, Amelia Smith, Alex Beukers, Sruthi Palaniappan, Toni Quadri
Artwork by Sofia Thomas
Emergence of a New Field: AI in Biotech
Evan Turner uncovers the endless prospects and possible dangers of an emergent field
Artificial intelligence (AI) is one of the most powerful technologies shaping modern society. Its use in biology has created a revolutionary and fast-growing research area. The fruits of this marriage have the potential to change medicine for the better, but also to drive global catastrophe and potentially the end of human civilisation.

AlphaFold is an excellent example of how AI can drive breakthroughs in biotech. Developed by Google’s DeepMind, AlphaFold is an AI that can accurately predict how proteins fold. Proteins are complex molecules essential to life. Their 3D structure determines their function. We can easily read the DNA “blueprint” for a protein, but predicting the shape of the actual protein is a challenge that has stumped scientists for decades. AlphaFold can produce predictions with comparable accuracy to the gold standard experimental techniques.

Many diseases are caused by misfolded proteins. By knowing exactly what a certain misfolded protein looks like, scientists can develop a drug to treat the disease it causes much more quickly. Whilst traditionally this would take years to establish, AlphaFold can do it in a day. This has the potential to rapidly accelerate drug development. In fact, AlphaFold was able to accurately model the shape of several proteins associated with COVID-19 before traditional techniques could.

Synthetic biology is an exciting, but potentially harmful, emerging discipline. Government ministers have called it ‘one of the most promising areas in modern science’, and it occupies the minds of both scientists and ethicists. In synthetic biology, scientists rewire living organisms and program them with new functions. This, for example, could mean engineering bacteria that clean up oil spills or generate biofuels from waste plastic. AI is at work here as well. Arzeda is a company that uses AI to design and build proteins from scratch. Generally, these have been applied to relatively benign uses such as producing artificial sweeteners or pesticides. Scientists at Berkeley have developed an algorithm that assembles combinations of such proteins to build more complex biomolecules, like machines in a factory producing cars. Their first version was able to improve production by 43%, compared to the best human designs.

The threat of synthetic biology is in its versatility. The technology truly has the potential to produce ‘virtually any chemical’, as Arzeda claims, from cheap cancer drugs to nerve agents. Bacteria that could be grown with kits similar to those for home-brew beer could be making sarin, a deadly neurotoxin. The price of equipment to do these things is dropping rapidly and the application of AI will only accelerate this. Correspondingly, synthetic biology has a growing community of DIY ‘biohackers’, who experiment with new forms of life in garden sheds and community biolabs. This grassroots citizen science will undoubtedly lead to start-ups and technologies that will improve our lives, but it also makes it easier for someone with bad intentions to access potentially dangerous technology.

These issues are growing in recognition; at home, the Oxford Future of Humanity Institute examines existential risks like synthetic biology. Ultimately, regulation of the tools and people involved in biotech will need to be developed to safeguard the future and foster positive developments.
Artwork by Emeric Claudiu Ardelean
A Sex-Y Story: The Emergence, Evolution, and End of the Y Chromosome
Clarissa Pereira considers the Hows, Whats, and Whys of Y

Almost every single aspect of our physiology is determined by our DNA—from the colour of our skin, to the way we think, and even our chances of developing certain diseases. DNA influences the development of our bodies throughout life, and especially during the crucial first weeks of embryonic growth, when we undergo sex determination.
As any basic biology textbook will tell you, mammalian DNA is organised into several long strands known as chromosomes. Humans have 23 pairs of chromosomes: 22 pairs of identical twins (the autosomes) and one fraternal twin pair that look almost nothing alike (our sex chromosomes). If you imagine every letter of your DNA typed out into a series of books, each gene would comprise a chapter and each chromosome would be a separate volume. This view of our genetic code doesn’t need to be trapped in your imagination either; if you’re ever in London, I’d highly recommend visiting the Wellcome Collection and gazing in astonishment at the Library of the Human Genome—an expansive bookcase filled with these letters of life.

What a textbook might neglect to mention, however, is that our chromosomes are far from organised collections of information. Geneticists can only dream of genes being neatly grouped based on function. Instead of one chromosome controlling our eye development and another determining our brain biology, genes are littered haphazardly throughout each chromosome—messy manuscripts that take decades to decipher. The only striking exception is the Y chromosome, with its streamlined profile of genes, almost all of which are responsible for maleness. We now know that the key player of male sex determination is a small sequence of DNA on the Y chromosome: the SRY gene.
The Gender Chromosome Gap

Clearly, the Y chromosome plays a dominant role in sex determination. Yet it seems to pale in comparison to its X counterpart in terms of its structure: it has fewer functional genes, a shorter stature and many more mutations. The journey to understanding its uniquely poor structure starts with tracing back to its emergence in our genome.

Prior to the evolution of mammals, sex determination did not follow a chromosomal blueprint. Rather, it was decided by environmental factors. For example, an alligator’s sex depends entirely on the temperature at which its egg is incubated. As a result, every one of their chromosomes belongs to a matching set, as they lack distinctive sex chromosomes. At some point, very early in the evolution of mammals, an ancient form of the SOX3 gene, involved in brain development pathways, randomly mutated to form the SRY on a single autosome. This was the first step in a long path of mutations, a growing divide between the SRY-containing autosome and its counterpart—culminating in two very different sex chromosomes. From comparing our genomes to those of early mammals, such as marsupials, geneticists have identified that the modern Y chromosome shares just four genes with its much larger autosomal ancestor. How did we get from an ancestral Y with 2000 functional genes to a stunted chromosome with just 55?
Mate or Mutate
The answer lies in the surprising fact that the Y chromosome itself is ‘asexual’ — it doesn’t recombine with the X chromosome during the cell divisions that produce sperm cells. During sex cell production, each autosome ‘crosses over’ with its twin, swapping segments of DNA to create unique, new chromosomes for future offspring. This chromosomal mating, termed recombination, also serves a vital role in eliminating mutations in the autosomes of sex cells—preventing mutations from being passed on to future generations.
Recombination works well in autosomes and in XX chromosome pairs as they each have an identical twin. However, since the development of the SRY gene, X and Y chromosomes have been unable to fully recombine during sperm cell production, as they aren’t identical. While X chromosomes can recombine with each other over their full length, all that modern X and Y chromosomes have left are little regions at their tips that are capable of crossing over. Without recombination, our Y has gradually acquired a slew of deleterious mutations—filling the chromosome with ‘junk’ DNA and shortening it through deletion mutations. As the Y chromosome mutates, it becomes less able to recombine with its X counterpart and continues to lose genes: a vicious cycle of shrinkage.

Trapped in this cycle of mutating without much recombination, is the Y chromosome headed for extinction? Again, we can turn to our marsupial ancestors for answers. Dasyurid marsupials, such as the Tasmanian devil, have a Y that has been whittled down to just 10 megabases in length—a little genetic fragment containing just SRY and a few sperm-producing genes. This degradation is taken to the extreme in bandicoots. These marsupials use their minuscule Y chromosome (holding nothing but SRY, the last in a long line of seemingly disposable genes) for sex-differentiation and gonad formation, before eliminating it entirely from all other embryonic cells. Such a small chromosome, essentially a single gene, is at great risk of further loss.

But fear not—evolution is not planning to dispense with the SRY entirely! Only two species have ventured beyond this point in chromosomal evolution: the Ryukyu spiny rat and the Transcaucasian mole vole. Their Y chromosome has diminished to the point where it has been taken up by its X counterpart through the process of translocation. Now, instead of XX females and XY males, the sex-determination system is XX females and XX’ males (where X’ is an identical X chromosome with the addition of the SRY gene). The X’ will not be able to entirely recombine with the X and will gradually accumulate mutations—diverging in function and shortening to produce a new Y chromosome. In this way, the same Y chromosome evolution described earlier can renew itself cyclically in a sex chromosome cycle.

Fast-forwarding a couple of hundred million years, human sex-determination may morph entirely, doing away with the Y chromosome or forming an entirely new one. One thing is clear, however: the SRY gene, and males, are here to stay.

Artwork by Clarissa Pereira
Rainbows and Refraction: Emergent Hope and Colours
Mason Wakley reflects on the origins of rainbows
In the first lockdown, many windows were beautifully decorated with rainbows of all shapes and sizes. They became synonymous with hope and community in a time that often felt bleak and never-ending. Whilst there were not many rainbows stuck to windows during the winter lockdown, there was still hope of seeing a rainbow in person with the low-lying sun and rain showers that many of us experience in the UK during this season. But how do rainbows form?
Sunlight is split into its colours through the process of refraction as it passes through water (refraction being the bending of a ray of light as its speed changes on passing from one substance to another). Despite the common belief that water droplets are shaped like teardrops, they are in fact spherical, as surface tension pulls them into this shape. As light enters a water droplet, the range of wavelengths that make up the white light from the sun refract at slightly different angles. At the back surface of the droplet, the light is internally reflected, before being refracted once again as it leaves the droplet. Due to the difference in emergent angle for different wavelengths, we see these rays as a vivid spectacle of ordered colour from red to violet.
The rays are detected when the observer is looking at the sky at an angle of approximately 40° to the ground, and only a limited collection of water droplets are able to achieve this angle. These specific water droplets lie on the surface of a cone with the observer at the tip, leading to the familiar arc shape of the rainbow. This also explains why a full-circle rainbow can occasionally be viewed – but only if the observer themselves is in the sky (i.e. on an aeroplane), as the ground is no longer in the way.

[Diagram: light from the sun entering a spherical water droplet – refraction at the surface, internal reflection at the back, and refraction again on exit; inset: an incident ray meeting a glass block at angle θ1 and refracting to angle θ2.]
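For readers who want to put numbers on the diagram above, the bending at each surface is described by Snell’s law – a standard result that the article does not quote explicitly:

\[ n_1 \sin\theta_1 = n_2 \sin\theta_2 \]

Here \(n_1\) and \(n_2\) are the refractive indices of the two media (about 1.0 for air and 1.33 for water, where \(n = c/v\) compares the speed of light in a vacuum with its speed in the medium), and \(\theta_1\), \(\theta_2\) are the angles measured from the normal. Because \(n\) for water varies slightly with wavelength, each colour exits the droplet at a slightly different angle – a spread of only about two degrees between red and violet, but enough to fan the white sunlight out into the band of colours described above.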
In magical, yet quite common, circumstances, a double rainbow can also be formed. This happens if the light is reflected twice within the water droplet, causing the order of the colours in the second bow to be reversed. As this double reflection is less likely, the second rainbow is spread over a greater area, making it fainter.
Why does light slow down in a denser medium?

Light exhibits a wave-particle dual nature, meaning it can be considered both as a wave and a particle, depending on the situation. Viewing light as a wave allows for a better explanation of refraction. Light waves are transverse waves, meaning the direction of vibration is perpendicular to the direction of travel; a more developed view of light waves is a changing electric field with a perpendicular changing magnetic field. Due to these features, visible light is part of the Electromagnetic (EM) spectrum.
As light enters the new medium, its oscillating electric field interacts with the electrons in the atoms, causing the electrons to move as they experience a force and, in turn, to generate a second oscillating electric field. The electric field of the light wave and the electric field generated by the electrons add together, and the combined wave travels more slowly than light does in a vacuum. Because the new wave is slower in the medium than before, and as a consequence of Fermat’s principle of least time (which states that light takes the path that minimises the time taken), the light ray bends towards the normal (the imaginary line perpendicular to the surface of the water droplet). Upon emergence from the medium, the wave is once again composed only of the original light wave, meaning that it travels at its original speed: the interaction with electrons is much weaker in the less dense air, and so the light bends away from the normal.

A rainbow is always a sight to behold, whether it’s in the sky, at the bottom of a waterfall or within the windows you passed on your daily walks in lockdown. And now, the next time you see one, it will not only be a magical experience but one that you can explain and hopefully appreciate with science.

Artwork by Emeric Claudiu Ardelean
SEARCHING OUR BLOOD FOR ANSWERS
Toscanie Hulett looks at the future of liquid biopsies
The treatment was destined for failure. When Ryan Corcoran, a leading oncologist in Boston, began the treatment of a patient with colorectal cancer in 2014, he had forgotten one thing: ‘If you don’t put the needle in just the right place, you may miss some important tumour cells’. The treatment had a fatal flaw: Corcoran and his team had based it on the genetic analysis of only a portion of the tumour. The treatment seemed to perform well at first but, as with all hamartias, it was doomed for tragedy, and soon the tumour became resistant. The source of resistance was only identified when the team used a liquid biopsy to test for stray tumour DNA in the blood. Here, they discovered that the resistance to the treatment was in fact due to a mutation present in the blood, not in the tested portion of the tumour. This success leads us to ponder—could liquid biopsies be the next best tool in an oncologist’s arsenal?

A liquid biopsy is used to determine the status of a disease by testing non-invasive liquid samples, such as blood, urine and saliva. These liquid samples contain biomarkers, including circulating tumour cells (CTCs) and cell-free DNA (cfDNA), originating from tissues all over the body, notably cancer tumours. Rapid advancements in genetic technologies allow us to detect and sequence these biomarkers to obtain valuable disease information.
CTCs as biomarkers

The first type of biomarker is the CTC: a tumour cell in the blood that has broken off from a tumour. CTCs are relatively easy to recognise because they are epithelial cells—the cell type that lines the surfaces of many organs—unlike the blood cells that surround them. In 2013, a diagnostic test known as CellSearch was developed. This uses beads coated with antibodies that bind to known epithelial proteins to capture CTCs. The prognostic value of CTCs has become increasingly apparent, as their levels appear to be correlated with reduced overall survival of patients. Thus, it is no longer necessary to find and analyse the cancer tumours themselves.

An advantage of CTCs over current tests is that this method creates the possibility of intercepting cancer in asymptomatic people with routine blood tests. In addition, it may allow us to capture signatures of surviving cancer cells, known as minimal residual disease, after radiation treatment.
These cells are generally hard for physicians to distinguish amongst the scarring caused by radiation. Therefore, CTC detection could eliminate potential human errors in residual tumour detection.
However, the current problem faced by clinicians is the limited sensitivity of liquid biopsies to account for the scant levels of CTCs in the blood. Especially in early-stage diseases, there can be as few as one CTC per 10⁹ blood cells in patients with tumours that shed cells. Despite these obstacles, research groups are making clear strides towards better detection of CTCs, as a few clinical trials are showing encouraging results.

cfDNA as biomarkers

Liquid biopsies also make use of the presence of cfDNA as biomarkers. cfDNA is found in liquid samples after body cells change shape or die. Thanks to advances in genetic sequencing technologies, we can profile mutations in the cfDNA of tumours, which appears to capture the genetic diversity of the tumour. As aforementioned, this can be a game-changer when considering which drugs to use in patient-specific scenarios. Strikingly, a more recent study led by Corcoran has shown that in 78% of the people tested, circulating tumour DNA revealed mutations associated with drug resistance that were absent from tissue biopsies.

However, major setbacks of using cfDNA as biomarkers are the difficulties in distinguishing mutations in cancer-derived DNA from the randomly mutated DNA of non-cancerous blood cells. These obstacles can surely be overcome as genetic technologies continually become more sophisticated.

Further applications of liquid biopsies

The mechanism of testing cfDNA is already being applied clinically in other medical fields beyond oncology, to great success. Those familiar with the term “liquid biopsy” might also have heard of non-invasive prenatal testing (NIPT)—a common clinical procedure for pregnancy diagnostics. During foetal development, some cells form the foetus while others form the placenta. A few of the foetal cells from the placenta enter the mother’s bloodstream, until eventually foetal DNA increases to approximately 3–13% of the mother’s total cfDNA. This means that within the mother’s bloodstream resides some of the foetal DNA—the cfDNA. Testing this cfDNA has allowed us to observe genetic abnormalities, such as abnormal chromosome numbers known as aneuploidies, and has revolutionised antenatal care. A clear advantage is that NIPT carries no risk to the pregnancy, whereas amniocentesis, the current commonly used method of testing, is invasive and causes miscarriages in 1 in 100 pregnancies according to NHS statistics.

Future prospects of liquid biopsies

According to RNCOS market research, the global liquid biopsy market is expected to exceed $5 billion by 2023. It is evident that the field has unbounded potential and that more interventional clinical trials will surely ensue to cement the widespread use of liquid biopsies. Applications are being pushed far beyond oncology, and we may find ourselves able to completely rewire our understanding of diagnostics by uncovering the lethal malignancies in our bloodstream.

Artwork by Lauren Keiller
Extant Extinction: How Do We Emerge Alive?
Harry Savage explores mass extinction through the ages
Throughout the Earth’s 4.5-billion-year history, a huge array of forms of life has existed, one following the next. Natural history tends to repeat itself, and there is nowhere clearer to see this than the Earth’s five mass extinction events. While different initial causes have been proposed for each mass extinction, common environmental themes have been identified. Can we use natural history to our advantage and prevent humans suffering the same fate as the dinosaurs?
When it all started to end

The first major mass extinction event (known as the End Ordovician) occurred 450 million years ago. It has been proposed that this event occurred as the result of a glaciation caused by a drop in atmospheric CO2. The reasons for the glaciation are not completely certain, but the most prevalent idea is that it was due to mineral run-off, either from acid rain dissolving the newly formed mountain ranges, or because of primitive plants breaking down their rocky habitats. Run-off often leads to algal blooms and eutrophication, and consequently the entrapment of marine CO2. The glaciation lowered sea levels and thus reefs were destroyed. A later increase in temperatures restored sea levels but caused a deficit in marine oxygen, and the harsh environmental conditions throughout the entire 1.5-million-year event ultimately wiped out 85% of marine species.

While many species became extinct, seabed animals were less affected. Indeed, dominant groups survived through this period, such as brachiopods and trilobites. A common trend throughout mass extinctions is that local species are less likely to survive than those found globally, and this was certainly observed here. While the Silurian seas that followed were quite similar to the Ordovician, the local ecological gaps left by the extinction made way for the widespread diversification of fish and the surviving molluscs.
‘The Great Dying’

The largest mass extinction in Earth’s history was the End-Permian extinction event, which occurred about 250 million years ago. It is commonly known as “The Great Dying”, due to the huge loss of 90% of all species. It has been suggested that temperatures rose significantly due to increased CO2 levels, amplifying the greenhouse effect and ultimately leading to ocean acidification, reef loss, and marine anoxia, i.e. depletion of dissolved oxygen in the Earth’s waters. The causes of this atmospheric change are subject to debate, but the widely accepted theory is that the 100,000 gigatonnes of CO2 emitted by volcanic activity in Siberia was enough to raise global temperatures and cause mass extinctions.
Although skies would have been covered by volcanic ash, plants seem to have been largely unaffected, with most extinctions occurring instead in marine and terrestrial animals. While disaster taxa (organisms thriving in harsh conditions) such as bacteria and bivalve molluscs prevailed shortly after the temperature increase, synapsids (animals between reptiles and mammals) profited most from the post-extinction world. The End-Permian event also opened doors for Earth’s next dominant animals – the giant reptiles.
Cretaceous Park

Perhaps the most famous of the mass extinctions happened 66 million years ago at the end of the Cretaceous period. This wiped out an estimated 75% of all species, most notably including the non-flying dinosaurs, giant marine reptiles, and pterosaurs. It is thought that the Earth was already in an unstable period due to volcanic activity in India increasing global temperatures, and the impact of a 10km-wide asteroid hitting what is now Mexico was the catastrophe that tipped extinction rates over the edge. Immediately after impact, it is believed giant rocks rained from the sky, tidal waves swept across the Americas, and earthquakes unparalleled by those seen today destroyed landscapes in an explosion six billion times stronger than the Hiroshima bomb. Ash blocked out the Sun, hindering plant life, and acid rain acidified the oceans.

While this dramatic event saw the extinction of almost every animal bigger than a dog, some small mammals and birds survived, aided by their proficiency in running and hiding. Fungi also thrived post-impact, profiting off the dead and detritus. In the Cretaceous period beforehand, reptiles had occupied most ecological niches, but with most of them now gone, the surviving mammals had fewer predators and competitors. This allowed them to diversify and adapt to become the planet’s dominant animals, eventually giving rise to humans.

Extinction repeat or Extinction Rebellion?

Many say that we are currently living through a mass extinction event, and there is plenty of evidence to support this. The species extinction rate is up to 1000 times higher than the background rate, not even including species that we do not monitor closely, and some scientists estimate that a third of known species could be extinct in the next 200 years. It is obvious that humans are large contributors to the extinction problems, due to worldwide overhunting, habitat destruction and species introduction.

If we worsen conditions further, what will life look like? We cannot say for certain, of course, but studying similarities between previous mass extinction events reveals that small, scavenging organisms are likely to prosper in harsh conditions. Currently we face environmental concerns that echo past extinctions: marine anoxia and reef destruction (End-Ordovician), temperature increase and sea-level rise (End-Permian), and ocean acidification (End-Cretaceous). If mammals are greatly reduced in number then, given a few million years, life on Earth could look completely different. Still, even though humanity has caused many of the world’s problems, we are advanced, adaptable and we will probably emerge just fine. For now, at least.
Artwork by Clarissa Pereira
COMMUNICATING THE COVID-19 VACCINE
Athanasios Patsalias offers tips for efficient science communication

Vaccines save five human lives every minute. They have enabled humanity to overcome hundreds of infectious diseases so far, including measles, diphtheria, pertussis, and smallpox. Smallpox alone is so gravely lethal that one infected person would die every six minutes, had there not been a vaccine to eradicate it. Long story short, this is what we call a success in medical research.

But now the tricky part: communicating this success to the general population. While explaining scientific jargon to people unfamiliar with the health sciences may seem daunting, there is a way around it: a “communication chain”, in which health professionals undoubtedly play a crucial role.
In order to see a light at the end of this dark tunnel of the pandemic, people need to take the vaccine, and the hard outreach task in this case is doubled. Firstly, professionals need to effectively communicate the success to come, i.e. the value of vaccinations. Then, they must also communicate the safety and the efficacy of a vaccine. This task is made more difficult by the various reasons why people may hesitate to have the vaccine, be it motivated by ideological or religious reasons, safety concerns, conspiracy theory beliefs or simply the sense that they are not susceptible to the dangers of COVID-19.
However, none of these factors will necessarily lead to vaccination rejection. Research has proven that proper communication can overcome vaccination objections, even in those beyond the line of healthy scepticism. Healthcare professionals are the most trusted advisors and influencers of vaccination decisions. A recommendation from a healthcare provider is one of the strongest determinants of vaccine acceptance. A strong, confident recommendation to get a vaccine, one that assumes the person is willing to be vaccinated, has been shown to increase acceptance rates. In cases where a person expresses hesitance or ambivalence after being informed of the need for COVID-19 vaccination, healthcare professionals should acknowledge and empathise with the patient’s concerns.
The objective of any vaccination conversation should then be as much to build trust and rapport as to secure vaccination. An active listening approach on behalf of the healthcare professional exhibits the best results in easing those concerns. This is achieved by adopting the role of a confident expert who addresses a patient’s scepticism non-judgementally.
Another aspect of proper vaccine communication is debunking misinformation. As always, prevention works best in this context. Because misinformation can spread far and fast, it’s best if people are ready for it. This can be achieved by a process known as “pre-bunking”, which involves pre-informing vaccine uptake candidates about how they could be misled, who the “fake experts” with false information are, and what kind of techniques they employ.

In cases where misinformation has already found traction, a health scientist’s next strategy is debunking. This strategy demands a well-equipped “armoury” of knowledge- and evidence-based arguments to correct any vaccine and COVID-19 misconceptions. Examples could include explaining why it is unrealistic to expect that any medical treatment is 100% free of side-effects, or pointing out that side effects in the case of the COVID-19 vaccine are only mild and transient. It is crucial to explain why the mistaken information was taken as correct in the first place, why it is now clear that it is false, and why the alternative is valid.

Attempting to explain science facts to the general public, without the use of jargon, can seem a rather big burden to healthcare providers and health professionals. However, it is crucial to realise that the possible benefit of adding one more success to our record of busting diseases, and reinstating a COVID-free life, outweighs this burden by far. So, let us roll up our sleeves and work just a bit harder. At the end of the day, when we succeed, everyone will say, ‘chalk it up to the scientists’.
Artwork by Daffodil Dhayaa
OxSci Schools’ Essay Competition Overall Winner HT 2021
A Levitating Frog and Scotch Tape By Rohit Antonygnaneswaran
Year 12 at Chatham and Clarendon Grammar School
A levitating frog and Scotch tape. Would you be surprised to know that they were both part of a scientist’s research projects that led to breakthroughs in different fields of physics? Or that both of these projects won him two Nobel Prizes? (Well, one was the Nobel Prize in Physics and the other was the Ig Nobel Prize, an award for “achievements that first make people laugh, and then think”.) The man behind all of this was physicist Andre Geim.

Perhaps the most remarkable fact about these projects was that they were not the culmination of Geim’s daily work. They were instead the results of something he calls ‘Friday Night Experiments’ (FNE), short-term projects that were unrelated to his day job and were not expected to produce anything substantial. Having spent roughly 10% of his lab time on this type of research, only 3 of the 24 or so FNEs have been successful: discovering direct diamagnetic levitation of water; creating a biomimetic adhesive that mimics gecko feet; and isolating graphene.

Geim’s first tenured position was at the University of Nijmegen in the Netherlands, where he conducted his first successful FNE. Using the lab’s electromagnets, which have a strength of 16 teslas (about 10,000 times stronger than the surface field of a typical refrigerator magnet), he poured water inside to see what would happen. Unexpectedly, droplets of water began floating, which surprised him and 90% of his colleagues. This experiment was significant as it disproved the common belief that water’s magnetism was not strong enough to counter gravity. A frog that was placed inside, in the hopes it would appeal to a broader audience and make more people aware of this phenomenon, also floated, making it the first time a living organism levitated purely due to magnetic fields. The pictures of the levitating frog, which many thought were an April Fools’ joke, gained mass media coverage and led to Geim winning the Ig Nobel Prize in 2000, an award he says he values equally to his Nobel Prize in Physics.

His next breakthrough was isolating a 2D graphene sheet, an achievement that earned him the nickname ‘father of graphene’. Scientists at the time thought it was impossible to have a 2D lattice due to the thermal fluctuations causing such a structure to fall apart. However, he was able to prove them wrong with Konstantin Novoselov at the University of Manchester in 2004, 189 years after we were aware the special material existed. This discovery was significant as graphene has many unique properties: 200 times stronger than steel but lighter than aluminium; transparency of 98% but impervious to helium, the most permeating gas; the best electrical conductor at room temperature. You might imagine such an elusive material would have required state-of-the-art technology and complicated processes to uncover. This was what physicist Phillip Kim was trying to achieve using high-tech methods, but Geim was able to do so using a common household item — Scotch tape. Geim had witnessed how Scotch tape would be used to clean the tops of graphite, and there would be flakes of graphite left behind on the tape. He decided to repeat this process several times, which led to thinner and thinner flakes until eventually they were able to isolate a single graphene sheet. Geim’s paper on this matter remains one of the most cited papers in material physics and led to him and Novoselov winning the Nobel Prize in Physics in 2010 for “ground-breaking experiments regarding the two-dimensional material graphene.”

Geim has continued his research in graphene due to all the potential applications of the material, which has the possibility of revolutionising not only the electronics industry but also medicine, energy production and many others. Just last December he was able to use graphene sheets to prove Lord Kelvin’s work on the phenomenon of capillary condensation, the magic that makes sandcastles possible, which despite being proposed 150 years ago has only now been backed up with scientific evidence. As the first and only scientist to win both the Nobel Prize and Ig Nobel Prize, these two prizes are certainly a testimony to the ingenuity and creativity of the physicist, Andre Geim.

To enter next term’s competition, visit oxsci.org/schools/
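Returning to the levitating frog for a moment: as a back-of-the-envelope aside (not part of the winning essay), the size of magnet needed for this trick can be estimated from the standard condition for diamagnetic levitation, which balances the upward magnetic force per unit volume against gravity:

\[ \frac{|\chi|}{\mu_0}\, B\, \frac{\mathrm{d}B}{\mathrm{d}z} = \rho g \]

For water, with volume susceptibility \(|\chi| \approx 9 \times 10^{-6}\) and density \(\rho = 1000\ \mathrm{kg\,m^{-3}}\), this works out to \(B\,\mathrm{d}B/\mathrm{d}z \approx 1.4 \times 10^{3}\ \mathrm{T^2\,m^{-1}}\) – roughly a 16 T field changing by tens of teslas per metre. That is why a research electromagnet, and not a fridge magnet, is needed to float a droplet (or a frog).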
Schools’ Essay Competition Winners:
Competition winner: ‘A Levitating Frog and Scotch Tape’ by Rohit Antonygnaneswaran, Year 12 at Chatham and Clarendon Grammar School
Year 12-13 winner: ‘The Unappreciated Creativity within Science’ by Lucy Kelly, Year 12 at Barton Peveril College
Year 10-11 winner: ‘Creativity: Essential to Science and the Human Race’ by Tilly Arscott, Year 11 at The Woodroffe School
Runners-Up:
‘Creativity: The Wings to Science’ by Preesha Jain, Year 12 at Chelmsford County High School for Girls
‘How is Creativity Important in Science?’ by Theo Hawkins, Year 11 at King’s College School Wimbledon
‘René Laënnec: Discovery of the Stethoscope’ by Rania Ocho, Year 12 at St Philomena’s Catholic High School for Girls
‘Creativity: A Scientific Necessity’ by Sophie Beck, Year 12 at James Allen’s Girls’ School
‘The Art Of Turning Nothing Into Something’ by Nyneisha Bansal, Year 11 at Aylesbury High School
‘The Science Behind Creativity’ by Carmen Dupac, Year 12 at Twynham School
Judges:
Dr Joy Ogbechi is a Versus Arthritis Foundation Fellow and researcher at The Kennedy Institute of Rheumatology. Her current research seeks ways to correct the imbalance of immune signals in Rheumatoid Arthritis by targeting immune cell metabolism. Joy feels strongly about giving back to the community and engages in outreach and awareness programmes.
Dr Samuel Cahill is a Departmental Lecturer in Practical Chemistry based in the Chemistry Teaching Laboratory and Christ Church, University of Oxford. His work focuses on teaching in a laboratory environment as well as the development of new teaching material for this purpose. Samuel’s keen interest in teaching has led him to engage in class and summer school teaching as well as outreach in Oxford.
Prof Angela Brueggemann is professor of infectious disease epidemiology at the University of Oxford. Her research focuses on understanding how changes in bacterial populations impact global health and vaccine initiatives. Angela is also involved in outreach and is keen to inspire the next generation of scientists.
The Oxford-AstraZeneca Vaccine: An Interview with Dr Sean Elias
OxSci Editorial Team

For the past year the Oxford vaccine has been making headlines as one of our most important tools in the fight against COVID-19. This has been especially true in the last few months, over which the vaccine has been authorised for clinical use in the UK, India, and recently the EU.

SARS-CoV-2 vaccines can be broadly categorised into viral vector, nucleic acid (DNA or RNA), whole virus, and protein-based vaccines. The Oxford-AstraZeneca vaccine is an example of a viral vector vaccine. Most SARS-CoV-2 vaccines target the viral spike protein antigen on the surface of the virus and induce both T cells and antibodies to form part of a protective immune response.

Here we talk to Dr Sean Elias, a post-doctoral immunologist and public engagement leader. He works at the Jenner Institute, an Oxford-based institution founded in 2005, which has been at the heart of vaccine development.
The Oxford Scientist: Hi Sean, to start with, would you mind telling us a little bit about yourself and how you came to be talking to us about the Oxford vaccine?
Sean: I joined the Jenner Institute fresh from my undergraduate studies in Biology at the University of Oxford. My first job was making pre-clinical viral vectored vaccines as part of the institute’s newly founded ‘Viral Vector Core’ facility. After that, I moved on to working on human clinical trials, before starting my PhD, which was on the topic of B cell immune responses following malaria vaccination. Then, I worked on a number of different diseases, including Ebola virus and non-typhoidal
Salmonella (NTS), the latter of which I have designed and run studies for, both here in Oxford and across Africa. Since the start of the pandemic, I have been involved in a different side of the work, namely supporting our science comms and media.
You mentioned a term that has come up a lot in relation to this particular vaccine, namely ‘viral vectored’ technology. Has this been a significant factor in getting the vaccine to the public as quickly as you did?

The importance of new platform technologies in the development of emergency response vaccines cannot be underplayed. It is undoubtedly one of the major reasons we have managed to develop a Covid-19 vaccine in record time. Traditional vaccines include attenuated vaccines, containing inactivated pathogens, or subunit vaccines, containing one or more antigens – small parts of the pathogen, such as proteins or sugars – without the pathogen itself. The manufacturing process for these traditional vaccines takes time. The process of attenuation, for example, involves the repeated culture of a pathogen in cell lines until they lose the ability to cause disease, which can take months or even years. On top of this, every disease requires different cell lines, conditions, and steps, which complicate the process.

On the other hand, platform technologies, such as the University of Oxford’s Viral Vectored Vaccines, are often referred to as ‘plug and play’ technologies. You develop the vaccine platform in advance (think of it as a game console) and then add in the final component, your vaccine insert of choice, when you need it (think of it as a new game for the existing console). All you need to make this vaccine insert is the genetic sequence of the pathogen or one of its antigens. As genome sequencing technology has advanced, the whole process – from identifying a new disease to making the first vaccine – can and, indeed, does occur within a month. Thus, we can rapidly make and test such vaccines without having to optimise every process each time.

Are there other factors, besides technology, that make the Jenner Institute particularly suited to develop this vaccine?

Several factors helped us move so quickly with development and testing. For one thing, the University has two vaccine research groups: the Jenner Institute and Oxford Vaccine Group, which have worked together a lot in the past. This gave us access to many experienced scientists, research clinicians and support staff, all of whom dropped what they were working on to come together and run the Covid-19 vaccine study. The phase 1 clinical trial research paper cites over 350 authors, and this number only expanded as we moved forward nationally and internationally. To perform these trials, we were able to draw on our existing network of clinicians, with national support from teams in Southampton and London, as well as international support from teams in Kenya and South Africa.

We also had experience on our side. Following the 2014/15 Ebola outbreak, the Jenner Institute set up our Emerging Pathogens group, which aims to respond to diseases with pandemic potential. One of the vaccines this group worked on was for Middle East Respiratory Syndrome (MERS), which is caused by another coronavirus. From this study, we knew that the coronavirus spike protein was safe and immunogenic in our vaccine platform, which gave us great confidence that we would see the same results for a Covid-19 vaccine targeting its spike protein. Finally, the University of Oxford has its own on-site Clinical Biomanufacturing Facility (CBF). They have been producing biological Investigational Medicinal Products (IMPs) for early phase clinical trials for nearly 20 years.
Looking back on it, how did the development of this vaccine compare to others you’ve worked on, especially in terms of the intensity of the work?

Confining this to lab work, which I am most familiar with, we would normally have 1-2 people working on a clinical study for a single disease, with maybe 4-5 projects running at the same time. A team may process 5-10 blood samples on a busy day, and that may only happen once or twice a week. There will be busy periods but the same team does all the work, including the subsequent experiments on the blood samples. In most cases we can fit that workflow into a regular 9-5 schedule.

In the pandemic all our teams have stopped working on their individual projects to focus on the Covid-19 vaccine. In a single day we may have had between 30-100 blood samples arriving from different sites. Instead of 1 or 2 people doing every job we were all split into a production line, with each person, or team of people, doing a single job. At the busiest periods some of our teams were doing 12-13 hour shifts up to 8 days in a row, working both weekends and nights.
What would you say to reassure anyone who is worried about receiving the vaccine and has concerns over how the relevant safety trials have taken place when the vaccine has been delivered in a relatively short space of time? There are particular concerns about long-term side effects, which might not be detected during the fast-paced trials conducted.

Though our clinical trials have been performed in record time, we have undergone the same rigorous steps for assessing safety as we would in other clinical trials. There are a number of simple reasons for the increased speed: the removal of certain barriers such as the constant need to find funding, an expedited process for reviewing clinical trial data due to the pandemic setting, and the fact that researchers and facilities have dropped work on other projects to focus on this one. In regards to long-term side-effects of the vaccine, it is important to note that adenovirus vaccines, including ChAdOx1, have been used in a number of other clinical vaccine trials in the past and volunteers in these studies have been followed up over long periods of time.

How do viral mutations come about and what are their potential effects on vaccine immunity?

Viruses have high rates of evolution and this is driven by numerous factors including large population sizes, short generation times, and high mutation rates. Mutations arise by chance during the replication of a virus’s genetic information. RNA viruses tend to have slightly higher mutation rates than DNA viruses, in part due to the lack of proofreading activity in the RNA-dependent RNA polymerases (RdRp) that are used to replicate their genomes. Coronaviruses are RNA viruses, but they actually have RdRp-independent proofreading activity and thus lower mutation rates compared to other RNA viruses. With the current viral population size so large, however, we fully expect to see many new variants.

T cells recognise viruses from many different short sequences of amino acids (peptides) and antibodies bind to viral proteins on many different parts of their tertiary structure. Whilst a single mutation may result in a single T cell clone or antibody losing the ability to recognise the virus, the redundancy in the system means it is unlikely that single mutations will have a significant impact on immunity overall. In addition to this, our immune system can adjust itself to combat changes in the virus. For example, our body over time optimises the antibodies it makes. A single antibody type may become more effective at binding a single variant or change so that it can recognise multiple variants (cross-reactive antibodies).

Mutations are being monitored closely by scientists, and it’s important we continue to remain vigilant for changes in the future. The University of Oxford is carefully assessing the impact of new variants on vaccine immunity and evaluating the processes needed for rapid development of adjusted COVID-19 vaccines if these should be necessary.

Finally, what advice would you give to young people, or anyone else, who has become more interested in science this year and wants to pursue it further?

You cannot beat experience in research. If you are an undergraduate considering any form of lab science, my single most important piece of advice is not to rush things and to gather experience before moving to the next step. I went into a research assistant position out of undergraduate and had 3 years’ worth of lab experience before starting my PhD. This made it so much easier and more rewarding, and I fully recommend this route, as would many of my colleagues in the vaccine team who have done the same.

So my advice to those budding scientists is this: get in touch with those leading the current efforts, learn from them, listen to them, but also question them. I think the biggest strength of the Oxford University student system is that it teaches students to think originally and outside the box, and this is a key part of research.

Special thanks to Tom Leslie, Jia Jhing Sia, Mason Wakley, and Connor Forsyth for putting the articles together.
Artwork by Anoop Dey
Head over to oxsci.org for the full two interviews with Dr Sean about the development and the science of the vaccine!
The Key to a Harmonious Society
Rosa Parker unlocks the mysteries behind music’s place in our culture
Music soundtracks every aspect of our life, be it lullabies, a club classic, the first dance at a wedding, or the final curtain at a funeral. It has the power to lift or destroy moods, and to convey emotion without words. Music is also puzzling in its ubiquity: it is present in every studied human culture. Music-making was either evolutionarily advantageous, or a random by-product of evolution. So, what advantages did it offer our predecessors? And if it offered none at all, how exactly did it evolve?

Darwin was the first known person to ponder the evolutionary origins of music. In his book The Descent of Man, he attributes its emergence to sexual selection. Sexual selection is a form of natural selection, where certain desirable characteristics are selected for in the opposite sex: in this case, the desirable characteristic of music-making. Darwin thought that the expression of complex emotions, such as love, through music-making would be advantageous in attracting a mate. The idea of music influencing mate selection gives an answer as to how music-making is so common in humans and primates, yet it only resolves the question on a mechanistic level. It fails to encompass the phenomenon of social bonding.
The effect of music in enhancing group social bonding is exemplified in many aspects of human culture, from hymns to football chants, and campfire songs to national anthems. A paper published in 2016 by Weinstein et al. records that choir singers had a higher pain threshold after performing as a group than non-choir singers. Collectively creating sound and harmonies releases dopamine, a chemical responsible for feelings of pleasure and reward. This can lead to participants experiencing positive feelings of mutual accomplishment, and pain relief. These mutual feelings encourage cooperation between individuals by increasing a feeling of connectedness to others and interpersonal liking. This leads to the strengthening of social bonds, which is crucial for facilitating group cohesion: there are costs to living with others, such as increased competition for mates. Living in a large group, though, can provide survival advantages, such as a decreased risk of predation. It is essential to keep up morale and cooperation to maintain stability within the group, and it is possible that this tool for bonding paved the way for the evolution of our own Homo sapiens.
Parent-infant bonding is another hypothesis to explore. The pitch of lullabies is very similar across different cultures: we talk to children in a specific way, perhaps due to the parenting success this may facilitate. Silence is often perceived as a sound of danger in nature, so a parent soothing their offspring with song can alleviate stress and help bonding.

These ideas are not mutually exclusive, and are tied together in a 2020 paper by Savage et al., which proposes that music and genes coevolved for social bonding: the ability to make music that could promote social bonding was genetically and culturally selected for, and music has subsequently evolved to become more complex over time.

More recently, in a preprint published by David Schruth in December 2020, a more subtle route was taken to explain the emergence of music. Schruth and his colleagues analysed sounds produced by primates, discovering almost every primate family made ‘music-like’ calls. They defined music as the ability to make a call with 2 different repeated notes, aiming to reduce bias when studying this phenomenon: it is easy to disregard the repetition of two notes in comparison to the complex melodies we can produce today. Schruth’s approach, however, led to the conclusion that the common ancestor of all primates must have been able to produce a call with ‘music-like qualities’. He hypothesised that singing demonstrated control over vocal cord muscles, indicating to other primates that the singer was acrobatically adept, and therefore physically fit. This distinction could have driven sexual selection, as a skilled mate could provide food for their offspring or defend their territory.

Finally, none of these views necessarily conflict. Whilst music may have initially been selected for as an indicator of agility, its propagation and ubiquity could be a result of the strong social bonds it helps forge. It is remarkable that a trait capable of causing profound emotional responses may have emerged for a different reason, and that music could have coevolved with humans to shape and stabilise our society as we know it today. Looking far into the future, could music-making facilitate new means of communication, or lead to the formation of a more harmonious society? We are still evolving, and music is no different.
Artwork by Des Poonca Y Pacanabia
Leave It To The Ants Jake Burton goes on the trail of the fascinating world of ants
We are fascinated by ants—so much so that we have made entire Hollywood movies about them. But what could be the cause of this fascination? Perhaps it is because we see so much of ourselves in them.
A Bug’s Life
Like us, ants are highly social creatures and are capable of many human-like behaviours, including ‘farming’ aphids and burying their dead. However, no single ant is able to carry out these behaviours alone. Instead, they arise out of the far simpler behaviours of individual ants. When complicated behaviours like this arise from the interactions of simpler ones, they are said to be ‘emergent behaviours’. As with human societies, finding food is a key concern for ants. Even the way ants forage for food demonstrates a remarkable emergent behaviour. Though ants have the impressive ability to carry 10-50 times their own body weight, a single ant often requires help returning food to the colony. To rally fellow ants around the food transportation cause, an individual ant will release pheromones — chemicals which can be sensed by other members of the same species. These are known as ‘trail pheromones’, since they are left on the trail which leads back to the colony, allowing other ants to follow and locate the found food. Once released into the environment these pheromones don’t hang about: they degrade over time. But rather than being a hindrance in locating food, this property of pheromones helps ants solve an important problem — finding the shortest route between the food and the colony.
Though initially there may be several pheromone trails leading to the food, the trail which allows ants to go backwards and forwards the fastest will naturally build up a higher concentration of pheromones over time, simply because more ants will be able to travel this way in a fixed time period, and each one of these ants will be leaving a new set of pheromones. To pick the shortest path to the food, then, ants only need to favour the paths with the strongest pheromone trail. This is a particularly nifty solution to a thorny problem. Not just for the ants, but for us humans too. So good is the method, in fact, that we have borrowed it entirely for our own purposes. With one small tweak. We use virtual ants.
Yes, Virtual Ants!
A computer can be programmed to simulate the behaviour of a large number of ants travelling from, say, London to Newcastle, via either Birmingham and Sheffield, or Doncaster. Like real ants, at every step of the journey the virtual ants choose which way they will travel, here either via Birmingham and then Sheffield or just Doncaster. They do this based on the strength of the ‘pheromones’ assigned to each route. Just like real ants will, the virtual ants deposit (virtual) pheromones along the route they took. Over time, whichever route brings ants to Newcastle the fastest (here, the smallest number of steps) will end up with the most pheromones and is the best route from London to Newcastle.
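For readers who like to see an idea in code, the short Python sketch below implements this virtual-ant loop on the same toy choice of routes. The route names, lengths and parameters are invented purely for illustration; real ant colony optimisation implementations add many refinements on top of this core loop.

```python
import random

# A minimal sketch of ant colony optimisation on the toy route choice above.
# The route names, 'lengths' and parameters are invented for illustration;
# the core loop is: choose in proportion to pheromone, deposit in proportion
# to quality, and let everything evaporate.
ROUTES = {
    "via Birmingham and Sheffield": 4,  # route length, measured in steps
    "via Doncaster": 3,
}

pheromone = {route: 1.0 for route in ROUTES}  # start with equal trails
EVAPORATION = 0.1                             # trails degrade over time
N_ANTS = 200

for _ in range(N_ANTS):
    # Each ant picks a route with probability proportional to its pheromone level.
    total = sum(pheromone.values())
    pick = random.uniform(0, total)
    cumulative = 0.0
    for route, level in pheromone.items():
        cumulative += level
        if pick <= cumulative:
            chosen = route
            break

    # Shorter routes earn a larger pheromone deposit...
    pheromone[chosen] += 1.0 / ROUTES[chosen]
    # ...while every trail evaporates a little, so poor routes slowly fade.
    for route in pheromone:
        pheromone[route] *= 1.0 - EVAPORATION

print("strongest trail:", max(pheromone, key=pheromone.get))  # almost always 'via Doncaster'
```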
If this problem of connecting cities together as quickly as possible sounds familiar, that’s because it is a well-known problem in Computer Science, known as ‘The Travelling Salesman Problem’. The problem is well known, though, not because Computer Scientists spend a lot of time wishing they were salespeople. Rather, because the problem of connecting sets of points (which have ‘distances’ between them) together in the shortest way possible crops up in a lot of unexpected places.
Connecting points is useful for designing electricity networks, planning delivery drivers’ routes and similar tasks. But it also works for problems which at first glance seem nothing like connecting up points as quickly as possible. One of these is the design of small peptide drugs — short sequences of 5-10 amino acids that are used to inhibit protein-protein interactions. These are used today for treating a number of conditions, including HIV, but it is hoped that such drugs could, if able to be designed in a rational way, be used against a whole range of diseases.
Our virtual ants can help out here too. Rather than choosing a sequence of cities to visit, each ‘ant’ chooses one of the 20 amino acids for each step in its journey. Once they reach the end (i.e., they have chosen as many amino acids as you would like in your peptide), the sequence of visited amino acids is scored for how well the peptide binds to the corresponding protein, and this determines the amount of ‘pheromones’ assigned to the chosen sequence. Given enough virtual ants, the algorithm converges to high-affinity peptides which can then be used as potential drug candidates.
Ant colony optimisation, as this technique is generally known, can also be applied to much larger peptide sequences, such as for determining the 3D structure of whole proteins. However, in this research area the method is now losing out to newer computational techniques, like deep learning. Deep learning, and neural networks more generally, joins particle swarm optimisation (inspired by the flocking behaviour of birds) and genetic algorithms (inspired by evolution), to name just a few of the computational approaches already borrowed from nature.
So perhaps we can say with some certainty: ants won’t be the last creatures whose ideas we steal.
From Reasoning to Revolution: The Emergence of the Scientific Method Helen Collins journeys through the history of science
Demonstratio longe optima est experientia. By far the best proof is experience. -Sir Francis Bacon, Novum Organum 1, 70, 1620
Sat at his archaic desk, writing by candlelight, English philosopher Sir Francis Bacon penned the first definitive account of the scientific method. Published in 1620, Novum Organum quickly became famous as the origin of what we now know as the rigorous process by which scientists acquire new knowledge. Although the term itself did not appear until Francis Ellingwood Abbot’s 1885 book Scientific Theism, in Novum Organum Bacon set out the process by which observations of the natural world are used to formulate hypotheses, which can then be either confirmed or rejected through experimentation and measurement. Over time, evidence from many experiments and observations accumulates to form general theories. Indeed, one of the key factors that distinguishes true science from the murky world of pseudo-science is the focus on experimentation and observation in order to prove or, critically, disprove a hypothesis. The emergence of the scientific method has defined major discoveries that have shaped the way we see the world, and it still drives scientific research today. Despite being described by the philosopher Voltaire as the ‘father of [the] scientific method’, Francis Bacon was far from the first to discuss or use this approach. In fact, the concept can be traced as far back as the Ancient Egyptians. The Edwin Smith papyrus, one of the oldest known medical documents, contains details of the examination, diagnosis, treatment and prognosis of hundreds of illnesses—principles that parallel modern-day approaches to science and medicine. However, the Ancient Egyptians generally lacked rational theories to explain their findings, prescribing many “cure-all” treatments such as bloodletting (the removal of blood in the hopes of curing a disease or illness) and animal dung, often with little or no evidence of their effectiveness.
One of the earliest proponents of empiricism, the theory that knowledge is derived from experience and evidence, was the Ancient Greek philosopher Aristotle. Aristotle was instrumental in the founding of many modern scientific disciplines, such as zoology, psychology and physics, yet he believed that reason alone was
sufficient to solve a scientific problem. Aristotle used deductive inference—the process of using a general theory to explain specific observations and draw conclusions—to propose many hypotheses about the natural world, including that all things are made up of four elements (earth, water, air and fire). Along with Galen’s ‘Four Humours’ theory that dominated early medicine, Aristotle’s work was the basis of scientific thought for hundreds of years, enduring well into the Middle Ages in Europe. It wasn’t until his work was reinterpreted and questioned by Robert Grosseteste that the flaws in Aristotle’s use of deductive reasoning were realised. Seminal work by scientists of the time, including the early anatomical studies of cadavers by Andreas Vesalius and the observations on gravity by Galileo Galilei, contributed to the decline in Aristotelian theory, and instead spurred the adoption of practices resembling the modern scientific approach. However, while many European scholars were still enamoured with Ancient Greek theorists, several Islamic philosophers were developing methodologies that much more closely resemble the modern scientific method. For example, Ibn al-Haytham, a Muslim scholar who in c.1000 published his ground-breaking experiments in optics, is often referred to as the first person to conduct an experiment using the scientific method due to his diligent use of empirical methods and experimentation to distinguish between competing theories. Other scholars of the time included Ibn Sina (also known as Avicenna), who discovered that diseases can be contagious, and Jabir ibn Hayyan, often considered the “father of
chemistry”, who greatly influenced the later work of medieval English philosopher Roger Bacon. This shift towards inductive reasoning—the derivation of general principles from specific observations, rather than the use of observations to confirm pre-existing theories—and towards experimentation marked the end of Aristotle’s dominance of scientific research. The Scientific Revolution, which is generally accepted to have started with the publication of Nicolaus Copernicus’s 1543 book De revolutionibus orbium coelestium (detailing his heliocentric theory of the universe), marked a great turning point in the history of science. Around this time, scientists such as Isaac Newton, Robert Hooke and René Descartes were using the scientific method to make pioneering discoveries and develop ground-breaking theories, from the laws of motion to microbiology, that shaped our understanding of the world. With the introduction of the printing press in the 15th century, it was easier than ever for knowledge to be disseminated, and many more scholars began adopting the scientific method. In the centuries that followed, the first controlled clinical trial was performed, when in 1747 James Lind studied how to prevent scurvy among sailors. 1885 saw the first descriptions of blinded, randomised experiments, and in 1950 the first double-blind experiment, in which neither the participants nor the experimenter knew who was receiving the active treatment, was published by Theodore Greiner; this practice now forms the backbone of experimental clinical research. These key advances in the way scientists were conducting their research allowed them to make more robust discoveries, supporting ideas ranging from natural selection to modern astronomy, as well as paving the way for even greater advances. In 1953, this process permitted the determination of the structure of DNA, changing the biological sciences for ever, and, incredibly, in 2009 the first robot to independently discover new scientific knowledge was manufactured, marking the beginning of a new era of scientific research. However, the scientific method is still evolving, with many additional aspects being adopted over the years. For example, in 1934 philosopher Karl Popper declared that a hypothesis must be falsifiable, meaning that it can be proved incorrect, in order to properly follow the scientific method. This is exemplified by the modern focus on the alternative hypothesis—that is, for a new theory to be accepted, the old one (the null hypothesis) has to be rejected. This epitomises the modern use of scepticism to drive scientific research.
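As a toy illustration of what rejecting the null hypothesis looks like in practice, the Python sketch below simulates a randomised trial with invented numbers and runs a simple permutation test; it is a demonstration of the logic, not a description of any study mentioned here.

```python
import random

# A toy simulation (not from the article): a randomised trial with invented
# outcome numbers, analysed with a permutation test of the null hypothesis
# that the treatment makes no difference.
random.seed(0)
control = [random.gauss(10.0, 2.0) for _ in range(50)]     # hypothetical outcomes
treatment = [random.gauss(11.0, 2.0) for _ in range(50)]   # hypothetical outcomes

observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Under the null hypothesis the group labels are arbitrary, so shuffle them
# many times and count how often chance alone produces a difference at least
# as large as the one observed.
pooled = control + treatment
extreme = 0
n_permutations = 10_000
for _ in range(n_permutations):
    random.shuffle(pooled)
    diff = sum(pooled[50:]) / 50 - sum(pooled[:50]) / 50
    if diff >= observed:
        extreme += 1

p_value = extreme / n_permutations
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
# A small p-value lets us reject the null hypothesis in favour of the
# alternative; it can never positively 'prove' that the treatment works.
```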
The importance of reducing systematic errors has also been emphasised, even being discussed as early as the 11th century, when the Iranian polymath Al-Biruni described using repeated experimentation to reduce the errors made when using equipment, or observational biases. Furthermore, the reproducibility crisis in many fields of science has demonstrated the need for experiments to be replicated by others. As Andreas Vesalius said in 1546, ‘I am not accustomed to saying anything with certainty after only one or two observations’, highlighting an even earlier awareness of the importance of reproducibility. These issues have led to the creation of the field of meta-science, or the study of science itself, to increase the quality of scientific research. Of course, it is only right that there is also a moderate amount of scepticism surrounding the scientific method. In 1962, philosopher Thomas Kuhn published his critique of the assumptions about the progress of science through history, claiming that the actual method used by scientists differs from the classic model, and that the preconceptions of an experimenter can greatly affect their observations and thus their conclusions. The philosopher David Hume went even further, arguing that there is no necessity for the future to resemble the past, and so using inductive reasoning to predict the future is not justified, highlighting the so-called ‘problem of induction’ that continues to plague the scientific method. In such unprecedented and unstable times, when scientific findings are often misinterpreted or maliciously distorted, reminding ourselves of the origin of scientific study is reassuring. It took well over two millennia for a coherent scientific method to emerge, and even now it remains contested and its requirements increasingly nuanced. Teaching others about the scientific method, in all its robustness and rigour, is just one way to dispel the pseudo-science and “fake news” which often dominate our newsfeeds, but also, ultimately, to reinstate trust in scientific research.
Artwork by Alex Kahn (Francis Bacon Illustration) Lauren Keiller (Aristotle Illustration)
The constraints of free will Bianca Pasca draws parallels between the Philosophy of Mind and Chemistry
Asking about the emergence of consciousness may be a bit like asking about the divide between life and inanimate matter: where do we draw the line? The imposition of boundaries and constraints mostly seems to limit us—but maybe it can also allow our freedom to choose. There is no clear-cut answer to how our thoughts form, but a relatively new and unexpected field could help us think about it in a different way. Emerging as a subject in its own right only towards the very end of the 20th century, the philosophy of chemistry may shed new light on problems encountered in the philosophy of mind. Throw in a bit of quantum mechanics, and we can even start thinking about constraints… on our free will.
Popular models of consciousness
Trying to talk in the third person about the source of our subjective experiences of the world has always been doomed to stir up disagreements. It comes as no surprise, then, that one of the hardest problems the philosophy of mind is dealing with is called, well, the ‘hard problem’ of consciousness. As David Chalmers puts it, this is the question of how and why we have individual, subjective experiences—essentially, how consciousness emerges out of physical states. There are two models through which we can approach the problem of consciousness. Standing at opposite ends of the spectrum, these are the views of reductive materialism and dualism. The first one, as its name suggests, attributes to each of our mental states a physical brain state, which is essentially the cause of that particular mental experience. This is what we call ‘upwards’ or ‘bottom-up’ causation: lower-level parts, such as atoms or cells in the brain, and chemical substances in our body cause higher-level, more complex phenomena—in our case, conscious experience. Unsurprisingly, the reductionist view is slightly more popular within the scientific world than the dualist viewpoint, which suggests that the mind is a completely different entity from our bodies. This type of view has been held since Antiquity, from Aristotle’s theories of body and soul to Plato’s belief in metempsychosis, i.e. the migration of a soul to a different body. It has come to be known in the form we discuss today because of René Descartes and his argument that the mind is non-physical.
Many scientists and philosophers argue for a middle way between pure materialism and dualism, and one of them is emergentism, which could be described as a non-reductive form of materialism. Put simply: emergentists believe that the ‘whole’ cannot be wholly reduced to its parts. The parts themselves don’t have the same properties they had when they weren’t interacting, and indeed new properties can emerge from their coming together. Some might even argue the relationship works the other way around, too: the complex form, in its turn, can cause new properties in the lower-level parts. Put briefly, this would be ‘downwards’ or ‘top-down’ causation. This way, we end up with a picture where the parts and the whole co-define each other.
Enter: the philosophy of chemistry
But we have had enough fancy terminology. Where is all this heading? As promised, from the philosophy of mind, we will make a stop in the philosophy of chemistry. Chemist and philosopher Joseph E. Earley is of the opinion that the problem of consciousness and other debates about our mind could perhaps find resolution by adopting some of the concepts accepted by chemists. Let’s think about a few examples. Earley himself, in a paper titled ‘Why There is No Salt in the Sea’, argues that even though we can obtain salt from sea water, that says nothing about salt, as we understand it, being found in brine. In solution, sodium chloride, for example, is present in the form of charged ions, hydrated by the surrounding water molecules. This is no longer the solid-state salt we are thinking of, nor is it the same as pure water. Nor are its properties simply those of its constituent parts summed together. However, the choice of salt obviously matters: lithium fluoride, LiF, for example, is much less soluble in water. The explanation for this has to do with both the stronger interaction between the lithium and fluoride ions in their solid lattice structure, and the way the solvated ions interact with the water molecules; in other words, with both the initial state of the salt and its final form, in solution.
Thinking about ‘downwards’ causation, we could give molecular orbitals as an example, since they are approximated as linear combinations of the atomic orbitals, i.e. the orbitals of the atoms which make up the molecule itself. From molecular orbital theory’s perspective, electrons surrounding each atom are no longer localised in the individual atoms’ orbitals, but are shared between them (in different proportions, depending on how much the different atoms attract electrons to themselves—that is, on how electronegative they are). Thus, they occupy the molecular orbitals of the ‘whole’, the molecule. This implies that the environment of an individual atom depends on what the rest of the molecule is made of, what other atoms contribute to it, and so on. It could therefore be said that the complex molecular structure has, in its turn, an effect on the parts which compose it. What is more, in chemistry, the outside environment itself, or the apparatus used, can be important and influence the outcome of the reaction.
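To make the molecular orbital picture concrete, here is a minimal numerical sketch in Python in the spirit of simple Hückel/LCAO theory; the two energy parameters are illustrative stand-ins rather than values for any particular molecule.

```python
import numpy as np

# A minimal sketch of the LCAO idea in the spirit of simple Hückel theory.
# The two parameters below are illustrative stand-ins, not values for any
# particular molecule.
alpha = -13.6   # energy of each isolated atomic orbital (eV), assumed
beta = -3.0     # interaction ('resonance') energy between the two atoms, assumed

# Hamiltonian for two identical atoms sharing their orbitals.
hamiltonian = np.array([[alpha, beta],
                        [beta, alpha]])

# Diagonalising gives the molecular orbital energies (in ascending order) and,
# in the columns of `weights`, how much each atomic orbital contributes.
energies, weights = np.linalg.eigh(hamiltonian)

print("bonding MO energy (eV):    ", energies[0])   # alpha + beta, lower
print("antibonding MO energy (eV):", energies[1])   # alpha - beta, higher
print("atomic-orbital weights:\n", weights)
# The electrons occupy orbitals belonging to the 'whole' molecule: each column
# mixes both atoms, so neither atom keeps its isolated-atom environment.
```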
So how can all this be relevant to our discussion about consciousness? Well, as with the many complex interactions between parts and wholes in chemistry, it seems unlikely that the problem of the emergence of conscious states can be so easily reduced to physical states in the brain. The philosophy of mind deals with the relationships between the physical ‘parts’ of the brain and the conscious ‘whole’ (for those eager for even more terminology: the study of parts and wholes is called ‘mereology’). This is why we can find new ways of looking at these concepts by studying the same relationships in the philosophy of chemistry, as discussed above.
Consciousness and free will
Coming back to our question about free will, we should first establish whether mental states can cause a physical state. A certain view, called epiphenomenalism, holds that they can’t; for example, you don’t react in a certain way because you are in pain—the pain was caused by a physical event, and it is that event which causes the reaction, not the belief or thought that you are in pain. Mental properties are thus considered to be simply by-products of the functions of our brain. Thomas Huxley famously compared our consciousness to the whistle of a locomotive that doesn’t influence the working of its engine. If we accept this, there is no place for free will, and our thoughts and hopes could never be the cause of anything else in our lives, except maybe other thoughts and hopes. But what if the possibility of free will can be sustained by constraints? That’s right: it might just be that we are free to choose because of certain constraints, albeit at the micro-level. The behaviour of particles, the information about which we derive from the Schrödinger equation, also depends on the constraints and boundaries imposed. These alter the solutions to the equation, which effectively encode the particles’ behaviour. The same thing happens inside our bodies, where particles are constrained by the molecular shapes of proteins. Reshaping neural connections as we learn something, or triggering a response in our bodies that leads to a change in a protein’s conformation, for example, changes over time the constraints the particles are subjected to. This means their behaviour is not so easily explicable.
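The textbook ‘particle in a box’ makes the point in miniature: the allowed energies follow directly from the boundary conditions imposed on the Schrödinger equation. The short calculation below, for an electron in boxes of two different lengths, is a toy illustration only.

```python
# A toy calculation (not from the article): the allowed energies of a particle
# confined to a one-dimensional box of length L are E_n = n^2 h^2 / (8 m L^2),
# so the boundary conditions, not the particle alone, fix what is possible.
PLANCK = 6.626e-34         # Planck's constant, J s
ELECTRON_MASS = 9.109e-31  # kg

def box_energy(n: int, length_m: float) -> float:
    """Energy (joules) of level n for an electron confined to a 1D box."""
    return n**2 * PLANCK**2 / (8 * ELECTRON_MASS * length_m**2)

for length_nm in (1.0, 2.0):
    levels = [box_energy(n, length_nm * 1e-9) for n in (1, 2, 3)]
    print(f"L = {length_nm} nm:", ", ".join(f"{e:.2e} J" for e in levels))
# Doubling the box length lowers every allowed energy by a factor of four:
# change the constraint and you change every solution of the equation.
```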
It was the outside environment, triggering a certain response from us, which shaped the way the constraints, and thus electron flow and brain states, varied over time. Since these processes are also linked with our psychological make-up, this again implies a sort of ‘top-down’ relationship, whilst not denying the ‘bottom-up’ approach, either. Given that so many things appear to be interconnected, this might mean there is more room for choice and free will. If chance and outside factors determine the way we think, and if our thoughts have any causal power, does that unshackle us from the seemingly pre-ordained, deterministic path? Or does it mercilessly leave us to wander ‘anywhere the wind blows’?
Transformation of global attitudes emerging from the Covid-19 pandemic Sarya Fidan investigates the socio-political implications in a post-pandemic world
The Covid-19 pandemic has brought our existing global health strategies into question, exposing health inequalities and long-standing weaknesses in healthcare systems. As we emerge from this crisis, new social perspectives and changing global attitudes will redefine the future of scientific development, public health policies, and political influence on medicine. To start with, the concept of transmissibility has highlighted the fundamental interdependency of the human race. The importance of social distancing and the necessity of preventative health behaviours have shown us that we survive as a population, in which we are all linked and responsible for each other’s survival. This can be extended to a global level. The pandemic is proof of how interconnected and interdependent our world has become: there is no way of tackling this crisis at a national level only. We must rely on international cooperation and multilateralism, and realise that solidarity is in all of our interests. The pandemic has also illustrated our common vulnerability, across borders and beyond public-private divides. We have seen that our fragmented approach to development is flawed—we are in need of a more holistic and coordinated approach. The virus knows no borders, and neither should our approach to research and development. International support is a matter of collective survival and an investment in the future of public health, ensuring equitable access to the health products needed to fight Covid-19. Thus, the future of science and medicine is surely international and interdisciplinary.
Emerging from the pandemic, we should strive towards intergovernmental research. Sharing knowledge and experience accelerates learning and facilitates more rapid progress. To understand pathogens, their transmission, and the diseases they provoke, the accumulated knowledge and experience of researchers and public health professionals is essential. In January 2020, Chinese researchers rapidly published the first genome sequence of the coronavirus, and the genetic map was made available freely around the world. At MIT, engineers made the design for a low-cost ventilator freely available, which was used by a group of Indian engineers racing to ease the country’s ventilator shortage. Such global scientific cooperation is highly inspiring given that academic medical research is usually shrouded in secrecy to secure grants, promotions, and tenure.
The search for a vaccine has also exemplified the potential for success when we take an international approach to scientific development and form global coalitions for global problems. The Coalition for Epidemic Preparedness Innovations (CEPI) is a global partnership which has been at the centre of the global cooperative effort to develop Covid-19 vaccines, with funding from governments worldwide. It made crucial early investments in vaccine research and in building infrastructure to facilitate mass production of vaccines, including both the Moderna and the AstraZeneca vaccine. It is a powerful example of how international research collaboration can accelerate scientific development. When viewed through a more political lens, however, the pandemic has also given rise to patterns and feelings of nationalism. Vaccine inequality is beginning to manifest in the distribution of resources across global borders. The UK has already acquired enough doses to vaccinate more than five times its own population, whilst a study by the Economist Intelligence Unit revealed that 84 low-income countries may not have widespread access to vaccination until 2024. Such nationalistic approaches will only hinder our emergence from this crisis. However, the global outcry about the lack of equipment and supplies to test for and protect against Covid-19 will most likely lead countries to re-examine their supply chains for critical health and livelihood-related products. This state of panic may also promote nationalism surrounding the domestic production of pharmaceuticals, medical supplies, and equipment, as nations have been forced to increase their capacity to cope with extended periods of economic self-isolation. High-income countries, having suffered huge economic losses, could use Covid-19 as the excuse to cut development assistance for health and reframe global health as a limited “national security” exercise, retreating into selective self-sufficiency and overt geopolitical competition. The pandemic is already being framed by right-wing populist leaders as a call for isolationism, anti-immigration policies, and institutionalised racism. All of these are likely to increase inequities that already plague global health, and further concentrate power amongst the elite in the global north. It is essential to highlight the importance of robust social security policies in minimising health inequities.
Economic hardship, as has been seen throughout the pandemic, increases individuals’ and communities’ vulnerability to Covid-19. It also exacerbates ill health more generally and widens social inequalities, as seen in the last decade across England. Too many lives were lost in this pandemic because of the weak social and economic systems that had predisposed people to poor health. With increasing rates of unemployment, social security provision by the government is essential in mitigating the disproportionate effects of Covid-19 on the working class, and in preventing vulnerable groups falling into further economic hardship. The pandemic has shown us that long-term social protection measures form the backbone of a health system that is equitable for all. The relationship between a government’s handling of the pandemic and its political approval can be seen quantitatively in a study conducted by the Centre for Economic Policy Research. A one standard-deviation increase in weekly case growth is associated with a 3.6% decline in political approval, compared to the pre-pandemic approval level. In particular, the impact of case growth on approval is larger if governments did not implement tough policies to contain the infection. In other words, governments that opted for loose policies to reduce the economic damage of the pandemic face a larger loss of support. One way in which this growing distrust in the government’s intentions and policies has presented itself is through vaccine hesitancy. Despite vaccination being considered one of the greatest achievements of public health, vaccine hesitancy is now more prominent than ever before. For example, a recent poll commissioned by the Royal Society for Public Health (RSPH) shows that vaccine hesitancy is highest among the BAME community. This is a sign that hesitancy about the vaccine is disproportionately high in some of the communities that are hardest hit by the pandemic, underlining the inequality in our social and economic systems. Vaccine hesitancy is a threat to public health in the future and poses the question: will a post-Covid world necessarily mean a post-pandemic world? For the first time since WWII, we have collectively been brought into contact with our own mortality and fragility. The pandemic will contract, but the polarised social and political environment may linger with us for a long time and leave us vulnerable to new disease outbreaks.
Emergence from Cholera: How Physics Saved London by Anna Lappin
Throughout the pandemic, developments in vaccines, treatments, and transmission models have illustrated the importance of biology, chemistry, and mathematics. It is harder to see the relevance of physics. As a reminder of how different disciplines of science work in tandem to save lives, consider a historical example: London’s cholera outbreaks. During the Industrial Revolution, London became overcrowded. Sewage clogged the Thames, dumped there via patchwork systems of sewers and overflowing cesspits. This water was pumped back into the city for people to drink. This wasn’t seen as an issue, as many believed foul smells, not water, spread disease. The water-borne theory of disease was not taken seriously until 1854, when John Snow’s cholera maps identified a water pump in Soho’s Broad Street as the source of a cholera outbreak. Snow’s removal of the pump handle is believed to have ended the local outbreak, but it didn’t solve the underlying problem. To prevent future outbreaks, infrastructure to pump both water and waste across the city was needed.
Enter Physics, in the form of steam power
Contrary to popular belief, steam power was first used to drive water-pumping engines, not locomotives. Steam engines had been used to pump water since 1712, when they were invented for use in flooded mineshafts. By the 1820s, steam-powered beam engines were pumping ‘drinking’ water out of the Thames. This water would not be considered potable today, as it was contaminated by waste brought upstream by the tide. This ceased after 1855, when the Metropolis Water Act forced companies to abstract water upstream of Teddington Lock (beyond the tidal reaches of the river).
As clean water was pumped into the city, contaminated water also had to be pumped out. The creation of the sewers was catalysed by John Snow’s findings
and by the ‘Great Stink’, when drought reduced the Thames to a flow of sewage and industrial effluent. Sir Joseph Bazalgette’s design for a new sewer system, which is still in use today, relied upon steam engines to pump waste out into the Thames estuary.
How does a steam engine work?
Early steam engines, ‘beam engines’, used a principle known as the Cornish Cycle. Steam is fed into the top of a cylinder, pushing a piston down, until an equilibrium valve opens—allowing the steam to flow into the chamber directly beneath the piston. From here, steam enters a condenser (where it is surrounded by cold water), rapidly cools down and liquefies. The resulting drop in pressure creates a vacuum. So, the piston is ‘pushed’ by the steam admitted above it, and ‘pulled’ by the vacuum below it. A counterbalancing weight at the pump-end of the beam allows the piston to rise again, restarting the cycle. Advancement was hardly linear, but one could consider the rotative engine the next step. Bazalgette’s sewage pumping engines at Crossness serve as an example. These utilised Watt’s parallel linkage to turn a large cast-iron wheel, a ‘flywheel’. The flywheel acts as a store of rotational kinetic energy, preventing the piston from becoming stuck at ‘dead-points’ mid-cycle. This evens out the flow rate of the water, reducing the stresses on the pipes, thus increasing the longevity of the infrastructure. Here, rotational mechanics is crucial to prevent flywheels ‘running away’. If the engine speed is so great that the tension in its structure can no longer provide the required centripetal force for the rotation, the flywheel will explode. Shrapnel would destroy both the pumping station and surrounding buildings.
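To get a feel for that limit, here is a rough back-of-the-envelope estimate in Python; the material figures are typical textbook values for cast iron and the wheel radius is an assumption, not a measurement from any real pumping station.

```python
import math

# A rough 'runaway' estimate (figures assumed, not measured at Crossness).
# For a thin spinning rim, the hoop stress is roughly density * (rim speed)^2,
# so the rim fails once its speed exceeds sqrt(strength / density).
DENSITY = 7200.0          # kg per cubic metre, typical for cast iron
TENSILE_STRENGTH = 150e6  # pascals, typical textbook value for cast iron

burst_rim_speed = math.sqrt(TENSILE_STRENGTH / DENSITY)   # metres per second
radius = 3.0                                              # metres, assumed wheel radius
burst_rpm = burst_rim_speed / radius * 60 / (2 * math.pi)

print(f"rim speed at failure: about {burst_rim_speed:.0f} m/s")
print(f"for a {radius:.0f} m radius flywheel: about {burst_rpm:.0f} rpm")
```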
The early development of steam engines proceeded largely by trial and error, inspiring advancements in fields such as thermodynamics. The shockingly low efficiency of steam locomotives inspired Carnot’s theorem, which relates the efficiency of an ideal engine to the temperatures of the steam entering the system and the steam leaving it. Whilst useful, Carnot’s principle relied on incorrect physics, viewing heat as a fluid (‘caloric’). Carnot’s theorem is nonetheless demonstrated in the design of the triple expansion engine. Steam passes through three cylinders in turn so that it leaves the system at a lower temperature than it would if a single cylinder were used. As London’s population grew, more powerful and efficient engines were needed to pump water affordably, so triple expansion engines became the dominant technology.
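A quick calculation shows why the exhaust temperature matters so much. The sketch below applies Carnot's formula with illustrative boiler and exhaust temperatures, chosen only to show the trend rather than to describe any real engine.

```python
# Carnot's limit on the efficiency of an ideal engine: eta = 1 - T_cold / T_hot,
# with temperatures in kelvin. The steam temperatures below are illustrative
# assumptions, not figures from any real pumping engine.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

single_cylinder = carnot_efficiency(t_hot_k=150 + 273.15, t_cold_k=100 + 273.15)
triple_expansion = carnot_efficiency(t_hot_k=200 + 273.15, t_cold_k=40 + 273.15)

print(f"single cylinder, ideal limit:  {single_cylinder:.0%}")   # roughly 12%
print(f"triple expansion, ideal limit: {triple_expansion:.0%}")  # roughly 34%
# Letting the steam leave the system at a lower temperature raises the
# theoretical ceiling on efficiency, which is the point of triple expansion.
```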
Advancements by the likes of John Snow showed how cholera could be defeated, but advancements in physics made this a reality. What really ended the cholera outbreaks was the waste and water pumping infrastructure—not the removal of a pump handle.
The error in ‘trial and error’
The lack of precise physical understanding behind this process of ‘trial and error’ led to issues in the 1850s. For decades, a system of two rotating balls, a ‘centrifugal governor’, had been used to prevent engines over-speeding. As the engine speed increased, the balls would rotate faster and so be thrown further outwards. This would automatically reduce the steam flow to the engine. Were a governor to shut down an engine unnecessarily, it could take hours to restart it, leaving people without water. Were an over-speeding engine not shut off, a flywheel could ‘run away’. So, when the governors of newly designed engines started to fail, the water supply was placed in jeopardy. No suitable explanation could be offered until the theoretician Vyshnegradsky used non-linear ordinary differential equations (equations relating rates of change of physical quantities) to model the steady state of the system. He thus identified the changes in design that had caused the problem, using precise physics to secure the supply of potable water.
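The flavour of that analysis can be captured with a toy, linearised model in which every constant is invented for illustration: with too little friction in the governor a small disturbance in engine speed grows into ever larger oscillations, while sufficient damping lets it settle, which is essentially the design condition Vyshnegradsky identified.

```python
# A toy, linearised governor model; every constant is invented for illustration.
# omega = deviation of engine speed from its set point, x = governor ball
# displacement, v = the governor's velocity. Raising the governor cuts the
# steam supply, and the governor in turn responds to the speed deviation.
def peak_speed_error(damping: float, steps: int = 20000, dt: float = 0.001) -> float:
    omega, x, v = 1.0, 0.0, 0.0   # start from a small disturbance in speed
    peak = 0.0
    for _ in range(steps):
        d_omega = -x                   # less steam torque as the governor rises
        d_v = omega - damping * v - x  # speed error drives the balls; friction and a restoring force resist
        omega += d_omega * dt
        v += d_v * dt
        x += v * dt
        peak = max(peak, abs(omega))
    return peak

print("lightly damped governor, peak speed error:", round(peak_speed_error(0.2), 1))  # grows large: unstable
print("well damped governor, peak speed error:   ", round(peak_speed_error(5.0), 1))  # stays near 1: stable
```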
The end of cholera in London
London’s last cholera outbreak struck in 1866. 93% of the fatalities were in the only area of London with an incomplete sewer system, demonstrating how the sewers had revolutionised public health. Within a few decades, cholera was so rare within the UK that it was considered ‘exotic’—a drastic change for a disease that claimed over 30,000 lives in 1831 alone.
Artwork by Anoop Dey