BlueSci Issue 44 - Lent 2019


Lent 2019 Issue 44 www.bluesci.co.uk

Cambridge University science magazine

FOCUS

The Earth as a Living Laboratory

Neutrinos . Mayan Collapse . Neural Networks . Core Formation


Your Power for Liquid Handling

Sapphire Pipettes from Greiner Bio-One

for further information or to request a brochure: tel: 01453 825255 email: sales.uk@gbo.com

3 Year Warranty

Sapphire the ultimate instruments for liquid handling

www.gbo.com


Contents

Cambridge University science magazine

Regulars

On The Cover 3
News 4
Reviews 5

Features

The Future of Earth is Up in the Air 6
James Weber explains the role of positive feedback loops and how they could lead to runaway environmental disaster

Drought and the Collapse of the Maya 8
James Kershaw discusses whether new data is raining on, or could prove, this fashionable hypothesis

How the Antarctic is used as a Neutrino Detector 10
Maeve Madigan discusses how and why we can leverage Antarctic ice to find some of the most elusive particles in the known Universe

Let’s Talk About Soil 12
Kasparas Vasiliauskas looks under our feet at some of the Earth’s most overlooked material

A Digital (R)Evolution 14
Charles Jameson examines neuroscience’s role in solving the most difficult computational problems

Our Global Food Supply and the Rise of Synthetic Meat 16
Sophie Cook asks whether lab-grown meat can save the planet

FOCUS: The Earth as a Natural, Living Laboratory 18
BlueSci presents three perspectives on how scientists have expanded our understanding of science using the greatest laboratory of all – planet Earth

A New Possible Mechanism for the Formation of our World’s Core 24
Gareth Hart speaks with Dr Madeleine Berg and Dr Geoff Bromiley about evidencing a new hypothesis

Sulawesi: A Seismological Mystery 26
Ben Johnson speaks to Professor James Jackson on how it happened, and how we could prepare for future incidents

A Bohmian Rhapsody 28
Mrittunjoy Guha Majumdar talks Bohmian mechanics

Science and Theatre: The Wider Earth 30
Laura Nunez-Mulder follows the play that follows the voyage of Charles Darwin after he graduated from Christ’s College, Cambridge

Weird and Wonderful 32
Man-made moons, heavy Bitcoin mining and emoji-speak: we bring you the strangest stories from recent literature

BlueSci was established in 2004 to provide a student forum for science communication. As the longest running science magazine in Cambridge, BlueSci publishes the best science writing from across the University each term. We combine high quality writing with stunning images to provide fascinating yet accessible science to everyone. But BlueSci does not stop there. At www.bluesci.co.uk, we have extra articles, regular news stories, podcasts and science films to inform and entertain between print issues. Produced entirely by members of the University, the diversity of expertise and talent combine to produce a unique science experience

President: Seán Thór Herron (president@bluesci.co.uk)
Managing Editors: Alexander Bates, Laura Nunez-Mulder (managing-editor@bluesci.co.uk)
Secretary: Mrittunjoy Majumdar (enquiries@bluesci.co.uk)
Treasurer: Atreyi Chakrabarty (membership@bluesci.co.uk)
Film Editor: Tanja Fuchsberger (film@bluesci.co.uk)
Radio: Emma Werner (radio@bluesci.co.uk)
News Editor: Elsa Loissel (news@bluesci.co.uk)
Web Editor: Elsa Loissel (web-editor@bluesci.co.uk)
Webmaster: Adina Wineman (webmaster@bluesci.co.uk)
Art Editor: Serene Dhawan (art-editor@bluesci.co.uk)



Issue 44: Lent 2019
Issue Editor: Silas Yeem Kai Ean
Managing Editors: Alex Bates, Laura Nunez-Mulder
Second Editors: Amanda Buckingham, Emma Dinnage, Nick Drummond, Alex Ekvik, Sarah Foster, Aoife Gregg, Matthew Harris, Seán Thór Herron, Leia Judge, Haskan Kaya, Yulong Lin, Mona Liu, Alex Sampson, Dan Sayle, Caitlin Walker, Amy Williams, Silas Yeem, Matthew Zhang
Art Editors: Serene Dhawan, Laura Nunez-Mulder
News Team: Seán Thór Herron, Elsa Loissel, Nelli Morgulchik
Reviews: Hollie French, Matthew Harris, Nelli Morgulchik
Feature Writers: Sophie Cook, Gareth Hart, Charles Jameson, Ben Johnson, James Kershaw, Maeve Madigan, Mrittunjoy Guha Majumdar, Laura Nunez-Mulder, Kasparas Vasiliauskas, James Weber
Focus Team: Hannah Bryant, Andrea Chlebikova, Bryony Yates
Weird and Wonderful: Edmund Derby, Victoria Honour, Bryony Yates
Production Team: Alex Bates, Laura Nunez-Mulder, Seán Thór Herron
Caption Writer: Alex Bates
Copy Editors: Alex Bates, Seán Thór Herron, Laia Serratosa, Laura Nunez-Mulder, Silas Yeem
Advertiser: Christina Turner
Illustrators: Alex Bates, Serene Dhawan, Dylan Evans, Seán Thór Herron, Anson Lam, Nah-Yeon Lee, Sean O’Brien, Eva Pillai, Ben Tindal
Cover Image: Hayley Hardstaff
ISSN 1748-6920

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License (unless marked by a ©, in which case the copyright remains with the original rights holder). To view a copy of this license, visit http://creativecommons. org/licenses/by-nc-nd/3.0/ or send a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041, USA.


Editorial

Global Science

It boggles the mind to think how much the ways we inquire into the natural world have changed in the last century – let alone the last two millennia. The Large Hadron Collider continues to smash protons at near-light speeds in the continuing search for the fundamental particles of our universe, and plans for an even larger collider are now in the works. Molecular clock techniques and bioinformatics are being used to predict what the earliest animals might have looked like. And just recently, on New Year’s Day, the space probe New Horizons made a historic flyby of Ultima Thule, a Kuiper Belt object 6.5 billion kilometres away, making it the furthest object we have ever visited in our solar system.

In this issue we revisit the beginnings of the natural sciences: the direct study of Earth in all its dynamism and complexity and all it has to offer. While the mysteries of space, quantum physics and even life itself continue to tantalise, much of our home planet remains unknown. In this issue’s FOCUS, Bryony Yates, Hannah Bryant and Andrea Chlebikova explore how the Earth serves as a natural, living research facility for studying variation in ecological spaces, conducting extreme thermodynamic experiments and probing the climate of the deep past. As it turns out, Earth continues to hold many surprises.

This issue also features a series of articles surveying Earth from top to bottom. Atmospheric chemist James Weber invites us to consider the capricious feedback loops at play in the atmosphere that are constantly changing our climate in unexpected ways. James Kershaw explains how changes in the hydrosphere could have led to the fall of an ancient civilisation. Ben Johnson explores how devastating earthquakes such as the one that struck Sulawesi, Indonesia, last year originate from movements in the lithosphere, destroying life, while Kasparas Vasiliauskas looks at how soils, an often-forgotten part of many ecosystems, support it. Finally, Gareth Hart examines a new theory on how our planet’s core could have formed during the days of the early solar system.

Researchers often take inspiration from nature to solve some of the world’s most pressing research problems. In this issue, Charles Jameson gives us a retrospective on the recent rise of neural networks. Sophie Cook explains how our exploding population has fundamentally changed the global food supply and how artificial meat may be the key to the planet’s survival. Maeve Madigan paints a fascinating story of how particle physicists are taking the ideas of our FOCUS to the next level, by literally converting large ice sheets on Earth into their own laboratory, creating sophisticated neutrino detectors out of natural materials. Riding on the theme of particle physics, Mrittunjoy Majumdar gives us a tour of pilot-wave theory, and how bouncing oil droplets could shed light on the laws of quantum physics as we know them.

As Lent Term marches on, I hope this issue of BlueSci may inspire you – may we remember that there are still discoveries waiting to be made in the everyday, especially on this dynamic planet we call home

Silas Yeem Kai Ean, Issue 44 Editor



On the Cover

Inspired by this issue’s theme, ‘Our Dynamic Planet’, cover artist Hayley Hardstaff presents a vibrant depiction of the Earth as the barycentre of its constituent subsystems: the biosphere, atmosphere, lithosphere, anthroposphere and hydrosphere. This image is purposefully reminiscent of how the planets orbit the sun within our solar system and aims to raise the question of “how far can we extend our knowledge of our Earth’s spheres to other planetary spheres?”. Do note the microscope, thermometer, tape measure and scales that adorn the central Earth illustration. These instruments allude to the subject matter of the FOCUS – namely that we can use the Earth as a natural, living laboratory to provide insight into extreme conditions which cannot be artificially attained or replicated

Serene Dhawan, Art Editor



News

Live bacteria pills, a possibility

Shining New Light on Submarine Quakes

Taking a pill full of live bacteria does not sound like a medicine that your doctor would, or could, prescribe - but it might soon be. In the last decade, gut microbiome research has drawn a lot of attention and funding, but it was not clear whether any therapies would come out of it any time soon. Synlogic, Inc. in Cambridge, MA, has developed a live bacterial therapy to treat phenylketonuria, an inherited genetic disease that affects metabolism. Those who are affected cannot metabolise phenylalanine - one of the essential amino acids that cannot be made in the body and have to come from food. As a result, unmetabolised phenylalanine accumulates in different organs, including the brain, where it causes neurological defects as well as emotional and cognitive problems. This therapy promises to end the suffering in a reasonably straightforward way. The bacterial strain is genetically engineered so that it is only active in the anaerobic environment of the gut, where the bacteria thrive in the absence of oxygen. A paper in Nature Biotechnology, published this October, reports that this treatment has been successfully tried in mice and monkeys, paving the way for clinical trials. This is far from the only company in this field. Microbiotica, a spin-out from the Sanger Institute in Cambridge, UK, recently signed a deal with Genentech, and in the US, Vedanta is collaborating with Johnson & Johnson’s Janssen to develop live bacterial therapies to treat inflammatory bowel disease. At the moment, apart from Seres Health, whose treatment for Clostridium difficile infection is in Phase III clinical trials, other live bacterial therapies are at early stages of development - but who knows where they will be in a few years. At the pharmacy, for all one knows? nm

Three years ago, NASA’s New Horizons probe made its famous flyby of the dwarf planet Pluto. On New Year’s Day 2019, the probe made history again with its flyby of Ultima Thule. This is the farthest object humanity has ever visited in the Solar System, 6.5 billion km from Earth. Flying as ‘close’ to the object as 3,500 km, New Horizons took a series of stunning photographs and collected a variety of scientific data. At this distance, the radio signals took over six hours to return to Earth. On seeing the images, New Horizons principal investigator Alan Stern described Ultima Thule’s shape as a ‘snowman’. Significant observations include that the body has far fewer craters than an object of its size would in the inner solar system. This might mean the undeformed surface preserves evidence of the object’s original accretion during solar system formation 4.6 billion years ago. Also notable is the object’s red colour, which is suggestive of organic compounds. The total downlink of data collected from the flyby is expected to last 20 months, through to September 2020 ep

Check out www.bluesci.co.uk, our Facebook page or @BlueSci on Twitter for regular science news and updates

Walking in the footsteps of robotic fossils

Ripples are forming at the surface of a glass of water; scared children huddle at the back of a car; ominous footsteps resonate on the not-so-distant-anymore horizon. And suddenly, it appears: swinging its tail side-to-side, little arms tucked on the side, a T. rex makes its way into the frame, walking its characteristic walk. But how do we know which gait the now (thankfully) extinct giant adopted? Fossils can inform us on the position of limbs and the range of motion of articulations, and comparisons with extant species can give us hints about gait, but it is difficult to know how ancient species moved. Until now. In a new study published in Nature, researchers harnessed X-ray video, computer modelling and robotics to bring life to Orobates pabsti, a 280-million-year-old fossil that walked the Earth before any amniotes (creatures such as birds, reptiles and mammals) ever did. Based on well-preserved skeletons and studies of living reptiles, models were established of how this creature used to roam on land. Then an actual robot of Orobates pabsti was created: what model would make it walk into the (fossilised) footsteps of its ancestor? Different models were tested, and the ones that fit suggest that the creature had much more of an upright gait than expected for such an early land dweller. This may transform our understanding of how and when advanced styles of locomotion came to be el



Reviews

Adventures in the Anthropocene: A Journey to the Heart of the Planet we Made - Gaia Vince

Chatto & Windus 2014

The term Anthropocene is used to reflect the age of human influence on our planet. A read of Gaia Vince’s Adventures in the Anthropocene is an unquestionable must for every one of us. In ten chapters, each representing a different ecosystem, Vince reflects on a monumental journey to understand what the Anthropocene really means for our Earth and its residents: does any part of our dynamic Earth still elude human impact? What are the consequences for those living on the frontline? Vince meets people from around the globe, from local villager to national president, each facing their own unique challenges and devising their own solutions with equal ingenuity and determination. Vince’s accounts of the meetings are so detailed and intimate that you can easily feel as though you were there in person. Moreover, she is able to weave together multiple stories with factual context, considering restrictions, oppositions and limitations with refreshing fairness. In recognition of this thought-provoking book, the already well-accomplished science journalist was deservedly awarded the 2015 Royal Society Winton Prize for Science Books. For many of us, the effects of the Anthropocene may feel far off, but as human activity continues unabated, this book is an important journey we all should take hf

“For many of us, the effects of the Anthropocene may feel far off, but as human activity continues unabated, this book is an important journey we all should take“

Chemistry - Weike Wang There is more to scientific research than the work itself. Weike Wang’s novel, Chemistry, raises questions about the tenets of science that ignore personal lives. Under various pressures the American-Chinese narrator quits her chemistry PhD and struggles with her long-term relationship. Charting her coming to terms with these losses, the story is both upbeat and cautionary. The scientific setting is not overdone; it could be almost any science PhD student in the narrator’s position. Hints of an autobiographical nature make the story all the more poignant. Weike Wang is herself an American-Chinese immigrant, having completed an undergraduate degree in chemistry before succeeding in her public health PhD. Biographies of now-famous scientists abound; this light-hearted parable of someone unsuccessfully starting their research career is both refreshing and thought-provoking. The novel should hold the attention of anyone considering a life in science, from undergraduate to PhD level. For others it may be either an insight into, or a memory of, the early years of laboratory work mh

Knopf 2017

“Under various pressures the American-Chinese narrator quits her chemistry PhD and struggles with her long-term relationship...”

To Be a Machine - Mark O’Connell

Granta Publications 2017

Visiting laboratories and conferences, bunkers and basements, O’Connell meets foremost scientists, programmers, philanthropists and entrepreneurs who are ahead of their time and have dedicated their lives to transforming humanity with technological enhancement. Akin to a traveller’s diary, this book describes unbelievable technologies of tomorrow, such as mind uploading, cryonics, artificial superintelligence and device implantation. It also unearths the ethical conundrums that might come to light if any of these technologies overtake society. The author’s intimate reflections, such as imagining his mind outside of his body, make the book even more engaging. While investigating what it might be like to be a machine, this book may have uncovered more about what it is like to be a human nm

“Akin to a traveller’s diary, this book describes unbelievable technologies of tomorrow, such as mind uploading, cryonics, artificial superintelligence and device implantation”

The Violinist’s Thumb - Sam Kean

Black Swan 2013

Sam Kean matches his earlier bestseller, The Disappearing Spoon, in the vibrancy of its storytelling and may even surpass it. The book proves once again that Sam Kean can explain difficult concepts in a simple but not simplistic way, and make science entertaining for any audience. Exploring the wonders of DNA, this book answers perplexing questions on heredity and tells how DNA shaped the past, is shaping the present and will shape the future. The book starts by examining the battle of ideas marking the early days of genetics, and ends with a discussion of modern attempts to harness the power of genes with cloning and gene editing. Every word and sentence contributes to the engaging narrative - the book is just impossible to put down! nm

“Every word and sentence contributes to the engaging narrative - the book is just impossible to put down!”



The Future of Earth is Up in the Air

James Weber explains the role of positive feedback loops and how they could lead to runaway environmental disaster

Our atmosphere can be thought of as a single, highly complex system. The complexity arises in part due to the coupling of a vast array of different elements, such as temperature, wind speed, and chemical composition. Should one element be disturbed, others will also change, and this perturbation will propagate throughout the atmosphere, much like plucking a single strand of a spider’s web and witnessing its other strands vibrating across the structure. Feedback loops are a simple extension of this idea, whereby the vibrations of other strands in the web accentuate or dampen the vibration of the strand originally disturbed.

Not only does the phenomenon of feedbacks play a vital role in the atmosphere: it is pervasive throughout all disciplines in the sciences and everyday life. Take the unpleasant high-pitched noise that sometimes occurs while using a microphone. This occurs when a microphone gets too close to its amplifier; the microphone relays background noise to the amplifier, where it is emitted at a greater volume. This enhanced volume is received by the microphone and again relayed to the amplifier for an even louder result. As the process is repeated, the sound quickly becomes unbearable, until the microphone is rapidly pulled away. This is a feedback loop, in which one variable (in this case, the volume of sound detected by a microphone) affects a second variable (the volume of sound emitted by the amplifier) which in turn affects the first variable. Since the second variable in this case increases the first variable, the microphone-amplifier scenario is classified as a positive feedback loop. Positive feedbacks are often described as a ‘vicious cycle’. In contrast, negative feedbacks are those in which the change in the second variable opposes the original change in the first variable.

Crucially, while positive feedbacks can lead to rapid changes away from a system’s original state, as in the case of the microphone, negative feedbacks act as a restoring force, working to push systems back towards their initial state. Upsetting this delicate balance can have severe consequences, and in the case of the atmosphere, these occur on a global scale. A Sea of Troubles | The decline of the Arctic ice pack has been well documented, with some estimates predicting Arctic summer sea ice will be non-existent by mid-century. While the ocean only reflects 6% of energy from the sun, sea ice reflects between 50-70%, greatly reducing the amount of energy absorbed by the planet—and the resultant temperature rise. Unfortunately, this protection is crumbling thanks to human activities, such as the overuse of fossil fuels in industry: increased atmospheric carbon dioxide (CO2) has raised temperatures in the Arctic, causing sea ice to be replaced by open water. As a result, an even greater fraction of solar energy is absorbed by the planet, leading to further warming and ice loss in a positive feedback loop. Even more alarming is the potentially rapid release of large quantities of methane gas trapped in ice crystals, called hydrates, that lie deep in the sea floor. As the planet warms, this methane will escape, bubble through the ocean and enter the atmosphere. Once in the atmosphere, methane acts as a greenhouse gas—akin to CO2—by absorbing energy emitted by the planet and re-radiating a portion back to Earth’s surface, causing further warming in another positive feedback loop. As a warming gas, methane is around 100 times more powerful than CO2, such that a large release of this gas could potentially lead to runaway climatic warming.
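The vicious cycle described above is easy to see in a toy calculation. The short Python sketch below is an illustration only, not a climate model: the 6% and 60% reflectivities echo the figures quoted in this article, but the melt rate, step size and ‘balance point’ are invented for the example. It simply shows how the same initial warming nudge grows slowly when the ice cover cannot respond, and accelerates once the ice-albedo loop is switched on.

# A deliberately simplified sketch of the ice-albedo feedback described above.
# Every number here is an illustrative assumption, not a climate-model value.

def simulate(steps, ice_responds_to_warming):
    temperature = 0.2      # initial warming anomaly (arbitrary units)
    ice_fraction = 0.9     # fraction of the ocean covered by reflective sea ice
    for _ in range(steps):
        if ice_responds_to_warming:
            # Positive feedback: warming melts ice (kept within the range 0 to 1).
            ice_fraction = max(0.0, min(1.0, ice_fraction - 0.05 * temperature))
        # Open ocean reflects ~6% of sunlight, sea ice ~60% (within the 50-70% quoted above).
        albedo = 0.06 + (0.60 - 0.06) * ice_fraction
        absorbed = 1.0 - albedo
        # Warm whenever absorption exceeds an arbitrary balance point of 0.45.
        temperature += 0.5 * (absorbed - 0.45)
    return temperature, ice_fraction

for feedback in (False, True):
    t, ice = simulate(50, ice_responds_to_warming=feedback)
    print(f"feedback {'on ' if feedback else 'off'}: temperature anomaly {t:.2f}, ice fraction {ice:.2f}")

With the feedback switched off, the warming stays small; with it on, each loop around the cycle of melting, absorption and warming feeds the next - the runaway behaviour the microphone analogy describes.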

[Diagram: the ice-albedo feedback loop - ice melt leads to less ice reflection and more sea absorption, which raises temperatures and drives further ice melt]


Muddying the Waters | Water, like carbon dioxide, can act as a powerful greenhouse gas, increasing global temperatures in a positive feedback loop. The condensation which forms on a bathroom mirror after a shower arises because warm air can hold more water vapour than cold air. When warm air from the shower is cooled by the mirror’s surface, liquid water precipitates. The same principle applies in the atmosphere: as surface temperatures rise, the quantity of water vapour in the air increases. A 2°C rise in temperature would lead to the lower atmosphere holding about 10% more water vapour. This would result in a multitude of effects—though in an apparent paradox, not all would be harmful. Unlike CO2, which is a relatively unreactive greenhouse gas (and therefore difficult to remove from the atmosphere), water can react with high-energy oxygen atoms (which arise from interactions with solar rays) to produce hydroxyl radicals. This chemical destroys the potent greenhouse gas methane, which would otherwise heat the atmosphere. In this negative feedback loop, as the concentration of water vapour increases, the quantity of atmospheric methane should decline. The hydroxyl radical also can react with organic compounds such as isoprene, which is emitted by plants to alleviate heat stress and so is expected to increase with rising surface temperatures. The downstream effect of isoprene emissions is the formation of aerosols, small drops of liquid-like material suspended in the air. These aerosols themselves can reflect solar radiation and can aid cloud formation, increasing cloud coverage. Clouds, aside from facilitating precipitation, reflect a large fraction of energy from the sun, cooling the Earth. Such negative feedback is a topic of active research. “It is still uncertain how the combination of droughts, heat stress and increased CO2 will affect plant emissions, and this could have important implications for future climate,” said University of Cambridge Lecturer in Atmospheric Chemistry, Dr. Alex Archibald.
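The link between warmer air and higher water vapour content is usually quantified with the Clausius-Clapeyron relation. The back-of-the-envelope form below is standard physics rather than a figure from the article: the saturation vapour pressure e_s grows with temperature T roughly as

\[ \frac{1}{e_{s}}\frac{\mathrm{d}e_{s}}{\mathrm{d}T} \;=\; \frac{L_{v}}{R_{v}T^{2}} \;\approx\; \frac{2.5\times10^{6}\ \mathrm{J\,kg^{-1}}}{\left(461\ \mathrm{J\,kg^{-1}\,K^{-1}}\right)\left(288\ \mathrm{K}\right)^{2}} \;\approx\; 6\text{--}7\%\ \text{per kelvin}, \]

where L_v is the latent heat of vaporisation and R_v the gas constant for water vapour. That is the same ballpark as the roughly 10% increase for a 2°C rise quoted above; the exact figure depends on where in the atmosphere, and over which conditions, the average is taken.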

An Earth That is Under the Weather | Though the aforementioned feedback loops represent just a fraction of those at play in the atmosphere, they illustrate the complexity of a system that humankind is swiftly changing. Over the duration of history that humans have been able to study, the global climate has varied greatly, but the causes have been predominantly external (e.g. small variations in Earth’s orbit of the sun) and over a vast timescale (approximately 10,000 to 100,000 years). However, due to human activities, the atmosphere is now changing faster than at any previous time for at least the last 800,000 years. The negative feedback loops in the atmosphere might work to cool a warming planet, but it is highly unlikely that they will be powerful or fast enough to counteract significant humandriven perturbations, particularly the 45% rise in atmospheric CO2 in the last 160 years. Indeed, it is suggested by scientists from the Stockholm Resilience Centre that we could soon pass a point of no return, in which the positive feedback loops—such as the release of methane from hydrates—can no longer be contained. Should Earth reach this so-called ‘hothouse’ phase— marked by the highest global temperatures in the last 1.2 million years—there would be significant disruption to economies, ecosystems and society as a whole. Whether or not such an outcome is near, the study of feedback loops and their incorporation into climate models is vital as it allows scientists to predict with increasing certainty the effects of humankind’s actions on our planet. Such studies are vital in providing politicians with ever stronger evidence that action must be taken to prevent catastrophic climate changes

[Diagram: the water vapour feedback loop - rising temperatures increase evaporation, and the extra water vapour traps more heat, raising temperatures further]

James Weber is an atmospheric chemistry PhD student at Pembroke College. Artwork by Alexander Bates, @as_bates



Drought and the Collapse of the Maya

James Kershaw discusses whether new data is raining on, or could prove, this fashionable hypothesis

Water is essential for life. In this article, we explore the palaeoclimatological evidence linking societal change to periods of drought, with a specific focus on the Maya civilisation. It has been the subject of recent sensationalist news articles, so we ask whether science can conclusively confirm how the great society collapsed. “We definitely consider ourselves palaeoclimatologists,” Nick Evans tells me, when asked his opinion on the matter. A former PhD student in the Department of Earth Sciences, Nick worked extensively on the reconstruction of palaeoclimatic conditions and their relationship with the collapse of the Maya. He is keen to emphasise that his work is not a detailed archaeological study, nor is it sufficient to claim that drought caused the collapse.

Existing across Mesoamerica from 2000 BCE until 1539 CE, the Maya civilisation is famed for its art, architecture and hieroglyphic scripts. From the highlands of the Sierra Madre to the lowlands of Mexico’s Yucatán Peninsula, Maya cities seemingly flourished for centuries. The demise of the Lowland Classical Maya Civilisation during the Terminal Classic Period (800-1000 CE) is thus of great interest to archaeologists, with warfare and strain on resources often implicated in the collapse. In 1995, however, scientists studying the past climate of our planet weighed in on the debate, suggesting in fact that the regional climate had a role to play.

Cambridge professor David Hodell - Evans’ supervisor - carried out the initial work in this area, publishing data from Lake Chichancanab in the Yucatán Peninsula. These data consist of the oxygen isotope composition of shells from lake organisms and imply a climatic change synchronous with the collapse of the Maya. These organisms make their shells from calcium carbonate which is precipitated directly from the lake water. In theory, their composition reflects to some degree that of the water in which they resided. This water is in turn influenced by the prevailing climatic state, making these shells powerful indicators of past climate.

The ratio of a heavier oxygen isotope, 18O, to the more abundant and lighter 16O, is thought to depend on the amount of precipitation occurring across a region. The heavier isotope of oxygen falls as rain more quickly than its lighter counterpart, meaning that as clouds and storms travel along their path, the rain falling contains an ever-increasing percentage of the lighter oxygen isotope. This so-called ‘amount effect’ dictates that during times of drought, the oxygen isotopic composition of the water is heavier than at times when more rain falls. This arises because more of the isotopically lighter water can remain locked in clouds and storm fronts when less rain falls, maximising the effects of the fractionation. During times of peak rainfall, however, this lighter water also falls as rain, shifting the observed oxygen isotope composition to lighter values. Prof. Hodell’s 1995 paper found a peak in 18O values during the Terminal Classic period, and thus inferred drought coincident with the collapse of the Maya.
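For reference, the ‘oxygen isotope composition’ discussed here is conventionally reported in delta notation relative to a standard; the definition below is standard geochemistry rather than something taken from the article:

\[ \delta^{18}\mathrm{O} \;=\; \left(\frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\text{sample}}}{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\text{standard}}}-1\right)\times 1000\ \text{‰} \]

Under the amount effect described above, drier periods leave the lake water - and hence the shell carbonate precipitated from it - relatively enriched in 18O, which is why a peak in δ18O is read as a drought signal.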


Further work on sediments and speleothems (stalagmites and stalactites) has strengthened the weight of evidence in favour of drought conditions correlating with the collapse of the Maya. Despite this, Evans tells me, “to date, no one has been able to quantify the drought”. The problem, he says, is that the double oxygen isotope method used previously responds to multiple parameters in addition to precipitation, such as humidity and atmospheric conditions. As a result, answers to key questions such as ‘How big was the drought?’ and ‘By how much did precipitation rate decrease?’ have remained elusive.

In a 2018 paper, Evans and colleagues were able to combine traditional oxygen isotope measurements with a third isotope, 17O, and two isotopes of hydrogen. As each of these isotopes displays a different response to the competing environmental parameters, a clearer picture of precipitation levels can be formed. Such simultaneous isotopic measurements cannot be made on calcitic material, but a convenient alternative is gypsum, a hydrated mineral of calcium sulphate. Precipitating from lake waters during times of high evaporation rates, gypsum traps water molecules in its structure, meaning scientists can release and measure the isotopic composition of this water. In theory, this water reflects the isotopic composition of the lake at the time of gypsum formation and thus records the prevalent climatic conditions at the time.

This innovative approach has allowed for the first quantification of conditions of humidity and precipitation during the presumptive drought. By formulating a complex model of lake dynamics, Evans and colleagues were able to convert the isotope measurements of gypsum into estimates of climatic conditions. The model outputs suggest a reduction in average precipitation rate of 41-54% and a decrease in humidity of 2-7% during the Terminal Classic. Furthermore, they suggest the drought was multidecadal, representing a prolonged period of decreased precipitation rate.

This quantification of the Yucatán drought during the Terminal Classic represents a substantial improvement on previous data and revived tabloid sensationalism regarding the collapse of the Maya civilisation. However, despite answering the question of the scale of the drought, Evans is under no illusions regarding the limits of palaeoclimatic data. It alone cannot address the question of how much impact the drought would have had on the Maya civilisation. It tells us nothing of how the drought would have impacted crop yields or of how the Maya people might have been able to adapt their infrastructure to suit the drier landscape. Rather, the data provide a potentially powerful tool for helping us answer these questions. Evans proposes that in the future, progress might be made by producing computer models of societally relevant parameters, such as crop yields, and forcing these models with his climatic data.

The outcomes of palaeoclimatic studies alone are insufficient to conclude whether drought contributed to the collapse of the Maya civilisation. Furthermore, it is unclear even in the modern day exactly how climate and society interact. A major drought affected the Levant region of the eastern Mediterranean from 2006 to 2011, with the associated displacement of thousands of Syrians from rural communities into urban areas in an attempt to access secure food, water and economic resources. Commentators regularly cite the subsequent stress on infrastructure and social relationships as a major contributor to the anti-government protests in 2011 and the outbreak of civil war. If drought did indeed cause large societal changes in Syria, then it did so through a complex web of interactions. Evans summarises nicely when he says that there is rarely a one-to-one correlation between climate and society.

Despite this complexity, the new data contributes to growing evidence that societies do not respond well to changes in patterns of rainfall. If drought can be linked to war, human suffering and large-scale societal collapse, then it represents a key vector in the discussion on climate change. These problems are intensely personal: while pictures of a collapsing ice shelf might provoke mild indignation, the very real prospect of a struggle for fresh water supplies is a much harder image to shake and is a powerful tool for climate action groups. Nonetheless, the scientific community has a responsibility to draw attention to the limits of its data and to communicate its findings responsibly: it is premature to say from palaeoclimatic evidence alone that drought caused the collapse of the Maya

James Kershaw is a 4th year Earth Scientist at Emmanuel College. Artwork by Alexander Bates, @as_bates

The Maya civilisation first emerged roughly 3,000 years ago, and reached its zenith around 250-900 AD. In 2018, new technology called LiDAR (light detection and ranging) uncovered thousands of previously unknown Maya structures, detecting them beneath smothering vegetation



How the Antarctic is used as a Neutrino Detector

Maeve Madigan discusses how and why we can leverage Antarctic ice to find some of the most elusive particles in the known Universe

The questions asked by physicists today require us to build larger and more precise experiments than were previously thought possible. Just take a look at the Laser Interferometer Gravitational-Wave Observatory and the Large Hadron Collider. We are entering an era where our ability to investigate further will be limited by the cost and expertise required to build the necessary equipment. But what if we could avoid building these detectors from scratch? In the pursuit of understanding mysterious particles called neutrinos, physicists at the South Pole have made use of the Antarctic ice sheets as a core part of their experiments. By using the Earth itself as a laboratory, the IceCube and ANITA collaborations have been able to probe some of the most elusive challenges in particle physics and cosmology today.

The standard model of particle physics provides us with a recipe for the universe. It gives us the ingredients, fundamental particles like the electron and the Higgs boson, and it tells us how to combine them through interactions such as the electromagnetic force. While the properties of electrons have been known for decades and the Higgs boson has been in the public eye since its observation in 2012, there are still particles in the standard model that manage to hold an air of mystery. These are the neutrinos. The 2015 Nobel Prize in physics was awarded for the discovery that proved that some types of neutrinos must have a small, but nonzero, mass. Prior to this they were thought to be completely massless, and even now their masses have not been accurately measured.

So, what is it about neutrinos that makes their properties so difficult to determine? Unlike most other particles in the standard model, neutrinos do not carry electric charge. This means that if you were to shine a light on a bunch of neutrinos, the light would pass straight through, as if they weren’t there.

Usually we detect particles by looking at how they interact with other particles. However, of all the forces in the standard model, neutrinos can only interact through the weak force. The weak force allows neutrinos to interact with subatomic particles like protons and neutrons, producing electrons which can then be detected. However, as the name suggests, these interactions are weak: they rarely occur, and create only a faint signal for particle physicists to measure. It does not take much inspection to notice that the standard model is incomplete. The force of gravity, for example, is completely absent from the recipe. Finding a way to extend the current theory is a challenge. Determining what lies beyond the standard model is one of the most important goals pursued by physicists today. Because we know so little about neutrinos, there is plenty of scope for incorporating them into new theories. The question of why neutrino masses are so tiny has led to interesting new theories postulating the existence of additional heavier neutrinos, called ‘sterile’ neutrinos. Sterile neutrinos have even been suggested as candidates for dark matter, the invisible matter thought to constitute over 20% of the universe. By studying neutrinos and determining their properties, physicists can explore and improve these possible theories. It is not only particle physicists that are interested in neutrino detection: it is also a valuable tool to astrophysicists. Astrophysicists study cosmic rays, streams of extremely high energy protons and other subatomic particles, and try to determine their sources. However, protons are easily scattered and deflected by magnetic fields in their path. When a cosmic ray signal is detected, tracing the ray back to its origin is extremely challenging because we cannot assume it has travelled its whole journey in a straight line. Neutrinos avoid this problem. They are much less likely to be thrown off
course because they rarely interact with other particles, and so they can travel long distances without being disturbed. This means they can provide an important mechanism for probing the sources of high energy cosmic rays in the distant universe. If neutrinos are capable of travelling these long distances undisturbed, how can scientists stand a chance of finding them? Luckily, their interactions with water and ice provide recognisable signatures. You might have heard that nothing can travel faster than the speed of light. This is true in a vacuum, but when light travels through a medium such as ice, its interactions with other particles slow it down to a fraction of the vacuum speed. A neutrino, however, is not slowed down, and this means that it may actually move faster than light. Like the sonic boom for sound waves, this leads to characteristic forms of radiation called Cherenkov and Askaryan radiation. By analysing these, detectors are capable of reconstructing the neutrino’s speed and direction of motion. Because of how rare these events are, the experimental setup needs to be large. The bigger the detector, the more likely an interaction is to occur and the better our chances of seeing it. On top of this, experiments need to be as isolated as possible because such a weak signal is hard to distinguish from background noise. Some laboratories tackle these challenges with huge man-made water tanks, such as Super Kamiokande in Japan, or by building the experiments in underground mines, such as SNOLAB in Canada. Others have made clever use of the isolation and abundance of ice at the Earth’s South Pole. Antarctica is home to two neutrino experiments: the IceCube Neutrino Observatory and the Antarctic Impulsive Transient Antenna (ANITA). The IceCube detector lives up to its name: it encompasses a cubic kilometre of Antarctic ice, throughout which 5,160 Cherenkov radiation sensors are distributed. A high energy neutrino arriving at the Earth can interact with the ice to produce Cherenkov radiation which then travels through the ice and is detected by the sensors. The distance radiation travels through this ice depends on the ice’s purity: the more pure and transparent the ice, the further radiation will go. Not only does Antarctica provide a large quantity of ice: its ice is some of the purest naturally occurring ice in the world. This is good news for IceCube: Cherenkov radiation travelling a long distance through the detector will pass through many sensors compared to radiation travelling short distances. This allows for more data to be collected, and better measurements to be made. The ANITA experiment makes use of the Antarctic ice sheets in a slightly different way. Rather than embedding sensors in the ice, it consists of detectors held afloat over
35km above the Earth’s surface by a helium-filled balloon. High energy neutrinos pass through the Earth’s atmosphere and interact with the ice, producing Askaryan radiation, which is then measured by the ANITA detectors above. To maximise the amount of useful data that can be taken during each run, the launches are scheduled to take advantage of the Polar Vortex, an area of low pressure near the South Pole. By launching when the Polar Vortex is strong, the ANITA detectors are transported around the sky above the Eastern Ice Sheet, the largest ice sheet on Earth. This is where the ice is smoothest and the path of radiation can be easily reconstructed.

Both IceCube and ANITA have already succeeded in creating excitement in the world of physics. In 2017, IceCube detected a high energy neutrino which was traced back to an origin 3.7 million light years away. This was the first time the origin of such a high energy neutrino was localised in this way, providing a new insight into distant sources of cosmic rays. In 2018, ANITA announced that it had detected something unusual: signals from very high energy neutrinos that had travelled upwards through the Earth. The likelihood of a neutrino with such a high energy passing through the Earth is small, and so this measurement suggests the possibility that the neutrino may have been produced by some mysterious new particle. Whether this really is a sign of new physics is yet to be confirmed. Physicists wait in anticipation of further analysis and measurements.

From the Polar Vortex to the purity of the Antarctic ice sheets, the conditions at the South Pole are ideal for neutrino experiments. It is almost as though Antarctica was designed for neutrino detection. By using the Earth’s resources as part of their detectors, IceCube and ANITA have been able to make unprecedented measurements, and continue to shine light on some of the most pressing issues in physics and cosmology today

Cherenkov radiation is light produced by charged particles when they pass through an optically transparent medium at speeds greater than the speed of light in that medium
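The ‘faster than light in ice’ condition has a simple textbook form, included here for reference (the numbers are standard values rather than figures from the article). A charged particle travelling at speed v = βc emits Cherenkov light in a medium of refractive index n whenever β > 1/n, and the light comes out on a cone at an angle θ_c to the particle’s path given by

\[ \cos\theta_{c} \;=\; \frac{1}{n\beta}. \]

For visible light in ice, n ≈ 1.3, so a near-light-speed particle (β ≈ 1) radiates at roughly 40°. It is this fixed, well-defined geometry that lets detectors such as IceCube reconstruct the direction of the charged particle, and hence of the neutrino that produced it.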

Maeve Madigan is an astrophysics PhD student at St John's College. Artwork by Alexander Bates, @as_bates



Let's Talk About Soil Kasparas Vasiliauskas looks under our feet at some of the Earth's most overlooked material

Met in almost every step we take, soils, despite being so familiar, are often overlooked in discussions of natural systems. This is evident, for example, in making climate models and predictions and even more so when thinking about humanity’s future outside Earth. The entirety of soils and the space where their formation takes place is called the pedosphere. It is a dynamic interface and junction point for many other systems: the lithosphere - the top 100 km of the Earth’s rocks - erodes with the help of atmospheric, hydrological and biological processes to form this complex mixture of solids (loose rock particles and sediment, animal and plant debris), liquids (mostly aqueous - water based - solutions) and gases (both in pores and dissolved). To really understand what soil is, we need to look at the roles it plays in global processes. While there are formal definitions and descriptions, it is often the case that function precedes the name of soil. In overview, it is:

• a modifier of Earth’s atmosphere - it stores and cycles carbon, making it important in understanding climate change
• a medium for plant growth - currently a fundamental part of food production
• a means of water storage, supply and purification
• a diverse ecological host

It is estimated that there are more microorganisms in a handful of topsoil than there are people on Earth

Soil is not a mere tool or catalyst for the processes above; it is central to them. Soil in turn is modified by all of them and keeps evolving with time. While not all soils have all the mentioned functions, the common denominator is hosting and supporting life. It is now evident that biology is not only affected by, but also influences, the atmosphere and in turn the climate. A lot of that effect comes from feedback loops in the global carbon cycle, where the role of soils has only recently been realised. There, carbon is stored in a few forms: organically as living biomaterial or undecomposed remains of organisms, and inorganically in minerals, carbon dioxide, methane and their hydrates (complexes with water). The majority of soils can be assumed to be more or less in equilibrium with
the atmosphere, at least with respect to carbon. But what about permafrost? Permafrost is the soil, rock or sediment remaining at or below the freezing point of water for at least two years. With global temperatures rising, high latitude countries such as Canada, Iceland, Greenland and much of Russia are already seeing thawing of the permafrost and more soils being uncovered by retreating ice. Recent studies estimate that the carbon content of northern permafrost soils equals more than double the amount currently in the atmosphere. Horrified by the predicted rates by which climate change could occur if a lot of this carbon leaves the soil suddenly (in a few tens of years), the scientific community are researching the consequences of soils getting warmer. While similar, even if slower, climatic changes have occurred in the past, the geological record associated with past warmings does not show a soil-carbon signal. This suggests that retreating permafrost and warming soils do not just dump the stored carbon into the atmosphere. A recent study by Dr Robert Sparkes from Manchester Metropolitan University [3], looking at terrestrial permafrost sediments being redeposited to the Arctic Ocean, found that at least 80% of the carbon is either reused or redeposited.

While that is a bit of a relief, more context is needed - Siberia and Canada have mature and old soils, but geologically young Iceland has immature andosols (volcanic ash and recently eroded volcanic rocks) and sands. To an extent, even this substrate - material in which growth takes place - fulfils the aforementioned functions; however, some would hesitate in calling Iceland’s cover a soil. In truth, it does behave a bit differently with changing temperatures from what we see in Siberia’s permafrost, as has been found by Dr Utra Mankasingh and other scientists at the University of Iceland, by looking at carbon released in the form of carbon dioxide and methane. While the warmer the soil, the more carbon - especially methane - is released, biological activity also increases; more carbon is then cycled, and a positive feedback loop is therefore partly prevented. Including such data and processes in climate modelling not only permits more accurate predictions, but also allows us to better manage the soil aspect of global changes.

One such change concerning everyone on the planet is in land that has been or can be used for growing food. In this case, while soil-stored carbon is also used, plants take up atmospheric carbon dioxide. A key limiting factor for the growth and health of crops is the nutrient level of the substrate. Current ‘quick-fix’ approaches, such as utilising extremely fertile Amazon soil, have clear negative effects on biodiversity, ecology and climate. Everywhere, natural nutrient resources are exhausted in just a few subsequent harvests, depending on the soil, climate and crop. This has mostly been overcome with mineral fertilisers. However, their continued use, especially together with commonly used pesticides, jeopardises other roles soils play in nature, especially water purification and supply. Thus, to feed and ensure health for the growing multibillion population in the long term, the process has to become fully sustainable and economically viable. Using fertilisers and huge funds, even sandy and arid Arabian Peninsula lands were turned into a productive food source, despite irrigational as well as nutritional challenges.

Such a success story is hopeful not just for us here on Earth, but also for future generations exploring more of the solar system. The Moon and Mars are the two likely next extra-terrestrial human habitats, but Mars is expected to both be able, and needed, to support larger colonies in the future. Therefore, rather than bringing food or soil, we need to understand how materials that are there already could be used to produce sufficient fertile soil cover. In truth, ‘Martian soil’ exists already, mostly in the form of dust and physically weathered minerals such as olivine. Chemical weathering - on Earth mostly conducted by water - is hence required to break down these minerals, liberate nutrients and neutralise toxic compounds, such as the perchlorate salts that may be present, before any farming can take place. As biology is important in soil development, introducing some bacterial life might be a viable option in the case of Mars. This could help us in the creation of an actual Martian soil

It takes around 500 years to produce just under an inch of topsoil, the most productive layer of soil

Kasparas Vasiliauskas is a fourth year Earth Scientist at Churchill College. Artwork by Alexander Bates, modification of an image by Thomas Hawke



A Digital (R)Evolution

Charles Jameson examines neuroscience’s role in solving the most difficult computational problems

In May 1997, IBM’s computer program ‘Deep Blue’ infamously defeated chess world champion Garry Kasparov in a set of six highly anticipated games. In a curious case of repeated history, DeepMind’s program ‘AlphaGo’ did the same for the ancient Chinese board game of ‘Go’ 19 years later, beating 18-time world champion Lee Sedol 4–1 in March 2016. While beating another board game’s top player after two decades hardly seems like significant progress, the underlying differences between ‘Deep Blue’ and ‘AlphaGo’ reveal a profound paradigm shift in recent computing. AlphaGo is proof that neuroscience has become as vital as
electronics and mathematics in computer science for solving many modern problems. AlphaGo is an example of a ‘neural network’ program. In the brain, neurons are cells which take in, process, and transmit electrical signals. The chemical processes inside these neurons and the links between them define how humans think, feel and move. This led to a bold new idea - what if computers could model these neurons? In 1943, neuroscientist Warren McCulloch and mathematician Walter Pitts proposed a way to simulate the brain’s neurons. They represented the brain’s chemical processes as mathematical functions, and electrical signals between neurons Lent 2019


as on-off signals that were passed from function to function. By manually writing these functions and linking them together, McCulloch and Pitts created the first artificial neural networks. However, something was missing from their model—learning. What makes us most human—the ability to learn—stems from the malleability of neurons. Their model did not consider that neural connections can strengthen, diminish or disappear entirely as the brain accumulates experience, and learns what is right and wrong. In 1957, Rosenblatt was the first to fill this gap by allowing the artificial neural connections to adapt over time. By enabling more varied calculations inside neurons, connections between neurons also became more mathematically complex and far more powerful. With these changes, Rosenblatt successfully simulated a neural network that could identify a triangle in a 20x20 image. Rosenblatt had remarkable ambitions for his newly-dubbed ‘perceptrons’. He declared that his perceptrons could be “fired to the planets as mechanical space explorers” in the near future. While this certainly captured the public’s imagination, he soon faced the reality that computers of his time simply weren’t powerful enough and his model was not thorough enough to properly model the human brain. The relationship between neuroscience and computer science began to stagnate after Rosenblatt’s discoveries. More focus was placed on sheer computing power, and with good reason—the aspirations of neuroscientists simply could not be met by computers of the day. Mathematics took the lead in advancing research into neural networks: refining the layout of neurons; optimising computations within the neurons; and developing new techniques to imitate the brain’s ability to learn. For many decades, there ironically wasn’t room for neuroscience in neural networks. However, these researchers’ efforts are now paying off in the modern era. Computing power has increased more than a billion-fold since Rosenblatt’s era, and new chips, dubbed ‘Tensor Processing Units’ (TPUs) are now being specifically designed to improve neural networks. These advancements have given this research of the past new life and caused a surge in neural network performance, like in the ‘ImageNet Large Scale Visual Recognition Competition’ (ILSVRC), a competition testing neural networks’ ability to identify animals and objects in a set of thousands of images, where the accuracy of the winning algorithm has increased from roughly 71.8% in 2010 to a staggering 99.98% in 2017. Many of the problems that we face in computer science today are fundamentally human problems. Image recognition, speech recognition and driving, for example, are all very human tasks, and we are aiming to create a system which can match, or even exceed, our own abilities. It should therefore be of no surprise that neural networks are already at the forefront for what is possible, as they are most closely linked to how we approach these problems ourselves. Nevertheless, there are still many problems that neural networks cannot solve. For instance, today’s image recognition neural networks are plagued by issues with ‘adversarial tests’, which use images specifically designed to confuse the model. Rosenfeld, Zemel and Tsotsos showed this in an August 2018 study by inserting an elephant drawing into an image of a man in his house. 
Depending on the position of the elephant, the neural network may have ignored the elephant, misinterpreted the elephant as a chair, or even have become confused about other objects in the image which were correctly identified before the elephant was introduced. Lent 2019

Tsotsos, the neuroscientist of the group, explains how this demonstrates that large sections of brain function are still under-utilised in computer science. A human, upon seeing such an image, would first recognise that the elephant is out of place and proceed to examine the elephant and the rest of the image separately. In essence, humans have learnt to do a double take in order to understand images more accurately and more efficiently. Even today’s best neural networks haven’t yet made such logical leaps. Back in the world of board games, one question still remains: what sets DeepMind’s neural networks apart from Deep Blue? Deep Blue was the product of hundreds of hours encoding moves, positions and strategies. A team of engineers, computer scientists and chess grandmasters at IBM harnessed computers’ sheer power to calculate the best possible move in any turn. But even computers have their limits, and it was clear that this strategy would not work again for the complexities of Go. DeepMind’s newest offering, ‘AlphaZero’, is much simpler. It is given the rules of Go, and nothing else. Left to its own (literal) devices, it plays itself over and over again, and it learns from every win and loss. Within 21 days, AlphaZero becomes quantifiably superhuman, beating Go world champions consistently. What’s most remarkable about this system, though, is that AlphaZero can be given any ruleset. Given the rules of chess and just nine hours to train itself, AlphaZero is able to beat not just humans, but also the world-leading programs that have dominated the chess world since 1997. The fatal flaw of Deep Blue is that it needs to be taught by someone, and the better the teacher, the better the result. In contrast, AlphaZero is not taught. It is not instructed what to do, and is thus not limited by the abilities of its teachers. Instead of caring about what humans think would work best, it learns everything it needs all by itself. In many ways, you can already see humanity in artificial intelligence systems like AlphaZero. Just like humans, AlphaZero is not designed to solve a single problem. It is designed to absorb information, and to learn from its mistakes just as we learn from our own. And why does AlphaZero seem so human sometimes? It has been made entirely in our image. The processes of trial and error, adaptation and learning are inherent to the neural networks that drive both our brain and AlphaZero. It will be systems like these that will be able to pick up fundamentally human skills like speaking languages fluently, recognising objects at a glance, and understanding nuanced facial expressions. It is not clear just how far neuroscience and computer science will take us. Perhaps we will never understand the brain well enough to make a computer completely in its likeness. In any case, neuroscience will continue to inform and inspire the development of neural networks, and will no doubt have a profound impact on the evolution of computer science.
Charles Jameson is a first year Computer Scientist at Queen’s College. Artwork by Nah Yeon Lee
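As a small aside for the technically curious: the kind of adaptive rule Rosenblatt introduced can be sketched in a few lines of modern Python. This is a toy illustration, not the historical Mark I hardware or DeepMind's code, and the AND task, learning rate and function names are invented for the example.

    def train_perceptron(samples, epochs=20, lr=0.1):
        # Rosenblatt-style error correction: nudge each input weight whenever the prediction is wrong
        n = len(samples[0][0])
        w = [0.0] * n                      # one weight per input 'connection'
        b = 0.0                            # bias term
        for _ in range(epochs):
            for x, target in samples:      # target is 0 or 1
                fired = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                error = target - fired     # zero when the prediction is already right
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
        return w, b

    # Toy example: learn the logical AND of two on-off inputs
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(data)
    print([1 if w[0] * x0 + w[1] * x1 + b > 0 else 0 for (x0, x1), _ in data])   # [0, 0, 0, 1]

Stacking many such units, and letting data rather than hand-written rules set the connection strengths, is essentially what today's deep neural networks do at vastly greater scale.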



A Bohmian Rhapsody Mrittunjoy Guha Majumdar talks Bohmian mechanics, the 'causal interpretation' of the strange world of quantum mechanics

"Here is a simplistic overview of what Bohmian mechanics entails. The state of a system of N particles is described by its wavefunction, a complex function that varies based on the possible configurations of the system, and its actual configuration, defined by the actual positions of its particles. The behaviour of the system is then described by two evolution equations, the famous Schrödinger’s Equation, which describes how the wavefunction changes with time, and a Guiding Equation, which describes how the position of the particles changes"

16

Fluid droplets bounce when placed on the surface of a vibrating fluid bath. A student working at the Matter and Complex Systems Laboratory, National Centre for Scientific Research (CNRS) in France discovered this using oil droplets and an oil bath in 2005. The bouncing of the droplets seemed to be guided by an unseen wave – a guiding, or “pilot”, wave. If that struck you as bizarre, there’s more – what if one were to posit that nature is fundamentally like those bouncing droplets riding waves? What if such pilot-waves could explain the peculiar and fairly counter-intuitive behaviour of particles on the quantum level? In the early twentieth century, it was discovered that the laws of physics that govern macroscopic objects don’t quite apply to microscopic realms. For instance, in these realms, the act of observing physical phenomena actually influences the phenomena taking place. (Thankfully, this doesn’t occur on macroscopic levels – one would imagine life would be quite strange otherwise.) On the quantum level, waves can also act like particles and particles can act like waves. Matter can also go, or ‘tunnel’, from one spot to another without moving through the intervening space. And if that were not enough, correlations can appear across vast distances instantaneously in what Einstein memorably described as ‘spooky action at a distance’. Outlandish! For most of the past century, the predominant explanation for why a quantum particle sometimes behaves like a wave has been the ‘Copenhagen interpretation’, which states that a single particle is like a wave that is smeared out across the universe, and that smear collapses into a definite position only when observed. (To quantum physicists, such a smear is known as a ‘superposition’.) One could thus say that a quantum object, like a particle, is always in a superposition of states, until observation collapses it into just one of these states. This collapse is found to obey the laws of probability. However, what if we could explain this wave-particle behaviour deterministically, such that the outcome isn’t simply a matter of chance? Enter Louis de Broglie and David Bohm, and their alternative interpretation, known as the ‘pilot-wave theory’, which posits that quantum particles are borne along on pilot-waves, just as the oil drops were


borne along pilot-waves when bouncing as observed at CNRS. Unlike other interpretations of quantum mechanics, such as the Copenhagen Interpretation or Many-Worlds theories, the Bohmian interpretation (which, in fact, precedes the Copenhagen Interpretation) does not consider observers or the act of observation as necessary for the predictions of quantum mechanics to hold true. It is a ‘quantum theory without observers’, if you will. In the Bohmian formulation, an individual quantum system is formed by a point particle and a guiding wave. Wavefunctions are quantities that mathematically describe the wave characteristics of a particle. While most quantum theories suggest you can describe a system by wavefunctions alone, Bohmian mechanics states that a quantum system is fundamentally about the behaviour of particles. The particle nature of matter becomes primary while the wavefunction is secondary. These particles can be described by their positions, and Bohmian mechanics describes how those positions change with time. Here is a simplistic overview of what Bohmian mechanics entails. The state of a system of N particles is described by its wavefunction, a complex-valued function over the possible configurations of the system, together with its actual configuration, defined by the actual positions of its particles. The behaviour of the system is then described by two evolution equations: the famous Schrödinger’s Equation, which describes how the wavefunction changes with time, and a Guiding Equation, which describes how the positions of the particles change. Researchers who have spent time analysing the Bohmian idea with scientific rigour have shown that Bohmian mechanics agrees with most, if not all, quantum experiments carried out up to now. Some of the strongest support for Bohmian mechanics has come from studying the characteristics of the particle and its guiding pilot-wave, and relating them to empirical evidence. More fascinatingly, Bohmian mechanics is an example of a hidden-variable theory. Hidden-variable theories are those that regard the Universe as inherently deterministic (“A must cause B and not C”) and only



seemingly probabilistic (“A could cause B or C”) due to variables that we are not aware of – variables that are hidden. In Bohmian mechanics, the variable hidden from us is the position of the particle. Hidden-variable theories predict that the gradient-field of the hidden variable should be observable through weak measurement, that is, measurement that does not greatly disturb the system by the act of measuring itself. In the case of Bohmian mechanics, this corresponds to the particle’s velocity. As such, weak measurements of particle velocities have been used in quantum experiments to track the trajectories of single photons. Although Bohmian mechanics resolves the issues of quantum wavefunction collapse and measurement quite nicely, it has attracted its fair share of criticism. Many quantum physicists do not believe that Bohmian mechanics is useful for research. The Nobel laureate Steven Weinberg, in a private exchange of letters with colleague Sheldon Goldstein, wrote: “In any case, the basic reason for not paying attention to the Bohm approach is not some sort of ideological rigidity, but much simpler — it is just that we are all too busy with our own work to spend time on something that doesn’t seem likely to help us make progress with our real problems.” Tomas Bohr, a fluid physicist at the Technical University of Denmark and grandson of the famous physicist Niels Bohr, has also recently given a strong argument against Bohmian mechanics in a thought experiment that could be its downfall. Nonetheless, for now, Bohmian mechanics remains one of the most fascinating interpretations of quantum mechanics and is one of the last hidden-variable theories to survive the test of time. As I like to say: harmonious, this Bohmian rhapsody wafts along
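For readers who want to see the two evolution equations written out, their standard textbook form for N spinless particles with masses m_k, positions Q_k and potential V (a general statement of the theory, not tied to any particular experiment mentioned here) is:

\[ i\hbar\,\frac{\partial\psi}{\partial t} \;=\; -\sum_{k=1}^{N}\frac{\hbar^{2}}{2m_{k}}\,\nabla_{k}^{2}\psi \;+\; V\psi \]

\[ \frac{\mathrm{d}Q_{k}}{\mathrm{d}t} \;=\; \frac{\hbar}{m_{k}}\,\operatorname{Im}\!\left(\frac{\nabla_{k}\psi}{\psi}\right)\!(Q_{1},\dots,Q_{N}) \]

The first is Schrödinger’s Equation, governing the wavefunction; the second is the Guiding Equation, which reads the particles’ velocities off the wavefunction evaluated at their actual positions.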

"The Nobel laureate Steven Weinberg, in a private exchange of letters with colleague Sheldon Goldstein, wrote: “In any case, the basic reason for not paying attention to the Bohm approach is not some sort of ideological rigidity, but much simpler — it is just that we are all too busy with our own work to spend time on something that doesn’t seem likely to help us"

Mrittunjoy Guha Majumdar is a postdoctoral fellow at the Cavendish Laboratory, working under Nobel laureate Prof. Brian Josephson. He is also the current Vice President of the Graduate Union. Artwork created from a photo by Brian Wolfe



The Earth as a Natural, Living Laboratory
BlueSci presents three perspectives on how scientists have expanded our understanding of science using the greatest laboratory of all – planet Earth. We begin with a piece by Bryony Yates, on using Earth's biosphere in the study of life
Humans have long been fascinated with the natural world, as prehistoric paintings of plants and animals so beautifully illustrate. We can trace formal scientific study back to Ancient Greek philosophers, with Aristotle as the originator of systematic biological investigation. His student, Theophrastus, is considered one of the first ecologists. Travelling throughout Greece, he made many valuable observations about plants: their functions, properties and the importance of factors such as climate and soil to their growth. As scientific study, including biology, took off in the 18th and 19th centuries, biological specimens were increasingly brought into the laboratory. Scientists and enthusiasts scrutinised them under their microscopes, hoping to unlock the mysteries of life. This form of study, however, had its limitations for understanding how whole ecosystems function. Sadly, one cannot squeeze a tropical rainforest into a small laboratory; nor will isolating, killing and dissecting its component parts tell you much about this complex, dynamic network. The only way to truly understand an ecosystem is to go out and explore it. This approach has huge historical and contemporary value. The voyages of Darwin and Wallace, where they observed life's diversity in its ecological and geographical contexts, were the inspiration behind the theory of evolution by natural selection – a theory underpinning all of modern biology. Even today, survey work remains the cornerstone of ecology, with ecologists on the ground, trapping, observing and counting. Many readers may well have experienced this, retaining fond (or not) memories of flinging quadrats around a school field. This and other sampling methods have remained relatively unchanged for decades. However, even these studies are limited in their scope: a team of scientists can never observe every plant, animal and fungus in a rainforest. The sheer amount of labour involved, accessibility issues and numerous other factors make this


impossible. Therefore, they must sample what they can and extrapolate (sometimes using sophisticated mathematical models) to draw conclusions about the habitat as a whole. Over the last few decades, new technologies have begun to change this, allowing us to understand biological diversity on regional and even global scales.



Studying the Earth from Space |

If you want to see the big picture, sometimes you must take a step back – tens of thousands of kilometres back! Satellites orbiting the Earth are used as platforms for remote sensing; the longest-running Earth observation programme, NASA's Landsat, has been running since the 1970s. The satellites are fitted with sensors that measure the intensities of various wavelengths of light reflecting off the planet's surface. These measurements can be hugely informative. For example, red light is strongly absorbed by chlorophyll (the green pigment in all plant leaves), whereas near-infrared light is scattered. Therefore, the ratio of red to near-infrared light reflected from the surface can be used to construct maps of canopy "greenness" to monitor plant leafiness and tree health, and to detect deforestation. These techniques are not limited to the study of terrestrial vegetation. Collectively, marine phytoplankton (tiny photosynthetic algae and bacteria) carry out almost as much photosynthesis as land plants, and yet they are poorly understood and difficult to study. Remote sensing can distinguish different light-absorbing pigments to identify different algal groups and create distribution maps that span the whole globe. These data can also be used to track harmful toxic algal blooms, estimate marine productivity and can even be correlated with other data, such as nutrient abundance and temperature, to help us understand the factors affecting algal distribution. Satellite imaging is brilliant for mapping ecosystems in two-dimensional space, but to get an idea of the three-dimensional structure of a habitat, we need to use alternative methods. LiDAR (Light Detection and Ranging) involves an airborne laser that directs pulses of light towards the ground. This light is reflected by the surfaces it hits and a sensor on the aircraft detects this, calculating the distance travelled. This information, combined with GPS data, is used to produce a detailed model of surface characteristics. This can give a useful insight into forest canopy structure and land topography, and can even be adapted to measure seafloor elevations by using water-penetrating green light.
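To make the "greenness" idea concrete: one widely used measure is the Normalised Difference Vegetation Index, NDVI = (NIR - Red)/(NIR + Red), and the sketch below computes it per pixel in Python. The reflectance values are invented for illustration, not real Landsat data. (The LiDAR calculation mentioned above is similarly simple: the range is just the speed of light multiplied by half the pulse's round-trip time.)

    import numpy as np

    def ndvi(red, nir):
        # NDVI = (NIR - Red) / (NIR + Red): close to +1 over dense leafy canopy, near 0 over bare ground
        red = red.astype(float)
        nir = nir.astype(float)
        return (nir - red) / (nir + red + 1e-9)   # tiny term avoids division by zero

    # Toy reflectances: a leafy pixel absorbs red and scatters near-infrared; bare soil does neither strongly
    red = np.array([[0.05, 0.30]])   # [leafy pixel, bare-soil pixel]
    nir = np.array([[0.50, 0.35]])
    print(ndvi(red, nir))            # roughly [[0.82, 0.08]]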

Molecules Matter | Remote surveying provides an excellent way to get a broad, descriptive overview of whole ecosystems. However, it can miss the finer details regarding individual species and their ecological roles. Capturing these requires a drastic change in scale: we must zoom in to focus on the microscopic molecules of life. The first draft of the human genome sequence was published in 2001. It took many labs several years and an estimated $300M to produce. Now, however, technology has moved on and genome sequencing is faster and cheaper than ever. This has given rise to metagenomics. In this approach, scientists take environmental samples and sequence all of the DNA present. This is particularly revolutionary for our understanding of the microbial world, which, since so many species cannot be cultured in the lab, is chronically understudied. Metagenomics has yielded insight into ecosystems as varied as the Atlantic Ocean and the inside of the human gut!

The Tara Oceans Project is one of the largest ocean sampling projects to date, and has enabled metagenomic analysis of the global marine microbiome. In 2009, the schooner Tara set off on her two-and-a-half year expedition around the globe, during which the scientists on board collected ~35,000 samples from over 200 ocean stations. They collected DNA, morphological and contextual (light, chemical composition etc.) data, focusing on organisms <10mm in size. To analyse the DNA, two approaches were used. In an approach known as metabarcoding, a specific gene or "barcode" is sequenced from all the DNA collected. The choice of gene is important: it must be present in all species that are being analysed and vary such that it can usefully discriminate between species. This approach shows the number of species in a sample and their relative abundances. Metagenomics, on the other hand, sequences the entirety of the DNA. This is used less frequently, since it is slower and more expensive, but provides a much better insight into what might be going on in these communities: if we know all, or most, of the genes each member of the community has, we can then infer their ecological roles. Using these approaches, the Tara Oceans Project has vastly expanded our understanding of the marine microbial world. Metabarcoding revealed new diversity, with roughly a third of sequences from eukaryotes (cells with nuclei) not matching any known reference. Correlating species data with environmental information has indicated what factors shape community composition; correlating the abundances of unrelated organisms has shed light on hitherto unknown ecological relationships. Furthermore, identifying common themes between these data and those from the human gut has revealed fundamental properties of microbial communities.

Molecular analysis doesn't stop at DNA, and a suite of new tools is starting to come to the fore. Metatranscriptomics and metaproteomics, the analyses of gene expression and proteins respectively, could allow us to make more reliable inferences about the functional dynamics of microbial communities. This will show us how the genes detected by metagenomics are actually put to use.

In and Out of the Lab: A Case Study | Perhaps the most powerful way to understand the natural world is by combining our natural laboratory with our artificial one. A challenge of many natural biological studies is that they only reveal correlations. These can be useful for generating hypotheses, but we need rigorous laboratory-based investigation to test them. A good example comes from the Tara Oceans Project. Metabarcoding showed that the distributions of an acoel flatworm and a green microalga were very tightly correlated, strongly hinting at some sort of ecological interaction. When researchers brought these worms into the lab and looked under the microscope, they saw that the two species were indeed intimately acquainted: they observed algal cells living inside the worms, and the DNA of these cells matched that of the environmental samples.

As the study of our planet's natural biological laboratory becomes more high-tech, scientists are learning more about living things than Darwin, Wallace and their contemporaries could ever have imagined. We are now able to study huge swathes of biodiversity from beyond the Earth's surface and interrogate the very molecules that make up its organisms. However, field ecologists need not hang up their muddied walking boots any time soon; rather, the future will see them more frequently joining forces with molecular biologists, data scientists, mathematicians and others. By combining work on Earth's natural laboratory with that from experimental and digital ones, we can gain truly remarkable insights into the natural world.
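As a minimal sketch of the bookkeeping behind metabarcoding (the reads below are invented placeholders, not Tara data): once each barcode sequence has been assigned to a taxon, species richness and relative abundances fall out of a simple tally.

    from collections import Counter

    # Each sequenced barcode read has been assigned to a taxon (placeholder names)
    reads = ["diatom", "copepod", "diatom", "acoel_flatworm", "diatom", "unknown", "copepod"]

    counts = Counter(reads)
    total = sum(counts.values())
    richness = len(counts)                                   # number of distinct taxa detected
    abundances = {taxon: n / total for taxon, n in counts.items()}

    print(richness)                                          # 4
    print({t: round(a, 2) for t, a in abundances.items()})   # diatom 0.43, copepod 0.29, ...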

How do materials look and behave at temperatures of thousands of degrees and pressures of thousands of atmospheres? While laboratory reaction chambers can recreate some of these conditions, Hannah Bryant explains how we can also gather clues from the centre of our planet
The secrets of the Earth lie not only in the complex natural phenomena that govern our weather, landscape and oceans, but also in those that occur in the sub-surface. The deeper we descend into the crust, the more uncertain we become about the processes that occur there, due to our inability to reach the depths needed to observe these events first hand. At 12,262m deep, the Kola Superdeep Borehole in Russia is the furthest we have reached below the ground, yet it is a mere 0.2% of the Earth's radius. Thus, when questioning the mechanics of the mantle and core, our only available techniques are either to replicate these conditions in the lab or to use the Earth as our reaction vessel and try to study it from the surface. This natural laboratory, where the Earth is our sample, allows us to investigate timescales and conditions that simply can never be replicated in a lab setting.


The mineralogy of the mantle is fairly comprehensively understood. Volcanoes occasionally throw up mantle rocks in the lava that erupts, yielding kimberlite samples from as deep as 450 km below the surface. This process does not extend to core samples, and thus the composition and structure of core minerals at the high temperature and pressure conditions they experience are not immediately obvious. Using meteorite samples, which likely share the same composition as the solar nebula from which our planet formed, geochemists have determined the core composition to be largely a mixture of iron and nickel. Since not enough of these elements are present in the crust and mantle, they must be present in the core to give the same relative proportions of elements as the rest of the solar system. A persistent question in core studies is how the iron and nickel are structured. This has been explored in diamond anvil cell experiments, where a sample is bombarded by X-rays while being squeezed between two diamonds to pressures matching core conditions. By analysing the diffraction image formed from the interaction of these X-rays with the crystal structure, geophysicists have shown that one structure iron and nickel could take is the so-called hexagonal close-packed ('hcp') structure, which enables an efficient 12-fold coordination. So, theoretically, iron and nickel should arrange themselves to favour higher-density, more efficiently packed structures at high pressures. However, not only is the core deep; it is also hot. An opposite effect takes place at high temperatures: the particles have more energy and so favour lower-density structures that provide more space for vibration. This suggests that at core temperatures, iron and nickel could instead take up a more open body-centred cubic ('bcc') structure. So, which is the correct structure? This question is difficult to resolve because there is conflicting evidence on the behaviour of the alloy at simultaneously high pressures of 330 GPa and temperatures of 5700 K (close to the

temperature of the surface of our Sun). Perhaps seismic data holds the key. A map of how pressure waves released from earthquakes propagate through the Earth can tell us when the composition of the material they pass through changes. These waves change in velocity when they hit new boundaries, such as changes in density or mineralogy, and are reflected or refracted depending on the nature of these margins. This enables us to build up an image of the changing layers within our planet. In particular, we can unravel the structure of iron in the inner core by looking at the velocities of waves travelling through it, and from this determine the density. A recent study by Guoyin Shen and his colleagues at the Carnegie Institution of Washington indicated that seismic waves travel faster through the inner core along polar paths than along equatorial ones. This has been termed 'seismic anisotropy', and it suggests that the structures must have different alignments along their three crystal axes – a requirement satisfied only by the 'bcc' structure. Seismic anomalies are almost always explained by structural variations in minerals, as the stability of these structures changes with pressure and depth. They are thus a useful tool in probing what the deep Earth is made of. But why should we care about knowing these structures in the first place? Crystal structures are closely linked to their material properties, which in turn affect the processes involving them. For instance, seismic anomalies revealed a new material at the base of the mantle, made of magnesium silicate in a distorted orthorhombic crystal structure. It has been shown to have better electrical conductivity than first expected, which directly affects the electromagnetic interactions that influence the behaviour of Earth's magnetic field, known for protecting us from solar radiation. Also, thermal conductivity studies showed that heat flows through this material, and hence the base of the mantle, by radiation as opposed to

The ‘hexagonal close packed’ structure allows every atom in a lattice to touch 12 others – 6 around, 3 above and 3 below. This is known as ‘12-fold coordination’



conduction or convection. As the core crystallises, it releases latent heat which drives mantle convection and ultimately plate tectonics. Knowing how this heat is transferred throughout the mantle thus gives us a better picture of the origin of volcanoes, earthquakes and mountain belts. Seismic studies are also used to explore phenomena occurring closer to the surface, such as plate tectonics. For example, in places where slabs of ocean crust subduct, or sink, into the mantle underneath another tectonic plate, we see lots of earthquakes originating at unusual depths of up to 670 km. This is strange, as the material the slab is sinking into becomes hotter and less rigid with depth; there should be fewer earthquakes, not more. Studies later revealed that these deep earthquakes are caused by cracks forming when a mineral in the slab, olivine, converts to spinel. This transformation should have occurred by 400 km, but because the downgoing slab is so cold, olivine is able to maintain its structure until around 600 km, where it begins to be sufficiently heated to turn into spinel. The difference in strength between the two materials causes large cracks to form, releasing energy in the form of earthquakes. This process caused the 2013 Okhotsk Sea earthquake in east Russia, which could be felt as far away as Tokyo, Japan. Using the Earth to study processes we cannot recreate enables us to better understand how and why natural disasters occur, and can leave us better equipped to deal with them. There is also evidence to suggest these tectonic mechanisms could be instrumental in the origin of life, as the recycling crust can allow heat and gases to be released onto the ocean floor and provide the vital ingredients for the first cells to begin replication. Studying the way our dynamic Earth moves therefore allows us not only to trace back our planet's past, but also to look into the history of life itself.
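A quick back-of-the-envelope check on the packing argument above: the ideal hard-sphere packing fractions of the two candidate structures are textbook values, and a few lines of Python show why hcp counts as 'efficient' and bcc as 'more open'.

    import math

    # Ideal hard-sphere packing fractions (textbook values, reproduced as a sanity check)
    hcp = math.pi / (3 * math.sqrt(2))   # hexagonal close-packed (and fcc): ~0.74
    bcc = math.sqrt(3) * math.pi / 8     # body-centred cubic: ~0.68
    print(f"hcp fills {hcp:.1%} of space, bcc fills {bcc:.1%}")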

Beyond serving as a natural reaction chamber for extreme pressure and temperature conditions, our planet also records vast amounts of data through its dynamic processes. Information on past climate is readily encoded in rocks, trees and ice cores. Atmospheric scientist Andrea Chlebikova tells a story of how we can use this record to probe Earth's past climate – all without the fancy time machines that remain out of our reach
Nowadays, we can obtain accurate records of current atmospheric and ocean conditions across many places on Earth in almost real time, thanks to extensive monitoring networks, including remote sensing satellites. If we are interested in what the Earth was like before we began to collate logged data on a large scale, we can start by exploring ship logs, which provide a wealth of weather, ocean and sea ice observations from many different locations in a standardised format. The Old Weather Project,


which started in 2010, has turned this into a citizen science initiative, with volunteers digitising and transcribing naval data from the 19th and early 20th century. However, going back much further in time proves impossible to do in this manner, as we are limited by when specific measuring instruments were invented, standardised scales were adopted, and the technology became widespread. The first reliable thermometer, along with an associated temperature scale, was developed by Daniel Gabriel Fahrenheit in the early 18th century. At the time, nitrogen and oxygen had not even been discovered as elements; measurements of trace constituents of our atmosphere did not begin until the late 19th century. To obtain temperature information that goes back further than a few centuries, or composition information that goes back further than a few decades, we are reliant on using proxies. Dendroclimatology, looking at the properties of annual tree rings to learn about the local climatic conditions at the time, may seem like a far-fetched idea at first – after all, there are a great number of variables that control the growth of a tree, including not just climate factors such as temperature or precipitation, but also individual genetic variation and local soil conditions. This is true, but by carefully selecting the type of trees examined (trees higher up a mountain are less likely to be limited by a lack of sunshine and rain, for example, and more affected by temperature variation, while the opposite applies lower down the same mountain) and by collecting enough samples (allowing us to account for variation), it has been possible to construct temperature records at annual resolution that go back thousands of years, although we are only able to use this technique in some parts of the world. Interestingly, historical records predating measuring instruments can still serve a useful purpose by providing reference points in the timeline. We are also able to extract information from ice cores, where historic air samples are trapped in the form of bubbles. The isotopic signatures of the water making up the ice have meanwhile provided us with a way to reconstruct temperature records going back up to 800,000 years, as the relative rate at which the different isotopes evaporate, condense and get incorporated into the ice depends on temperature. To go back in time even further, we need to look at geological samples, where we may find species representative of a particular climate, or once again temperature-dependent isotopic and chemical signatures. Taken individually, no proxy is particularly reliable, but by seeing how the information we obtain from different sources aligns, we can recreate reliable records of past climates and past atmospheric concentrations of trace gases. The British Antarctic Survey, based in Cambridge, is one place where such research takes place, with researchers such as Anna Jones and Eric Wolff developing, among other things, ice-core-based proxies and constructing paleoclimate and paleoatmosphere records. It is important to bear in mind



that in order to combine and interpret the raw data from the proxies, we are always relying on models to reconstruct the actual climatological variables we are interested in. In fact, any observation we make in the environment corresponds to the outcome of the many complex physical, chemical and biological processes occurring over different timescales under the specific setup of our planet at the time. It is true that proxy data is more convoluted than data resulting from direct observations. However, even to make sense of direct measurements, we rely on using models – otherwise, we would merely be cataloguing information without gaining an insight into the underlying science controlling what we see. We can nowadays take direct field measurements over a vast range of different conditions, the number and extremes of which we cannot hope to recreate in the laboratory. A difficulty of field research is that we step outside the carefully controlled conditions we try to use in many conventional experiments: we cannot observe the change in one dependent variable as we gradually increase the value of the independent variable while trying to keep all other variables constant. The types of models we have to use for understanding these real-world observations are thus far more complicated and cannot be expressed by means of a simple function, or displayed in a single graph. But with the aid of increasing computational power, we are now able to use and test even very complex Earth system models against observations, though there are a few issues. The large number of parameters means large amounts of data are needed to fine-tune the model. In areas where quality observational data is easily available, this is not an issue, but not all aspects of our planet's environment are equally carefully monitored. We are also unable to go back in time to install measuring instruments to better study conditions

present in the past, though the reconstructed records we have are proving very useful for testing our models, to see whether they accurately predict the phenomena observed within the expected uncertainty. However, for some research problems involving rare events, such as the aftermath of certain types of large volcanic eruptions, we do not have a lot of data to verify our models against. If we use data from all the observations we have for a certain instance of a phenomenon in order to represent it in the model as best we can, we then lose the ability to predict the consequences of this phenomenon in other cases. To take a step back, there are fundamentally different ways of approaching the problem of modelling a system as complex as the Earth. We can start with our understanding of the physical laws which must ultimately govern it (i.e. a synthetic approach), or we can start with our observations of the complex system and try to build our models around these (i.e. an analytical approach). A completely undirected analytical approach, for example one using machine learning technologies, is unlikely to prove particularly productive for so complicated a system, and has the additional drawback of rarely providing us with a mechanistic insight into why and how certain processes occur; we may, therefore, lack the ability to predict the occurrence of phenomena that the model was not built around. On a pragmatic level, we also cannot pursue a purely synthetic approach for our planet, at least not at the moment – we simply do not have the computational power to do so. That does not preclude us from including physical processes in a parameterised fashion, or applying them to large volumes at a time rather than individual molecules, but we are then aware that these are approximations, and we need to refine and test them against observations to see whether they are satisfactory. From a philosopher's perspective, we are therefore using a combination of synthetic and analytical approaches in practice. With the fundamental physical laws governing processes in 'everyday regimes' well understood, our focus for improving our models, beyond making them run faster, needs to be on improving the approximations we use by introducing changes and testing the simulation runs obtained against unseen observations. Once again, this highlights the importance of field data to us, and a key question is what kinds of data both our models and our scientific understanding would most benefit from. As we are already wishing we had more historical data, what measurements should we focus on collecting now to best help the Earth system scientists of the future? Hannah Bryant is a second year Earth scientist at Magdalene College, Andrea Chlebikova is an atmospheric chemistry PhD student at St Catharine's College, and Bryony Yates is a third year plant scientist at Newnham College. Artwork by Serene Dhawan (algae, barometer) and Seán Herron



Sulawesi: A Seismological Mystery
The Sulawesi earthquake should not have produced tsunamis, but it did. Ben Johnson speaks to Professor James Jackson about how it happened, and how we could prepare for future incidents
On 28th September 2018 at around 3pm local time, the residents of the city of Palu, Indonesia felt an earthquake of magnitude 6.1. Damage to several buildings was sustained, ten people were injured, and at least one person was killed. Three hours later, a far larger earthquake hit Palu, and this time the consequences were more severe. The magnitude 7.5 mainshock levelled many more buildings, killing or injuring those inside. Three minutes later, a 6m tsunami hit, destroying most things in its path, and carrying the debris on to generate more destruction. The death toll is uncertain, but sits above 1300 in Palu alone. This is a familiar story from this region of the world. The infamous 2004 Boxing Day Tsunami in Sumatra, which killed over 200,000 people, is another example of this kind of tragedy. What can be done to prevent these kinds of disasters from occurring? Nothing. The tectonic forces driving these motions are too large to think about interfering with. However, hope is not lost. By studying earthquakes and the lithosphere, events like these can be predicted, prepared for, and overcome. Tsunamis: An inevitable wave of destruction? | A success story here is from an island near Sumatra called Simeulue, also affected by the Boxing Day Tsunami. Unlike those on Sumatra, the inhabitants of Simeulue had an oral history, which spoke of an earthquake in 1907 that produced a tsunami similar to that in 2004. Therefore, when they felt the earthquake and saw the sea recede, the inhabitants of Simeulue knew to get to high ground and avoid the tsunami. As a result, on an island of 75,000 people, only 7 were killed. Compare this with the 200,000 deaths in Northern Sumatra. So, what links earthquakes and tsunamis, and how do we know if an earthquake will cause a tsunami? I spoke to Professor James Jackson, the Professor of Active Tectonics at the Earth Sciences Department in Cambridge, to find out. "Tsunamis happen because you displace the seafloor. Normally what happens with big tsunamis like Sumatra [in 2004] and Japan [in 2011], is the seafloor is moved upward very quickly by a thrust event. The water has nowhere to go, so the sea surface is displaced, and a wave flows away. In Palu [Sulawesi] the seafloor moved horizontally rather than vertically," Prof Jackson explains. This makes it difficult to explain the tsunami by seafloor displacement alone. However, the earthquake happened on the edge of a continental shelf. This could have generated an "underwater landslide", and such landslides have been known to generate very big waves. An example of this happened off the coast of Newfoundland in 1929. In this case, the underwater landslide could be monitored, as it cut the transatlantic submarine cables! In Palu, the geography was particularly unfortunate. The valley in which Palu sits opens towards the origin of the tsunami, so the wave was effectively funnelled inland and the energy wasn't allowed to spread sideways. By contrast, on the neighbouring island of Borneo, 150 km away, the effects of the wave were not felt as much. Now that we can link earthquakes to tsunamis, the next questions to

Sulawesi: A Seismological Mystery

Map data: Google

consider are: where do earthquakes happen, how often do they happen, how big are they, and do they have tsunamigenic mechanisms? Predicting earthquakes to reduce the magnitude of impact | Earthquake prediction down to the minute, hour or even year is something of which we are not yet capable. But sometimes science can help us say where earthquakes are likely in the near future. An example of this is the 1989 Loma Prieta earthquake, which occurred along the San Andreas Fault in California. We obviously expect earthquakes here, but what was special about Loma Prieta was that scientists had already been expecting an earthquake along that segment of the fault. A 'seismic gap' had been observed: a region of little or no seismicity, surrounded by regions of micro-seismicity. The micro-seismicity was too small to be felt on land, and so posed no danger. In terms of stress and strain, the seismic gap was effectively locked and accumulating tension, while the micro-seismic region was releasing this tension over a long period of time. Therefore, at some point, the fault around Loma Prieta was bound to build up so much tension that it snapped, releasing all the energy it had accumulated very quickly in a large earthquake. Now, how can we tell how often earthquakes might happen? Studies have found that, as one might expect, earthquakes operate in cycles with certain repeat times. This is because large earthquakes represent a gradual accumulation of strain, which is then released suddenly in a large earthquake. Following this, there is some slow post-seismic deformation where subsidence or uplift happens on a monthly or yearly timescale, followed by accumulation of more strain, and so on. This is called the Earthquake Cycle. One way to predict future earthquake frequency is by looking into the past. The historical record can be very useful to seismologists. Earthquakes are often quite extreme and unexpected events, so historians are bound to write them down.



In 365 AD, historical records speak of an earthquake and tsunami which devastated part of the Egyptian city of Alexandria, and various other ancient cities around the Mediterranean. Other similar events are noted in 551 AD and 1303 AD. The causes of these events have all been quite unclear, however, as they are all historical: we have no seismometer records of these earthquakes. How then do we figure out the mechanisms and repeat times, and assess the hazard in the Mediterranean? The Mediterranean has a lot on its plate | In the late 2000s, researchers from Cambridge set out to figure out whether the 365 AD earthquake and tsunami could have originated off the coast of Crete on the Hellenic Subduction Zone, where the African Plate is subducting beneath the Aegean Plate. But how does one determine whether slip occurred on an underwater fault thousands of years ago? In this study, they used shoreline uplift in Crete, recorded by wave-cut notches in the uplifted shoreline. This could be dated using radiocarbon (14C) dating on the remains of corals which were alive when the shoreline was lower. They found that most of the observed uplift occurred around 365 AD, plus or minus a few decades. After this, they were able to use the spatial distribution of uplift to work out the location of the slipping fault. To determine repeat time, the convergence rate of the two plates (measured by a

network of high-accuracy GPS stations) could be combined with knowledge of how this fault accumulates energy. Their answer was a repeat time of 6000 years on this fault. Not a major hazard for the next few thousand years. However, these kinds of earthquakes are suggested to occur all along the Hellenic Subduction Zone, and this is just one point on that line. If this suggestion is true, we can expect earthquakes of this kind approximately every 800 years in the Mediterranean. The last event was in Rhodes in 1303 AD. Therefore, we may be due another tsunamigenic earthquake soon. This is a sobering message, but there is some hope of mitigating the impact of these disasters. Through education about tsunami protocols, lives can be saved. The example from the island of Simeulue shows this. Furthermore, research into tectonics around the world can help us identify regions where risk is high, and target education and infrastructural preparations at these areas.
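The logic behind such repeat-time estimates can be written as a one-line calculation; the numbers here are purely illustrative and not those of the Crete study. If each great earthquake releases a characteristic slip d that the converging plates re-load at rate v, then

\[ T \;\approx\; \frac{d}{v}, \qquad \text{e.g. } d = 18\ \text{m},\ v = 3\ \text{mm/yr} \;\Rightarrow\; T \approx 6000\ \text{years}. \]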

"If this suggestion is true, we can expect earthquakes of this kind approximately every 800 years in the Mediterranean. The last event was in Rhodes in 1303AD. Therefore, we may be due another tsunamigenic earthquake soon"

Ben Johnson is a fourth year Earth Scientist at Trinity College. Figures by Ben Johnson



Our Global Food Supply and the Rise of Synthetic Meat Sophie Cook asks whether lab-grown meat could save the planet, improve our health, and keep our beloved Sunday roast on the table

Meat production methods are becoming increasingly unsustainable:
• We currently use an area larger than the surface of the moon to feed and rear our livestock. This accounts for 60-80% of all agricultural land and 30% of our planet’s total land surface
• Agriculture accounts for 92% of human freshwater use, with a third of that being used for animal products. Despite this, 1 billion people have no access to safe drinking water
• The livestock sector is responsible for 18% of total anthropogenic greenhouse gas emissions—a larger share than generated by the transport industry. It accounts for 37% of our methane, 64% of our ammonia, and 65% of our nitrous oxide emissions
• Cows only convert 15% of their feed mass to meat mass, making them very inefficient. To put this in context, a diesel engine, which many consider to be inefficient and polluting, is 45% efficient



Rich in protein, vitamins, and minerals, meat is a dietary staple in most communities worldwide. Paleontological evidence shows that our love affair began 2.6 million years ago. We learned to hunt, cook, and preserve our meat before animal domestication brought it straight to our chopping boards. Ever since, global meat consumption has grown exponentially and is not forecast to stop any time soon. It is estimated that by 2050, there will be a 73% increase in demand, despite only a 30% increase in population. Churchill famously remarked that someday, “we shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium.” With global resources already being stretched to their limits, this prophecy is beginning to sound increasingly less like science fiction. But what’s the solution? Global vegetarianism? Meat replacement products like soy, tofu, or even insects? I can hear the meat-lovers among us groaning. No—I doubt we are going to give up that easily. Cultured meat may provide the answer, and at a fraction of the environmental cost. Cultured meat is the growth and development of stem cells in vitro to produce mature muscle cells. A harmless biopsy extracts skeletal muscle tissue containing dedicated stem cells (called myosatellites) from the animal of choice. These cells are isolated and placed in a medium containing all the nutrients required for natural cell growth. Each myosatellite is capable of doubling around 40 times, meaning that from 1 muscle cell, we can grow more than 1 trillion cells. They proliferate in a bioreactor, and after significant expansion, changes in the growth medium trigger natural differentiation and the formation of myotubes composed of 10-50 cells about 0.3mm long. The myotubes are laid in a ring around a collagen gel hub. The tension provided by the gel mimics natural muscle contraction, which stimulates hypertrophy (increase in size). Skeletal muscle fibres form around the gel hub, which can then be layered together to give a piece of mature muscle tissue resembling mince. This process was first achieved on a small scale by Professor Mark Post and his team at Maastricht University in the Netherlands. Their first beef burger was partly funded by Google co-founder Sergey Brin, and was cooked and tasted in August 2013. Despite this success, there are still several challenges facing this technology before cultured meat can appear on our supermarket shelves. The initial isolation of the satellite cells means the meat is lacking in other cell types—most noticeably fat cells (adipocytes), the absence of which compromises the flavour. However, adipose (fat) tissue also contains stem cells capable of differentiating into mature adipocytes, and these could be re-incorporated into the muscle fibres. The addition of other cell types provides an exciting opportunity to nutritionally enhance the meat, but remains a sensitive issue: the media’s ‘frankenmeat’ labels have led to consumers thinking cultured meat involves genetic modification, even though it does not. The addition of extra nutrients, such as omega-3 fatty acids as suggested by Professor Post, may only fuel consumer resistance. However, when you consider the 8 million births

via IVF in the last 40 years, or the extra vitamins we spray on our breakfast cereals, perhaps this resistance is unfounded. Another issue is the composition of the growth medium, which initially contained foetal bovine serum. The blood-based serum means the meat is not currently ‘cruelty-free’, and the serum itself is unsustainable. However, research into the use of photosynthetic cyanobacteria is well under way and should provide a viable alternative. Scaling up production raises the issue of diffusion distances. When the cells reach a critical mass, nutrient uptake by diffusion is limited by the reduced surface area to volume ratio. On an industrial scale, this will need to be considered; research into vessel systems through the cells is in progress. To upscale the process, the meat needs to be grown in large bioreactors equipped with ideal growth conditions and the ability to mechanically stimulate the growing cells. The largest bioreactor currently available has a volume of 25,000 litres, which could only feed around 10,000 people. Instead, an Israeli biotech company has proposed that we should encourage business owners to grow their own meat locally in smaller-scale bioreactors. The company recently received investment from the American meat giant Tyson Foods—further evidence that even the big players know change is inevitable. Although these challenges are currently limiting commercialisation, none are insurmountable. Only time and money stand in the way of industrialisation success. The average American consumes 100 kilograms of meat each year. If we used cultured meat instead, it has been predicted that we could sustain global meat demand with only 440,000 cows: a minuscule percentage of the current 1.5 billion. Cultured meat would also provide a solution to many of the shocking statistics I mentioned earlier. Researchers in Oxford have estimated that it would use 99% less space, 96% less water, and emit 96% less greenhouse gases. The bottom line, though, has to be cost and consumer acceptance. The first burger cost $300,000. This has dropped to around $11, making it 9 times more expensive than conventional mince. Obviously, this is still too high, but if it were price competitive, would you eat cultured meat? In a survey by The Guardian, 69% said they would. Since 2012, 30 laboratories worldwide have declared that they are undertaking cultured meat research. I was lucky enough to attend the International Conference on Cultured Meat, where I met many of these scientists. Most of them were vegans who said that if cultured meat became commercially available, they would go back to eating meat. They are working to preserve meat for future generations, as current production methods will not be viable for much longer. Humans are innovative and adaptable. If we cannot have children, we will conceive them in vitro. If crop yields are poor, we will genetically enhance them in vitro. Why then, when cultured meat involves no genetic modification, should we not grow our meat in vitro?
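A quick sanity check on the cell numbers quoted above: forty population doublings of a single cell give

\[ 2^{40} = 1{,}099{,}511{,}627{,}776 \;\approx\; 1.1 \times 10^{12}\ \text{cells}, \]

which is where the 'more than 1 trillion cells' figure comes from.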


Sophie Cook is a second year Natural Scientist at Magdalene College. Artwork by Sean O'Brien



A New Possible Mechanism for the Formation of our World's Core Gareth Hart speaks with Dr Madeleine Berg and Dr Geoff Bromiley about evidencing a new hypothesis

"Instead of separating large seas of molten rock, metallic melt could also slowly move along the grain boundaries of silicate solid to the centre of the planet, much like how water percolates through coffee grains in drip coffee"

28

If Cambridge had a ‘core’, where would it be? Market Square? Cindies on a Wednesday night? A weighted average between the Sidgwick and Downing sites based on student numbers? While a core for Cambridge may be difficult to define, the mechanisms by which that core would form are clear: social congregation, movement from central locations to lecture theatres and coffee shops, strange timetabling choices and so on. For our planet Earth, we have the opposite problem. The existence of a core is well known - it is an iron (Fe)-rich region of the Earth’s interior several thousand kilometres beneath our feet, the grey centre of a multi-layered Earth that adorns many a geography textbook. The magnetic field it generates facilitates our navigation on the planet’s surface, even today. But we are less sure of how the core actually formed. Here is the story so far. Planets first form by the accretion (clumping together) of material by gravity. Nature enjoys stability, and so gravity causes the materials to separate by density. Such differentiation eventually forms the core over time. Meteorites made entirely of iron are evidence for this – believed to be ancient planetary cores, they clearly show that the metallic components must have somehow separated from the mineral silicate components, mostly minerals of silicon and oxygen that make up the mantle. The mainstream explanation is that of ‘magma oceans’. Energetic collisions with other rocky bodies, along with rapid and short-lived radioactivity, melted much of the early Earth into large oceans of liquid magma. Being liquid, the magma separated into its immiscible components like a mixture of oil and water; low-density phases would flow to the top and dense metallic phases would sink to the bottom. This nicely explains why rocks on the surface of the Earth and the Moon contain low-density minerals such as plagioclase. So far so good for large planets like Earth. But what about small bodies, like asteroids? These bodies would have cooled very quickly due to their small size, so any magma oceans that existed on them would have quickly solidified before they had a chance to un-mix into their metal and silicate parts. Yet many of them contain metal cores, as evidenced by their magnetic fields. There is another possibility - ‘percolation’. Instead of separating in large seas of molten rock, metallic melt could


also slowly move along the grain boundaries of the solid silicate to the centre of the planet, much like water percolating through coffee grains in drip coffee. Percolation could take place at temperatures as cool as 500°C, the freezing point of iron-rich melts. Without requiring quite as much heat and melting, percolation neatly solves the paradox of small bodies possessing cores. But just like drip coffee, melt takes a very long time to find its way to the bottom of the silicate material. Would the melt have been able to travel through the grains to the centre of the planet before it cooled and solidified? University of Edinburgh geologists Dr Madeleine Berg and Dr Geoff Bromiley might have an answer. Dr Bromiley, a former Cantab geologist, gave a talk on their recent research at the Department of Earth Sciences early last term. A key requirement for percolation is channelisation. If tiny blobs of melt, previously disjointed, meet and coalesce with one another, they can form larger streams of continuous flow. These channels transport melt so effectively that 98% of it could be drained away through them. Dr Berg’s previous research showed that channels can be formed by applying a shear strain - a fancy term for deforming an object along its surface. This is a bit like the strain you apply when you stretch an eraser along its long edge, when you wring a cloth or when you twist a sponge. By shearing the silicate, disconnected melt blobs could flow and connect together, forming channels and kickstarting percolation. Armed with this knowledge, Dr Berg and Dr Bromiley set out to recreate the conditions of the early planetary interior. As one might imagine, this is not easy. Not only did they have to continuously generate temperatures of hundreds of degrees, they also had to create vast pressures of around three gigapascals – all while keeping the metal molten within the setup. And to complicate matters, they had to introduce the twisting strain to induce channelisation. Enter the Rotational Tomography Paris-Edinburgh Cell. Invented by Dr Julien Phillipe and Dr Yann Godec at the Pierre and Marie Curie University, the RoToPEc squeezes a gasket enveloping the sample between two large hydraulically-powered diamond anvils, delivering gigapascals of pressure through the sample. Two rotational motors, one at the top of the sample and the other at the bottom, can be programmed to spin at different rates to twist the sample.


The team put the RoToPEc to the test at the European Synchrotron Radiation Facility in Grenoble, France. Here, electrons whizz around a circular particle accelerator, generating brilliant X-rays that the team could use to probe their sample: a cylinder of solid boron nitride containing liquid gold - an experimental analogue for the silicate and the iron-rich melt that would have been found in the primordial Earth. By spinning this chamber in front of an X-ray beam and a camera while heating, compressing and deforming it, the team was able to construct a three-dimensional image of what was happening within. This is called in-situ tomography: as in a medical computerised tomography (CT) scan, individual two-dimensional snapshots are taken at different angles and reconstructed in a computer to produce a three-dimensional image.
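For readers curious how such a reconstruction works in principle, here is a minimal sketch - emphatically not the team’s own analysis code - using the radon and iradon routines from the scikit-image Python library to simulate projections of a toy ‘sample’ and rebuild its interior by filtered back-projection.

# Minimal illustration of tomographic reconstruction (not the team's code).
# A toy 2-D "sample" - a mineral disc containing a bright blob of "melt" -
# is projected at many angles (the radon transform), then recovered from
# those projections by filtered back-projection (iradon).
import numpy as np
from skimage.transform import radon, iradon

size = 128
y, x = np.mgrid[:size, :size] - size // 2
sample = (x**2 + y**2 < (size // 3)**2).astype(float) * 0.3  # the mineral disc
sample[(x - 15)**2 + (y + 10)**2 < 6**2] = 1.0               # the melt blob

angles = np.linspace(0.0, 180.0, 90, endpoint=False)  # viewing angles in degrees
sinogram = radon(sample, theta=angles)                 # simulated projections, one per angle
reconstruction = iradon(sinogram, theta=angles)        # rebuilt interior image

# The bright blob should reappear close to its true position
print("brightest point in reconstruction:",
      np.unravel_index(np.argmax(reconstruction), reconstruction.shape))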

The results were astounding: not only did the team reproduce the channels Dr Berg had predicted, they also obtained high-resolution snapshots of melt features actually moving through the thick labyrinth of solid mineral grains. This allowed them to track the features’ positions over time and extract the speeds at which they travelled: about two to two hundred micrometres per hour.

That number may seem underwhelmingly slow. Extrapolated across the vastness of geological time, however, it means that the melt could travel the radius of the Earth within as little as tens of millions of years - and even that is likely an overestimate. If percolation took place throughout the mantle, progressively larger melt channels could be created, forming micrometre-wide, melt-rich bands that act as transport superhighways. Calculations show that these bands would allow most of the metallic melt to drain into a core within a mere four million years.

"No one's going to suggest that magma oceans don't exist, but core formation by percolation is something we really do have to consider as it’s entirely possible to have regions of a planet that are never processed by magma oceans,” Dr Bromiley explained.

What, then, lies ahead? As technology progresses, Dr Bromiley hopes to explore how percolation varies with pressure, temperature, mantle composition and depth. With a better understanding of core formation processes, perhaps we could one day unravel the mysteries that lie at the centre of our planet - including how it came into being.
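As a sanity check on the extrapolation above - our own naive straight-line calculation, not one from the paper - the measured speeds bracket the quoted timescale.

# Back-of-envelope sketch (not from the paper): how long would melt take to
# cross the Earth's radius at the measured speeds of ~2-200 micrometres/hour,
# assuming a naive straight-line path?

HOURS_PER_YEAR = 24 * 365.25
earth_radius_um = 6371e3 * 1e6   # Earth's mean radius (~6,371 km) in micrometres

for speed_um_per_hr in (2.0, 200.0):
    years = earth_radius_um / speed_um_per_hr / HOURS_PER_YEAR
    print(f"at {speed_um_per_hr:>5.0f} um/hr: ~{years / 1e6:.0f} million years")

The tens-of-millions-of-years figure sits well inside the bracket this crude calculation produces, and with channelisation the estimate drops further still.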

"No one's going to suggest that magma oceans don't exist, but core formation by percolation is something we really do have to consider as it’s entirely possible to have regions of a planet that are never processed by magma oceans” - Dr Geoff Bromiley

Gareth Hart is a first-year Earth Scientist at Magdalene College. Artwork by Ben Tindal




Science and Theatre: The Wider Earth

The Wider Earth follows the voyage of Charles Darwin after he graduated from Christ’s College, Cambridge. Excited by the prospect of adventure, Darwin joined HMS Beagle on an expedition to chart the coastline of South America, in the role of a naturalist who would collect natural history specimens. His discoveries on the expedition were the starting point of Darwin’s theory of evolution by natural selection, and, a few months after returning to England, Darwin drew his famous branching tree diagram in a notebook.

From delicate butterflies to giant tortoises, the lifelike puppets bring The Wider Earth to life. They give the audience some idea of the awe and curiosity that drove Darwin in his work. The creators - Dead Puppet Society - researched each creature in detail, studying anatomical drawings and photographs and taking field trips to see the animals in their natural habitats. The actors, too, researched the movements and behaviours of the animals. Through study they could master concepts such as the focus of the puppet, its breath, and its ability to give an illusion of weight and gravity.

Nicholas Paine from Dead Puppet Society says that “the process of bringing a puppet to life on stage takes an incredible degree of commitment and discipline. Unlike an actor who spends a rehearsal period developing a character, a puppet has to first learn how to be alive before we can even start to wonder as to what its character might be. Ultimately, the process isn’t complete until the imagination of an audience turns the movement cues that we give into the illusion of life”.

Laura Nunez-Mulder is a medical student at Emmanuel College currently working as Editorial Scholar at the BMJ. Twitter: @lnm_rugby. Image courtesy of the Natural History Museum and production companies





The Wider Earth is presented by Trish Wadley Productions, Dead Puppet Society and Glass Half Full Productions in association with Queensland Theatre in a partnership project with the Natural History Museum. The Wider Earth is currently playing in London, and booking until 24 February 2019. BlueSci readers can use discount code STUDENT for 50% off seats in bands A, B and C.





Weird and Wonderful
A selection of the wackiest research in the world of science

Shining light into the dark: are man-made moons the future?

There are plans afoot to launch an artificial moon into the skies above the city of Chengdu, in the southwestern Sichuan province of China. Chinese state media report hopes that the artificial moon will be launched by 2020 and, if successful, suggest that three further Chinese cities will have man-made moonlight by 2022.

The proposed artificial moon is a satellite with a reflective surface. The idea is that the reflected sunlight will complement the natural moonlight and city street lighting, illuminating an area 5–40 km in radius. Wu Chunfeng, chairman of Chengdu Aerospace Science and Technology Microelectronics System Research Institute Co., Ltd., estimates that the man-made moon could save the city ~£133 million per year in electricity. Its orbit will be a mere 500 km above the Earth (the Moon is 384,000 km away). While this will make the artificial moon appear brighter than the Moon, Wu told China Daily that “its expected brightness, in the eyes of humans, is around one-fifth of normal streetlights.”

The idea purportedly originated with a French artist who imagined a necklace of mirrors in the night sky to light the streets of Paris. The reality may be somewhat less romantic, with worries about light pollution and the effect on local wildlife. vh

Bitcoin mining uses more energy than gold mining

Digital cryptocurrencies are often seen as a modern alternative to stores of wealth like gold. A recent study claims that more energy is consumed in producing one dollar’s worth of Bitcoin, the largest cryptocurrency, than in producing one dollar’s worth of gold.

Thousands of cryptocurrencies are in circulation today, and many of these rely on blockchain technology. A blockchain is a secure public record of the cryptocurrency’s transactions. New blocks for new transactions are added to the chain by ‘miners’: a difficult calculation must be completed for a block to be added, and the first miner to complete the calculation is rewarded with newly generated coins. As cryptocurrency demand increases, so does the difficulty of the calculations - and with it the energy consumed in performing them.
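To make the ‘difficult calculation’ concrete, here is a deliberately simplified Python sketch of proof-of-work mining - an illustration of the idea rather than Bitcoin’s actual protocol. Miners hunt for a number (a ‘nonce’) whose hash meets a target, and tightening the target makes the hunt, and the electricity it burns, far more expensive.

# Simplified illustration of proof-of-work "mining" (not Bitcoin's real protocol).
# Miners search for a nonce such that the SHA-256 hash of the block data plus
# the nonce begins with a required number of zeros; each extra zero makes the
# search roughly 16 times harder, and correspondingly more energy-hungry.
import hashlib

def mine(block_data, difficulty):
    """Return (nonce, digest) where the hex digest has `difficulty` leading zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 1 coin", difficulty=4)
print(f"nonce found after {nonce + 1} attempts, hash: {digest}")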



In recent years, electronic communications have become a core part of how we interact, and emoji play an important role in this. These quirky icons are more than a bit of fun: they allow us to better convey our intended meaning in the absence of facial expressions and tone of voice. Even the Oxford Dictionaries have recognised their significance, selecting “face with tears of joy” as their 2015 “Word of the Year”. Vikas O’Reilly-Shah and colleagues, in an article published in the BMJ, thus raise an interesting question: could emoji have a place in formal scientific literature?

Emoji could be usefully employed to modify emotional tone and convey subtext in communications such as editorials and peer review. They could also save space - a picture is worth a thousand words, after all - creating more room to publish and potentially leading to higher acceptance rates in print journals. Emoji are already found in the literature as the subject of studies in psychology, communications and linguistics; perhaps it wouldn’t be a huge step to start integrating them in a communicative context.

However, while emoji might look innocent, their deployment in formal literature has its dangers. Crucially, they risk miscommunication. Their meaning can change over time and they can take on alternative meanings, some less innocent than originally intended. There is also variability in how different cultures interpret them: some would view “victory hand” (✌) as a peace symbol; others see an offensive gesture. Although at least one paper has already used an emoji in its abstract, emoji are associated with informal communications (essentially a form of slang) and so are unlikely to be taken seriously in scientific literature any time soon. Perhaps as the younger, emoji-fluent generation rises up through the scientific ranks, they may find it only natural to incorporate this fundamental part of our language. by



All your journals in one app

Everything you need to keep up-to-date with the latest academic papers.

Use RESEARCHER™ across all devices

DOWNLOAD FOR FREE

Filter your feed using keywords

Synchronise with your reference manager

or use at www.researcher-app.com


