BlueSci Issue 45 - Summer 2019


Summer 2019 Issue 45 www.bluesci.co.uk

Cambridge University science magazine

FOCUS

Genetic Engineering in Humans The Ultimate Pandora’s Box?

The Sixth Mass Extinction . Magic Squid . Brains in a Dish . Directed Evolution




Contents

Cambridge University science magazine

Regulars

On The Cover
News
Reviews

Features

Mastering the Art of Conversation
Shavindra Jayasekera looks into IBM's Project Debater, the artificial intelligence that can spar with the best humans

Are two brains better than one?
Leia Judge investigates how emerging technologies in neuroscience raise questions on biology and the ethics of consciousness

Pelagic magic: counter-illumination illusion in nocturnal waters
Amy Williams explores the art of disappearance in her winning submission to our "science in magic" competition in conjunction with the Academy of Magic and Science

Tête-à-tête with a Cavendish Luminary
Mrittunjoy Guha Majumdar interviews Nobel Laureate Professor Brian Josephson about his work at the Cavendish Laboratory

Earth's Sixth Mass Extinction
Alavya Dhungana asks Professor of Palaeobiology Erin Saupe whether today's animals are as doomed as the dinosaurs

FOCUS: Genetic Engineering in Humans
BlueSci explores the biology and ethics of human genetic engineering with discussion from Prof. Eric S. Lander, director of the Broad Institute and author of a recent call for a moratorium on editing of embryos

De-extinction and CRISPR conservation
Adam Searle explores the science behind and social implications of bringing animals back from extinction

The first scientist was a woman
Rosamund Powell recounts the history of women in science and the struggles faced then and today

In the search for a cure
Debbie Ho explores recent advances in modelling Marfan's syndrome using induced pluripotent stem cells

Evolving Frankenstein's monsters
Lemuel Szeto explores the chemistry behind directed evolution and how we can use it to birth useful monsters

Pavilion: Emilia Tikka
BlueSci delves into the implications of novel biotechnologies with artist Emilia Tikka

Weird and Wonderful
With night vision, time reversal and bioengineered coral reefs, we delve into the mire of strange science

BlueSci was established in 2004 to provide a student forum for science communication. As the longest running science magazine in Cambridge, BlueSci publishes the best science writing from across the University each term. We combine high quality writing with stunning images to provide fascinating yet accessible science to everyone. But BlueSci does not stop there. At www.bluesci.co.uk, we have extra articles, regular news stories, podcasts and science films to inform and entertain between print issues. Produced entirely by members of the University, BlueSci combines a diversity of expertise and talent to produce a unique science experience

President: Seán Thór Herron (president@bluesci.co.uk)
Managing Editors: Alexander Bates, Laura Nunez-Mulder (managing-editor@bluesci.co.uk)
Secretary: Mrittunjoy Majumdar (enquiries@bluesci.co.uk)
Treasurer: Atreyi Chakrabarty (membership@bluesci.co.uk)
Film Editor: Tanja Fuchsberger (film@bluesci.co.uk)
Radio: Emma Werner (radio@bluesci.co.uk)
News Editor: Elsa Loissel (news@bluesci.co.uk)
Web Editor: Elsa Loissel (web-editor@bluesci.co.uk)
Webmaster: Adina Wineman (webmaster@bluesci.co.uk)
Art Editor: Serene Dhawan (art-editor@bluesci.co.uk)



Issue 45: Summer 2019
Issue Editor: Andrew Malcolm
Managing Editors: Alex Bates, Laura Nunez-Mulder
Second Editors: Alexandra Ekvik, Charlene Tang, Bethany Aykroyd, Bryony Yates, Matthew Zhang, Andrew Malcolm, Maya Petek, Salvador Buse, Michael Denham, Seán Thór Herron, Mona Lui, Thea Elvin, Sarah Foster, Catherine Dabrowska, Sarah Lindsay, Matthew Brady
Art Editor: Serene Dhawan
News Team: Victoria Honour, Mrittunjoy Guha Majumdar, Matthew Brady
Reviews: Maya Petek, Maeve Madigan, Keaghan Yaxley
Feature Writers: Shavindra Jayasekera, Leia Judge, Amy Williams, Mrittunjoy Guha Majumdar, Alavya Dhungana, Rosamund Powell, Adam Searle, Debbie Ho, Lemuel Szeto
Focus Team: Dominic Hall, June Park, Andrew Malcolm
Weird and Wonderful: Lucy Mackie, Alexandra Ekvik, Jake Rose
Production Team: Alex Bates, Seán Thór Herron, Andrew Malcolm
Caption Writer: Alex Bates
Copy Editors: Ilinca Aricescu, Alex Bates, Seán Thór Herron, Laia Serratosa
Advertiser: Christina Turner
Illustrators: Sonia Aguera, Eva Pillai, Serene Dhawan, Alex Bates, Evan Hamilton, Priti Mohandas, Joseph Jones, Andrew Malcolm

Cover image: Áine Ní Chaomhánach
ISSN 1748-6920

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License (unless marked by a ©, in which case the copyright remains with the original rights holder). To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/ or send a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041, USA.


Editorial

Frankenstein's Monster

OVER 200 YEARS ago, Mary Shelley published her novel Frankenstein, or the Modern Prometheus. Its message was one of caution: of not over-stepping the bounds of technical ability. Human beings have long sought mastery over the natural world, but have often fallen short in asking whether we should. This issue's FOCUS centres on this question after the dismaying events last year in Hong Kong, where the birth of the first genome-edited babies was announced without ethical approval. We discuss how this came to be and chat with Professor Eric Lander of the Broad Institute about his thoughts on the matter and his call for a framework of international and societal consultation for this kind of advance.

This theme is echoed in a number of articles in the issue. On the precipice of a sixth mass extinction, Alavya discusses the need to target future conservation work through the study of today's species, whilst Adam explores the science and ethics of de-extinction. Shavindra and Leia challenge what we regard as the mind and consciousness with their pieces on advances in AI and cerebral organoids. Lemuel examines Frankenstein's monster on the nanoscale in his piece on directed molecular evolution. In our Pavilion, the work of Berlin-based artist Emilia Tikka invites the reader to consider the societal and philosophical implications of our use of technology. Elsewhere, Rosamund Powell brings to light the legacy of the forgotten female scientists whose work paved the way, and addresses the challenges that women in science still face today. On the front of personalised medicine, Debbie Ho describes recent efforts to model Marfan's syndrome using a patient's own cells. Mrittunjoy interviews Nobel laureate Prof. Brian Josephson on his pioneering work on superconductivity. Amy Williams recounts a tale of deception in her winning piece for our 'Magic and Science' competition.

It is paramount, as we push the frontiers of our understanding of the world, to remember how intrinsically linked with it we are. Whilst we advance science to improve healthcare, improve society, fix our climate and change our world, we need to remember, as Shelley aptly spotlighted, not to overstep the bounds of science. To consider the path of all those before us, especially those ignored and written out of history. To think to the future and consider how our work will affect not only us and the place we hold in the world, but also the countless species we share it with

Andrew Malcolm
Issue Editor #45

On the Cover

READING PROFESSOR LANDER'S interview, I was struck by how conversations about human germline editing are so often conversations about fear. It got me thinking about what the human genome means to us. We are moved to safeguard it, to erect protective barriers and bureaucratic regulations so we feel some control. But this same significance also draws us to hop the fence and want to touch it for ourselves. I thought a background sunset might reflect the urgency felt by scientists for a resolution. For some, however, there may already be too much red tape, and for others there may never be enough

Áine Ní Chaomhánach
Cover Artist

News

Check out www.bluesci.org, our Facebook page or @BlueSci on Twitter for regular science news

Scientists capture the first image of a black hole event horizon
A network of eight radio telescopes called the Event Horizon Telescope (EHT), spanning locations on several continents from Antarctica to Europe and South America, has captured the first ever image of a black hole. In a project involving more than 200 scientists, this achievement marks a milestone in the study of the enigma that black holes remain. Einstein's general relativity first laid the theoretical groundwork for predicting the existence of black holes, although Einstein himself was skeptical about their existence. Since neither matter nor light can escape a black hole, the black hole itself cannot be seen. However, the event horizon of a black hole can be illuminated, and this is exactly what the latest capture has done: a first glimpse of a black hole's accretion disc, produced at high resolution by combining data from eight of the world's leading radio observatories, including the South Pole Telescope. The particles within the observed black hole's accretion disc are heated to billions of degrees as they move around the black hole at speeds close to the speed of light, and emit radiation, which the EHT picked up, before falling into the black hole. Next on the list of the EHT collaboration's pursuits is an image of the Milky Way's own black hole MM
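For a sense of the resolving power this feat demands, the angular resolution of an interferometer is roughly the observing wavelength divided by its longest baseline. The figures below are standard textbook values for the EHT (a 1.3 mm observing wavelength and an Earth-sized baseline), not numbers quoted in the story:

\[
\theta \;\approx\; \frac{\lambda}{D} \;\approx\; \frac{1.3\times10^{-3}\,\mathrm{m}}{1.3\times10^{7}\,\mathrm{m}} \;\approx\; 10^{-10}\,\mathrm{rad} \;\approx\; 20\,\mu\mathrm{as}
\]

Linking dishes across continents makes the effective baseline D comparable to the diameter of the Earth, which is what allows a millimetre-wavelength network to resolve a ring only tens of microarcseconds across.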

I need my sleep – insights from flies challenge traditional notions
The complaint 'I need my sleep' may become a thing of the past with the release of new research from Imperial College. In a paper entitled 'Most sleep does not serve a vital function: Evidence from Drosophila melanogaster', published in the journal Science Advances, researchers deprived flies of sleep. Using motion-detection software calibrated with over 4,000 days of fly activity, the flies' micromovements were monitored. If, after a 20-second period, no micromovement was detected, the fly's test tube was shaken for one second, preventing it from getting any significant rest. What was the effect on the flies' survival? The results were surprisingly skewed by sex, with no statistically significant impact on lifespan for male flies but a reduction in median lifespan of 3.5 days for female flies. Similarly, in the control group, no relationship was observed between amount of sleep and lifespan. Why is this the case? The question remains unanswered, but it is suggested either that the flies were getting minute periods of sleep, allowing them to operate in their limited lab environment, or that sleep is less necessary for vital functions than we had once thought: perhaps going from a Sunday night out straight to our 9am lecture on Monday isn't so bad MB
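The deprivation protocol boils down to a simple trigger rule. The sketch below is our own reconstruction of that rule for illustration, not the researchers' code: detect_micromovement and shake_tube are hypothetical stand-ins for the study's video-based motion detector and tube-shaking hardware.

import time

QUIET_WINDOW_S = 20   # seconds without micromovement that would count as rest
SHAKE_DURATION_S = 1  # how long the tube is shaken to interrupt it

def deprive_of_sleep(detect_micromovement, shake_tube, hours=24):
    """Shake the tube whenever 20 s pass without a detected micromovement,
    so the fly never accumulates a significant bout of rest."""
    end = time.monotonic() + hours * 3600
    last_movement = time.monotonic()
    while time.monotonic() < end:
        if detect_micromovement():           # hypothetical detector call
            last_movement = time.monotonic()
        elif time.monotonic() - last_movement >= QUIET_WINDOW_S:
            shake_tube(SHAKE_DURATION_S)     # hypothetical actuator call
            last_movement = time.monotonic()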


Chimp chatter: how similar is it to human communication? Recently, researchers describe how the rules

of human linguistics apply to communication between chimpanzees in the wild. Human communication conforms to a series of rules regardless of the language spoken; typically, the most common words used are short (this is Zipf’s law of abbreviation), and large language structures are comprised of multiple short segments such as syllables (this is Menzerath’s law). Chimpanzees use a combination of hand gestures, body posture, facial expressions, and various noises to communicate. Using video footage from Uganda’s Budongo Forest Reserve, 58 unique chimpanzee communication gestures were identified in the study. The most commonly used gestures were short, and longer

gestures were frequently broken down into multiple short gestures, suggesting that chimpanzee communication is underpinned by the same basic mathematical principlesw as human communication. Such a finding builds on previous work in which human toddlers and chimpanzees were suggested to have similar communication structures. In future the researchers plan to expand their study to bonobos – a species known to use similar gestures to chimpanzees – to see if their findings are applicable to different primate species VH
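Both laws make a testable statistical prediction: under Zipf's law of abbreviation, a signal's frequency of use and its duration should be negatively correlated. A toy check of that prediction, on invented numbers purely to illustrate the method (not the study's data or analysis), might look like this:

from scipy.stats import spearmanr

# Invented (gesture, times observed, mean duration in seconds) records.
gestures = [
    ("arm raise",    120, 0.8),
    ("ground slap",   95, 1.1),
    ("reach",         60, 1.6),
    ("big scratch",   40, 2.3),
    ("object shake",  22, 3.0),
]

frequencies = [freq for _, freq, _ in gestures]
durations = [dur for _, _, dur in gestures]

# Zipf's law of abbreviation predicts rho < 0: frequent gestures are short.
rho, p_value = spearmanr(frequencies, durations)
print(f"Spearman rho = {rho:.2f} (negative supports the law), p = {p_value:.3f}")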



Reviews

Bedside Rounds - Adam Rodman

@AdamRodman MD 2019

Bedside Rounds is, in the words of its author, "a tiny podcast about fascinating stories in clinical medicine, focusing on wonderful, weird, and fundamentally human stories." It is mostly a one-man show by an American doctor who started it to tell stories about the history of medicine. The episodes span a range of topics, from obscure human stories to discussions of medical ethics, and typically focus on a single treatment or condition per episode. Each episode is a clinically informed narrative about how something we consider common knowledge today (for example, listening to the lungs with a stethoscope, or giving intravenous fluids to dehydrated patients) was first discovered and introduced into practice, and about the many twists and turns with which the innovation was received. The podcast is captivating because it mixes the process of scientific and medical discovery with the very human context in which it happened – a combination that sums up modern medicine MP

“...a tiny podcast about fascinating stories in clinical medicine, focusing on wonderful, weird, and fundamentally human stories”

Brief Answers to the Big Questions - Stephen Hawking

Hodder & Stoughton 2018

How did it all begin? What is inside a black hole? Is time travel possible? Stephen Hawking's posthumously published Brief Answers to the Big Questions takes us through these questions and more, providing brief yet deep and thought-provoking discussions of each. Underlying each question is the uncertainty associated with the future: what it will hold in store for humans, whether a 'theory of everything' will ever be found, and whether we will be capable of keeping up with our own technological advances. Hawking begins by providing an honest view on the question of the existence of a god and the beginning of the universe. After giving an overview of his own work on black holes, he moves on to deal with his fears around the threat of developments such as artificial intelligence, nuclear weapons and climate change. Readers familiar with Hawking's A Brief History of Time will find Brief Answers to the Big Questions less technical in comparison. This reflects the fact that the primary aim of this book is not to communicate scientific details; instead, it succeeds in its goal of convincing us of the importance of a scientifically literate society in understanding and facing these threats to our future MM

“Underlying each question is the uncertainty associated with the future: what it will hold in store for humans, whether a ‘theory of everything’ will ever be found and whether we will be capable of keeping up with our own technological advances”

The Tangled Tree - David Quammen

William Collins 2018


While David Quammen's The Tangled Tree doesn't quite deliver the radical new history of life its subtitle promises, it is nonetheless a compelling piece of popular science. Ostensibly, the book's primary focus is horizontal gene transfer (HGT), the exchange of genetic material between unrelated species. Yet the first half is really a history of evolutionary biology and the field's most powerful metaphor, the Tree of Life; the book only really begins to challenge the usefulness of that metaphor in the closing chapters. The book's great strength is Quammen's decision to cast the researchers as the central characters and drivers of the narrative. Of course, the book starts with Darwin, but Quammen goes to great lengths to highlight the contributions of his predecessors, contemporaries and successors. Carl Woese, the microbiologist who discovered the domain Archaea, is central to the narrative, as is Lynn Margulis, the evolutionary biologist who developed the theory of endosymbiosis. It is the efforts, talents and flaws of these researchers that make this piece of scientific history an engaging read KY

“The book’s great strength is Quammen’s decision to cast the researchers as the central characters and drivers of the narrative”



Mastering the Art of Conversation
Training AI to Debate and Inquire
Shavindra Jayasekera looks into IBM's Project Debater

"Welcome to the Future." These were the ominous opening words of IBM's Project Debater in a debate against reigning world champion Harish Natarajan in February 2019. Project Debater is the first AI system capable of debating humans on complex topics, and has been in development since 2012. The atmosphere in the room was electric. Harish holds the world record for the number of debating competition wins, so his defeat would herald yet another milestone in the rise of AI, on a par with Deep Blue's victory against chess grandmaster Garry Kasparov in 1997 (another project led by IBM).

The set-up of the debate was simple. Both debaters were given 15 minutes to prepare for the motion: "We should subsidise preschools". Project Debater then delivered a speech, followed by one from Harish, with both participants allowed to deliver a rebuttal at the end. The audience voted on the most convincing speaker to determine the winner. Unlike in 1997, or indeed in the 2016 AlphaGo vs Lee Sedol Go match, the AI did not emerge as the victor from this encounter. However, as detailed below, Project Debater represents a huge step forward in argument-synthesising AI, and I believe that it will only be a matter of time before AIs are able to outclass us in debate as they do at chess.

"Part of me thought that debating is an incredible challenge for AI, and that I should be able to beat it ... But another part of me thought surely before the game of chess against Deep Blue every single grand master would have said a chess machine cannot compete with Garry Kasparov" - Harish Natarajan

Argument Mining | The first challenge facing the IBM team was programming a computer to research, write, and deliver a persuasive speech on a topic it had never seen before. Forming a convincing argument requires claims and supporting evidence. While Project Debater had access to as much information as it could need - a databank of around ten billion sentences, from an assortment of newspapers and journals - the difficulty lay in selecting relevant information and processing it within the allotted 15 minutes. IBM had to develop effective natural language processing techniques capable of distinguishing between evidence and claims and deciding what was relevant to the motion. This process of extracting and identifying argumentative structures from large bodies of text is known as argument mining. In order to perform these rapid, efficient calculations, IBM used deep neural networks: sets of algorithms loosely modelled on the human brain and designed to recognise patterns in large data sets – ideal for identifying claims in documents. This is a difficult task, even when parsing highly relevant articles: a study found that, on average, only 2% of sentences in Wikipedia articles contain 'Context Dependent Claims' (CDCs) – concise statements that directly support or contest given topics. For instance, consider the following sentence:

"Because violence in video games is interactive and not passive, critics such as Dave Grossman and Jack Thompson argue that violence in games hardens children to unethical acts, calling first-person shooter games "murder simulators", although no conclusive evidence has supported this belief."

Only the emboldened text (the claim that violence in games hardens children to unethical acts) contains the CDC. Thus, to help identify the CDC more quickly, IBM used markers such as the word "that" (which commonly prefaces a claim) as well as a list of other "claim phrases" to search for in the document. In addition, the AI analysed large data sets of articles with annotated CDCs to find patterns in the syntax where CDCs are present.

Sentiment Analysis | Once a CDC has been identified, the next challenge is to determine whether the claim agrees or disagrees with the motion that Project Debater is trying to argue. There have been many previous models that attempt to identify the sentiment of a claim, all of which look at markers such as positive or negative syntax. Though they are broadly successful, they still struggle to classify statements which don't convey explicit sentiment. Take, for example:

"The people, not the members of one family, should be sovereign"

To a human reader, the statement clearly expresses a negative sentiment towards monarchy. However, there is no direct reference to monarchy, and furthermore, the idea of sovereignty is abstract, and therefore hard for a machine to grasp. To overcome this, IBM improved its model by expanding its focus to include the material surrounding the sentence in which the claim is made, to provide a better understanding of context. Project Debater now considers:

• Headers: the title of a section (e.g. "Advantages", "Criticisms") can suggest inherent positive or negative sentiment
• Neighbouring sentences/claims: IBM found 88% of neighbouring claims have the same sentiment, and so understanding an easier claim can shed light on a more complex one next door
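To make the marker-based idea concrete, here is a deliberately naive sketch of claim spotting. It is our illustration of the general approach, not IBM's pipeline, which couples such cues with trained neural networks rather than keyword rules.

# A toy 'Context Dependent Claim' spotter. Real argument mining learns far
# richer patterns; this only demonstrates the marker idea described above.
CLAIM_MARKERS = ["argue that", "argues that", "claim that", "suggest that"]

def candidate_cdcs(sentence: str, topic_words: set) -> list:
    """Return text following a claim marker, kept only if it mentions the topic."""
    candidates = []
    lower = sentence.lower()
    for marker in CLAIM_MARKERS:
        idx = lower.find(marker)
        if idx != -1:
            tail = sentence[idx + len(marker):].strip(" .")
            if any(word in tail.lower() for word in topic_words):
                candidates.append(tail)
    return candidates

sentence = ("Critics such as Dave Grossman and Jack Thompson argue that "
            "violence in games hardens children to unethical acts.")
print(candidate_cdcs(sentence, {"violence", "games"}))
# -> ['violence in games hardens children to unethical acts']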



"What the machine is better at than any human could ever be is finding relevant evidence – studies, examples, cases – and get that context. Another thing I think is very impressive was not only its ability to present evidence, but also its ability to explain why it mattered in the context of the debate” - Harich Natarajan, after his debating victory against IBM's Project Debater

Project Debater is thus able to give each claim a score indicating the confidence it has that the claim has been classified correctly. Confidence is important, as debaters would typically want to present only high-confidence claims, to avoid the risk of being rebutted later during the debate. Now, armed with a few hundred highly relevant and well-classified text samples, Project Debater removes redundant claims and organises the strongest arguments into a speech.

Delivering a speech | The final challenge is performing a speech in front of a crowd of hundreds of eager AI enthusiasts. Whilst text-to-speech systems have existed for many decades, the voice produced sounds distinctly robotic, due to dull, monotonic delivery and awkward pauses in the middle of phrases. In order to perform successfully in the debate, the IBM team had to focus on Project Debater's delivery. This partly involved the use of pre-programmed humour and idioms, which exploited the unusual nature of an AI debater. In addition, deep neural networks were used to determine natural phrase breaks and points of emphasis in sentences. The resulting effect was a smooth, natural and expressive voice.

Room for improvement | Despite all of these technological breakthroughs, Project Debater did not emerge victorious – Harish was the clear winner despite IBM's best efforts. According to Harish, although "the machine is better than any human could ever be in finding relevant evidence", it ultimately "struggled to respond to the more subtle claims [he] made". This suggests that the AI has not yet come to grips with the full intricacies of natural language and thus cannot respond to more complex arguments.


Furthermore, Harish comments that the most convincing debaters not only use logic but also play with the emotions of the audience. Thus, in order for Project Debater to defeat a human opponent, it will need to learn how to interpret the emotions of a crowd and respond accordingly. This spontaneity is beyond the scope of modern-day AI. However, Harish points out that when you break down the art of debating into its constituent parts, there is nothing stopping an AI from beating a human at any one of them.

The future of Project Debater | It cannot be denied that IBM has made major advancements in the field of AI and natural language processing. Over the course of the last two decades, we have seen great leaps in the comprehension of language: from early, unreliable spam filters, to AI systems such as IBM's Watson, which can respond to open-ended questions. Project Debater is the next level in this evolution, being able to tackle more complex topics and engage in full debates. In terms of usefulness to humanity, however, Project Debater's most valuable development is its ability to read large data sets and focus on the most relevant information. A human could spend a day researching something in depth and would not be able to read a fraction of what Project Debater could analyse in a minute. In the era of big data and fake news, it is more critical than ever that we are provided with accurate and reliable information so that we can make well-informed decisions

Shavindra Jayasekera is a first year Mathematics student at Trinity College. Art by Sonia Aguera. Twitter: @immunosoni IG: @sonia.aguera



Are Two Brains Better Than One?

Leia Judge investigates how emerging technologies in neuroscience raise questions on biology and the ethics of consciousness

The sound of footsteps echoing across a room, the heat of the sun burning on your skin, colourful light dancing across the inside of your shut eyelids. These phenomena are part of our everyday lives, but have you ever considered what it means for us to be able to experience them? Every experience, sensation and thought is the result of our capability to be conscious. Consciousness, minimally the state of being aware of and responsive to one's surroundings, has intrigued philosophers and scientists alike for thousands of years. It is a highly complex and poorly understood process, orchestrated by billions of neurons forming intricate networks within our brain, firing in a display of complex yet choreographed synchrony. While a 'mind' understanding itself seems tremendously difficult, understanding how our minds work is of both philosophical and scientific importance as efforts to reconstruct our most complex organ, the brain, in the laboratory gain momentum.

dishing up brains | Smaller, simplified versions of brain tissue can now be grown in laboratories from stem cells – these are called cerebral organoids. Scientists have pioneered a method by which these "mini-brains" can be grown in the lab, with many labs across the world using organoids in their research to investigate a wide range of important topics, including brain development, evolution and neurological diseases. While cerebral organoids are far from fully functioning, autonomous brains, they contain many features of developing brains. The organoids form distinct regions, such as a cortex and grooves resembling those found on the surface of a true brain, along with features that, in a developing brain, would eventually become the brain stem and central nervous system. The largest organoids created in labs measure ~4 mm in diameter and contain approximately 2-3 million cells. In stark contrast, an adult brain is around 1,350 cm³ and contains at least 172 billion cells!

At present, the possibility of organoids acquiring higher-order properties, such as sentience, is highly remote. However, as these organoids become increasingly complex, the feasibility of such a feat may increase. In fact, some organoids in which retinal cells formed together with brain cells demonstrated an ability to fire neural impulses in response to light. Perhaps even more remarkably, only in March this year it was reported that an organoid was capable of spontaneously linking up with spinal cord and muscle tissue, leading to visible contractions of the muscles!

Efforts to grow a 'brain-in-a-dish' are not an exercise in ghoulish meddling, a piece of Frankensteinian scientific hubris some 200 years after Mary Shelley's tale. Instead, these labs aim to use such organoids to better understand normal and abnormal brain development, the latter of which can lead to diseases such as microcephaly, autism and intellectual disability. Moreover, this may potentially inform the development of more effective therapeutic strategies in the treatment of neurodevelopmental disorders.



no brain, never mind | When considering the possibility of organoids gaining sentience, the question arises of what consciousness is and how we can actually measure it. Is there a Turing test for consciousness? In short, no. David Chalmers is a philosopher and cognitive scientist working on the philosophy of mind and language. In 1995, he wrote:

"It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does."

Chalmers is a main advocate of the "hard problem of consciousness": the problem of explaining how we, as sentient beings, are capable of subjective experiences. Why can we experience the sensation of heat or pain, while a toaster or an iron cannot? Describing this issue, Isaac Newton wrote in a 1672 letter, "to determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie."

However, the relevance of a "hard problem" has been disputed by many philosophers and cognitive neuroscientists. The philosopher Daniel Dennett, for example, believes scientists and philosophers should instead focus on what happens once some content 'enters consciousness': what does this cause, enable or modify?

In trying to answer such questions, there are two predominant scientific frameworks which attempt to relate consciousness to measurable scientific principles: the neural correlates of consciousness (NCC), and integrated information theory (IIT). Notable scientific figures such as Francis Crick and Christof Koch have made significant progress towards identifying neurobiological events which occur simultaneously with the experience of subjective consciousness - the NCC. While work on the NCC addresses which mechanisms are linked to consciousness, it does not address the underlying question of why consciousness arises in the first place. In contrast, IIT, developed by Giulio Tononi, proposes an identity between consciousness and integrated information, which can be defined mathematically and thus is measurable in principle. In other words, it offers a way to analyse the brain, or brain-like structures, to determine if they are structured in a way which could give rise to consciousness. Koch has said: "Information theory says yes, this piece of cortex will feel like something. It may feel very different. Because it doesn't have an eye and an ear, it's very unclear what it's going to be conscious of. But in principle, this thing will experience something".
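In highly schematic form (a simplification for orientation, not Tononi's full formalism), IIT's quantity Φ measures how much the cause-effect structure of a whole system exceeds that of its parts, minimised over the ways of cutting it:

\[
\Phi(S) \;\approx\; \min_{P}\; D\!\left[\, p(S) \;\middle\|\; \prod_{k} p(M_k) \,\right]
\]

Here the minimum runs over partitions P of the system S into parts M_k, and D is a divergence between the joint behaviour of the whole and the product of its decoupled parts. On this view, a system is conscious to the degree that Φ exceeds zero: no cut accounts for what the whole does, which is what licenses Koch's claim that a suitably structured piece of cortex "will feel like something".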

In stark contrast, advocates of "epiphenomenalism" believe that our consciousness is merely a by-product of normal physiological processes. In this case, there is a possibility of a cerebral organoid gaining some semblance of consciousness once its similarities to true brains outnumber its dissimilarities. Thus, it becomes less a question of "if?" and more a question of "when?".

two brain or not two brain | Although both of these frameworks fail to fully explain how "brain-like" neural tissue needs to be in order to experience sentience, consideration of the philosophy of consciousness and the ethics of brain research is essential. If a brain organoid produced by researchers in a laboratory appeared to have conscious experiences or subjective phenomenal states, what would this mean ethically? Would this "brain" deserve the protections given to human and animal research subjects? Indeed, from an ethical point of view, it has even been argued that it would be unethical to abandon the idea of "brain surrogates", given the extent of suffering caused by neurological and psychiatric disorders. However, the closer the proxy comes to a functional brain, the more ethically problematic it becomes. In May 2017, the Duke Initiative for Science & Society at Duke University in Durham, North Carolina, concluded that there is a need for clear guidelines for researchers developing cerebral organoids and similar technologies. Additionally, to comprehensively consider the implications of autonomy and sentience, such guidelines would need input not only from neuroscientists and stem-cell biologists, but also from ethicists and philosophers.

Cerebral organoids have enormous therapeutic potential, and whilst not fully developed, they form the basis of a thought-provoking exercise regarding the abstract nature of consciousness and how the standard definition of sentience could, in theory, be even more complicated than it already seems. Given the lack of consensus on how and why consciousness arises, or how to measure it, it will be difficult to know conclusively whether an organoid is conscious, further complicating the issue.

So are two brains really better than one? From a therapeutic perspective the answer is overwhelmingly yes, as new methods for more closely modelling development and disease and for accurately screening new drugs are increasingly in demand. From a philosophical and ethical point of view, the jury is still out, but there is little to be concerned about at the moment or in the foreseeable future. We have no real frame of reference for considering such a topic, and while the likelihood of an anguished brain floating in a Matrix-like nutrient bath is remote, it is likely to remain a topic of conversation as the technology continues to develop. In a broader sense, as the scientific landscape moves its focus increasingly towards recapitulating 'humanness', it opens up a narrative surrounding consciousness, the right to autonomy and, most strikingly, what it means to be 'human'

" It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does." - David Chalmers

Leia Judge is a 1st year PhD student in the Department of Physiology, Development and Neuroscience and attends Corpus Christi College. Artwork by Eva Pillai



Pelagic magic: counter-illumination in nocturnal waters
Amy Williams explores the art of disappearance in her winning submission to our "science in magic" competition in conjunction with the Academy of Magic and Science

CRISPR gene editing technology is currently creating a new wave of potential model organisms - biological research on animals like the bobtail squid and dwarf cuttlefish is heating up, for example neuroscientific work into how their nervous systems generate their active camouflage


As night falls on the coasts of Hawaii, thousands of bobtail squid emerge from the sands of their shallow seabeds to hunt. With them, they bring light to the surface of the ocean, making themselves masters of disguise against the illuminated night sky as seen by predators in the waters below. However, we cannot give the squid full credit for this beautiful, bioluminescent illusion, for after all there's an inkling they are using some black magic… And the mechanism behind this 'magic' may have great potential for medical research.

The relationship between the Hawaiian bobtail squid and the bacterium Aliivibrio fischeri is a truly remarkable natural phenomenon. Bioluminescent A. fischeri bacteria accumulate inside the squid's specialised light organs to gain the shelter and nutrients they need. As they build up in numbers, they produce light through "quorum sensing" — meaning that when the bacteria reach a critical density, expression of the gene for luciferase (the protein responsible for bioluminescence) is induced. Light intensity can be controlled through this density of bacteria, allowing the squid to camouflage against the moonlight and starlight as it moves across the surface of the water at night. Without this process, termed "counter-illumination", the squid's silhouette would be visible to predators below, as it blocks the path of starlight penetrating the water.

So what's so fishy about this 'black magic'? The relationship between the squid and A. fischeri is not a stable one. At the break of dawn, the squid ejects the bacteria, as bioluminescence is no longer needed, forcing the bacteria to find shelter and nutrients elsewhere (what happened to fidelity?). Consequently, when the squid retreats to its dormant state in the sand at the end of the night, it has to build up its bacterial bioluminescent battery again before it re-emerges at dusk. Here, the host squid has to carefully take up the correct bacteria into its light organ from the seawater, or else the counter-illumination effect fails and the squid may succumb to predation.

So, we know the effect relies on the squid's so-called symbiotic relationship with bacteria. But how does the squid know which species of bacteria are 'good'? The Hawaiian bobtail squid can discriminate between which bacteria can enter its light organs and which can't — and it does so using cilia (which act rather like bouncers at the door of a club). Through studying nascent squid exposed to A. fischeri under a microscope, we know that the squid takes up the bacteria into a specific area within the light organ. To investigate whether the squid is 'choosy' in exactly what it takes up (i.e. whether the process is passive or active), scientists from Wisconsin employed particles equal in size to bacteria. When presented to the squid instead of bacteria, these particles accumulated in the same area within the light organ, meaning that a mechanical process for this uptake within the host was at play. When the experiment was repeated with larger particles, their greater size did not increase the chance of them accumulating, telling us that the accumulation was not under the influence of a passive, size-dependent 'trap'. What was discovered, however, was that the fluid mechanics of the light organ's cilia acted as a 'mechanical gate', actively producing a vortex-like flow which functioned to filter the particles by size. Following the routes the particles took over the ciliated surfaces of the light organ revealed two distinctive types of flow, attributed to two different classes of cilia used for discrimination of particles. Whilst longer cilia created a vortex-like flow to filter particles by size, shorter cilia were found to beat randomly, mixing the local flow and keeping particles in place (where the 'chosen' bacteria can then be chemically prepared before they enter the light organ). Without this 'bouncer' function of the cilia, entry of bacteria into the light organ cannot be regulated: a potentially fatal problem if the correct bacteria (A. fischeri) are not found at the correct density within the light organ. The illusive counter-illumination effect would fail as a disguise against predators.

So, what does this mean for us? The importance of ciliary function for partner choice in the host squid may also apply to the role of cilia in other mutualistic relationships. Cilia are highly conserved throughout nature, so analysing their mechanical function in the squid-Aliivibrio model could be used to investigate ciliated surfaces in humans. Many human diseases are caused by dysfunction of ciliated surfaces and/or their symbiotic relationships with bacteria. For example, in primary ciliary dyskinesia (PCD), defective movement of cilia results in a range of debilitating symptoms such as chronic respiratory congestion, recurrent sinus infections and hearing loss. Furthermore, in inflammatory bowel disease, research into the role of microbiota within the gut is ever-growing. The squid-bacteria model may even shine light on the mechanisms behind fertility issues, for after all the sperm's flagellum tail is just a modified cilium. Understanding the function of cilia in humans could therefore hold potential for the development of diagnostic tools and new medicines. So perhaps the glowing squids' black magic wasn't too bad after all? The future is looking bright: it's time to get kraken

Amy Williams is an MPhil student in the Department of Zoology and a member of Corpus Christi College. Artwork by Serene Dhawan

light from celestial bodies casts a clear shadow for most marine animals

light from the bobtail's light organ obscures its silhouette



Tête-à-tête with a Cavendish Luminary: Professor Brian Josephson
Mrittunjoy Guha Majumdar interviews Nobel Laureate Professor Brian Josephson about his work at the Cavendish Laboratory

In 1962, a 22-year-old student predicted the mathematical relationships between the current and voltage across a weak link coupling two or more superconductors. Since then, the world of superconductivity has never looked back. That monumental achievement in the Cavendish Laboratory not only made Prof. Brian D. Josephson the only Welshman in history to win the Nobel Prize in Physics (in 1973), but also made him a luminary in the field of condensed matter physics. Condensed matter physics broadly relates to the study of the physical properties of matter in 'condensed' form, in which the density of its constituent particles is high enough for their interactions to be crucial to these properties. One of the most groundbreaking discoveries in condensed matter physics in the second half of the 20th century was the Josephson effect: in a device consisting of two or more superconductors coupled by a weak link, a spontaneous current flows indefinitely across the device without any voltage being applied. The man who discovered this effect was Prof. Josephson, currently emeritus professor of physics at the Cavendish Laboratory. Here, he is interviewed by Dr Mrittunjoy Guha Majumdar, his postdoctoral research associate, who is working on emergent symmetries and gauge fields in physics. They discuss Prof. Josephson's journey in physics and his work at the Cavendish Laboratory, which takes place at the intersection of a number of areas: the Josephson effect; the recent Mind-Matter Unification Project, which explores the physics of the mind; coordination dynamics, which deals with coordination between components of a physical system; and the idea of synergy, which refers to the combined effects produced by two or more of these components in such systems. Unassuming, simple, and as brilliant as always, Brian Josephson is one of the last among the Cavendish giants in condensed matter physics, after the likes of Sir Nevill Francis Mott and Philip Warren Anderson.

How was your experience learning physics, as a student and a researcher, and then teaching at the Cavendish Laboratory over the years?

I had a very helpful physics teacher at school. He lent me the book Theoretical Physics by Joos to read, and I was surprised to learn that you could actually calculate properties of matter (using quantum theory) rather than just measuring them. As an undergraduate, I switched from Part II Maths to Part II Physics for my 3rd year, as what was studied in applied maths at that time had little to do with the real world. For my PhD project, I decided to do an experiment (on 'nonlinearity in superconductors at 178 MHz'), as the idea of sitting at a desk all day did not appeal to me; though, of course, that is what I ended up doing later, when I switched to theory.

Today, Josephson junctions have significant applications in quantum computing and digital electronics. Even the National Institute of Standards and Technology (NIST) standard for one volt was achieved with the help of an array of 20,208 Josephson junctions put in series! We would love to know how your discovery of the Josephson effect came about.

I was the only person in the low temperature group who could understand the theory of superconductivity, so I needed to become expert in it so as to be able to offer advice. I learned of the existence of a kind of a wave in a superconductor through studying papers on the subject, and started to wonder if this was something real, as well as just a quantity in the theory. I realised after a bit that while the phase itself was unphysical, in a system with two superconductors, phase differences between the two systems could have consequences, if electrons could move from one to the other. Then, Ivar Giaever [with whom Prof. Josephson shared the Nobel Prize] came out with his junctions that fitted the requirements, and, later, a paper was published by Cohen et al. that calculated the behaviour of the junctions and justified Giaever's calculation. Fortunately for me, they could not figure out what the phases meant, so did not deal with the two-superconductor case; so I did the calculation for it, and was surprised to discover the zero-voltage contribution, which I was not expecting.

Incidentally, while I was an undergraduate, I learnt about the Mössbauer effect and realised that relativity implied that the frequency should be a function of temperature. This had a significant influence on an experiment being carried out at Harwell, and rather baffled the researchers, who couldn't figure out what was happening. I wrote to them, and, in response, they sent a uniformed driver to take me there so as to write the idea up for publication, which much impressed a friend who saw me being collected.
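For reference, the two relations Josephson derived take a compact textbook form. Writing φ for the phase difference between the superconducting wavefunctions on either side of the weak link:

\[
I = I_c \sin\varphi, \qquad \frac{\mathrm{d}\varphi}{\mathrm{d}t} = \frac{2eV}{\hbar}
\]

The first says that a supercurrent up to the critical current I_c flows with no applied voltage (the DC Josephson effect); the second says that a fixed voltage V makes the phase wind at frequency 2eV/h, producing an alternating current (the AC effect). The AC relation underpins the volt standard mentioned in the question above: a junction irradiated at microwave frequency f develops sharply quantised voltage steps V_n = nhf/2e, so a series array of many junctions can realise one volt in terms of fundamental constants.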



Interdisciplinarity has become a major point of interest for groups such as Theoretical Condensed Matter Physics at the Cavendish today. In the world of physics today, one has research bridging fields as disparate as cosmological phenomena and condensed matter physics with, more generally and popularly, areas of biology and physics. What are your thoughts on the same?

I think interdisciplinarity is important, but it tends to have more problems getting support than does work in a single discipline.

Recently, you have started looking at the Physics of the Mind as part of the Mind-Matter Unification project. In this, you speak of how 'mind processes might be integrated into regular physics'. You are broadly exploring what may loosely be characterised as 'intelligent processes in nature'. Please tell us more.

There seems to be a 'two cultures' situation, in that physicists don't know much about meaning, while people working on the role of meaning don't treat it in the way physicists do. So there should be mileage in bringing the two approaches together to make a unified theory. A preprint I've posted recently has attracted considerable interest—in fact, on occasion, it is the most read paper of all of those published by people in the physics department.

You have lately been looking at coordination dynamics. How do you think these concepts can help us explore the nuances of 'intelligent processes' in the physical world?

One of my interests in that connection is the phenomenon of language, where there are computer models that treat it very effectively in terms of a collection of units that work together. This, in a way, explains how some aspects of intelligence work, and the challenge is to treat this in a systematic fashion so as to understand it more clearly.

We have recently started looking at ways in which systems work together to produce emergent symmetries and properties such as in condensed matter physics. We have also seen the emergence of gauge fields in this context. What are your thoughts on this area of research that ties research at the frontier of physics to your work on coordination dynamics?

The point is basically that systems tend to settle down into symmetrical forms, something that can have mathematical expression. That is one of these facts that are 'true for no reason', or, as the philosopher Merleau-Ponty put it, 'that is the way things are, and nobody can do anything about it'.

You have always been a good science communicator, having spoken on a number of areas within physics and beyond. To be an effective scientist today, how important is good science communication?

In this day and age, it is being able to communicate in great detail to funding organisations that matters, so as to be able to get the funding needed to do research. And you have to be able to say what your research will achieve before you have done it! It's a crazy situation, just like the recent ruling of the department that emeritus staff such as myself will have to justify having access to the department—every six months in some cases (though there are suspicions that they actually mean every two years, but being scientists they got biannual and biennial mixed up in the text).

Thank you for your time, Brian.

Thank you.

Mrittunjoy Guha Majumdar is a postdoctoral fellow at the Cavendish Laboratory. Artwork created by Alex Bates from a photo by Gingko-Biloba



The Earth's Sixth Mass Extinction
Alavya Dhungana asks Professor of Palaeobiology Erin Saupe whether today's animals are as doomed as the dinosaurs

Sixty-six million years ago, a large meteor hit the Gulf of Mexico, resulting in worldwide tsunamis and wildfires. The impact has been reported as a trigger for the Cretaceous-Palaeogene (K-Pg) mass extinction event, wiping out three-quarters of plant and animal species on Earth, including non-avian dinosaurs. Mass extinction events like the K-Pg event have happened five times in Earth's history. These events are characterised by a loss of at least 75% of species diversity in a short interval of time (usually less than five million years). Today, the Earth is experiencing rapid losses in species diversity due to climate change and anthropogenic influence. The burning questions: will future scientists looking back at the rock record being formed today find that we are living through a mass extinction event? And can we use the existing fossil and geological record to inform modern conservation efforts?

To answer these questions, future scientists would want to look for environmental changes such as the fragmentation of natural habitats. Fragmentation occurs when larger, more expansive habitats break up into isolated fragments as parts of that environment are destroyed. 70% of Earth's remaining forests are now within 1 km of the forest's edge. For instance, if you were to walk the equivalent distance from Emmanuel to Magdalene College in Cambridge from inside the most forested regions of the world today, you would no longer be in a forested environment. Future scientists would be able to deduce this from a rapid decrease in forest environments in the rock record. This change may already be evident, but future expansion in agricultural land (up to 18% by the middle of this century) and areas occupied by urban centres (predicted to triple by 2030) mean that it is likely to amplify, leaving a clear signal in the future geological record.

Future scientists could also measure the levels of CO2 (carbon dioxide) and SO2 (sulphur dioxide) in the atmosphere to characterise environmental changes. These two gases are thought to be a part of the mechanisms which can result in extinction, and are typically emitted by volcanic events such as the Deccan Traps volcanism, which has been suggested as a trigger for the K-Pg mass extinction. Gas bubbles trapped in polar ice provide a direct measurement of Earth's past atmospheric composition. For older periods, where we do not have an ice record, scientists can use proxies such as fossilised leaves, which show a change in the density of stomata (pores that facilitate gas exchange) with a change in CO2 level. SO2 forms sulphur aerosols that can have significant effects on regional and global climate. SO2 emissions have been increasing since the 1850s (the approximate beginning of the industrial period), peaking in the 1970s and declining slightly in the 2000s. However, in recent years emissions have increased once again. CO2 is also a major driver in Earth's past and current climate system. Before the industrial revolution, atmospheric CO2 levels were at around 280 ppm, but have risen to greater than 400 ppm today. Climate models are unable to reproduce recent warming trends without including this rise in CO2. Recent warming has already resulted in retreating mountain glaciers, thinning of Arctic sea ice and sea level rise. Continuing at the current rate, there will be double the present amount of CO2 in the atmosphere in 50 years.

Comparing current extinction rates with those of previous mass extinctions is another measure scientists can use to assess whether we are living through a mass extinction. This can be difficult, as modern extinctions may be underestimated due to many species being unevaluated or undescribed, and fossil extinction rates may be difficult to determine due to preservational biases (not every species going extinct will leave a trace in the fossil record). Erin Saupe, a Professor of Palaeobiology at the University of Oxford, explains that comparing modern and fossil extinction rates "depends on the analytical method and the type of data used." However, she mentions that a study from Barnosky and colleagues at the University of California, Berkeley found that "current extinction rates are higher than would be expected from the fossil record". Current magnitudes of extinction (including threatened species) are estimated to be 14% for birds, 29% for reptiles and 31% for amphibians. The removal of certain key groups from ecosystems can have large impacts on ecosystem dynamics, and as species are lost this effect will only compound. Whilst extinction magnitudes have not yet reached mass extinction levels for species diversity loss (75% or above), some evidence suggests that current extinction rates are higher than average, which could indicate that a mass extinction event is occurring.

However, unlike previous mass extinctions, the future scientist would not find evidence of a major meteor impact or increased volcanic activity. Instead they may find that rapid climatic and environmental changes co-occurred with the expansion of one species - Homo sapiens. With the expansion of human agricultural land and urban centres, natural habitats become increasingly fragmented. Anthropogenic emissions of CO2 and SO2 from burning fossil fuels significantly impact the global climate system. The resulting rapid change in natural ecosystems and climate ultimately imposes significant stresses on organisms. But how might this affect real-world biota, and how could this knowledge inform conservation efforts in the present day?

Professor Saupe explains that "the fossil record provides us with an archive of past extinctions that have occurred in response to abiotic and biotic perturbations." Using this record, you can "test if certain traits make species more prone to extinction. For example, species with small geographic ranges are consistently found to go extinct more frequently and have shorter temporal durations than species with large geographic range sizes". In some modern isolated environments, such as Australia and the southern tip of Africa, as many as 90% of species can only exist in one geographic region.

Changes in the environment, climate and food sources are factors that can alter the range in which certain species live. We may be able to target conservation on species with smaller geographical ranges which, based on fossil data, are more likely to be prone to extinction. Furthermore, Professor Saupe explains that we can study how tolerant certain species are to changes in environmental conditions in the fossil record, as a potential indicator of how they may respond to current climatic stresses (e.g. changes in temperature). She has worked on certain bivalve (e.g. clams, oysters) and gastropod (e.g. snails, limpets) species that have shown that their tolerances "did not change significantly over a three-million-year interval, even in the face of climate change" at the macroscopic level. Insights like this can help conservation biologists to target their efforts towards species that do not exhibit higher levels of tolerance to climatic stresses. The only way scientists are able to gain long-term insights into this is from the fossil record.

Humans have fundamentally altered Earth's landscapes and atmospheric composition. This large and rapid change could be driving us through Earth's sixth mass extinction event. Viewing current changes in the environment and biotic composition in the context of previous extinction events can help target future efforts in conservation, and has the potential to give us long-term insights that studies on living organisms cannot provide alone

Alavya Dhungana is a 3rd year Earth Sciences student at Homerton College. Artwork by Eva Pillai.

" Whilst extinction magnitudes have not yet reached mass extinction levels for species diversity loss (75% or above), some evidence suggests that current extinction rates are higher than average which could indicate that a mass extinction event is occurring"



Genetic Engineering of Humans: Opening Pandora's Box?

BlueSci explores the biology and ethics of human genetic engineering with discussion from Prof. Eric S Lander, director of the Broad Institute and author of a recent call for a moratorium on editing of embryos.

The ease with which modern science can edit genetic material suggests it could be a question of when, not if, we see the rise of germline editing therapies.


In recent years, developments in genome editing technologies have allowed scientists to manipulate human DNA with increased ease and precision. However, regulatory and ethical frameworks have not kept pace with the technology, leaving scientists without clear guidelines in a field already plagued by debates surrounding the safety and morality of its research. Traditional gene therapy relies on a desired gene being delivered to a particular cell type in the patient (often the specific cell types relevant to the disease) and subsequently incorporated into the genomes of those cells. Typically, this is carried out on somatic cells, whose genetic information is not passed on to offspring, as opposed to germline cells such as the egg, sperm and zygote. Most controversial are those technologies that introduce heritable genetic changes, termed germline edits. Traditionally, such changes have only been introduced
into non-human animals, such as the famous 'oncomouse', engineered with an increased likelihood of developing cancer, and numerous species of agricultural plants. However, in November 2018, the Chinese scientist He Jiankui claimed to have successfully edited the germlines of two human embryos, altering a gene to introduce HIV resistance. Although the scientific accuracy of this claim is debated, the response from the scientific community has been almost uniformly negative, with some even describing the work as 'monstrous'. It is unsurprising, then, that the work has prompted a World Health Organisation (WHO) panel to demand a registry for human genome editing, and some scientists to propose a complete moratorium on this line of research. However, the ease with which modern science can edit genetic material suggests it could be a question of when, not if, we see the rise of germline editing therapies. Has Pandora's box already been opened?


A short history of genome editing

Arguably, humans have been modifying their environment by genetic engineering since the beginning of civilisation itself. Indeed, evidence for selective breeding of plants and livestock extends back thousands of years before Gregor Mendel introduced the concept of inherited factors, which we now call genes, in the 19th century. Plants or animals with a desired trait (for example, the largest fruit or the best milk production) would be chosen by farmers to create the next generation, in the hope that their offspring would also exhibit that characteristic, or phenotype. Since the genetic makeup of these plants and animals underlies their phenotypes, breeders unknowingly manipulated the genomes of crops and livestock. Indeed, Mendel's key contribution was to demonstrate the existence of units of inheritance that are transferred from one generation to the next. Adoption of early ideas concerning heredity and selective breeding contributed to moral catastrophes, such as the eugenics programmes of the early 20th century. However, these ideas eventually grew into the modern field of genetics, which forms a cornerstone of modern biological research.

The advent of genetic engineering in the modern sense of the term required a deeper knowledge of genetic material. In 1953, Watson, Crick, Franklin and Wilkins discovered the double helical structure of DNA we know today: a groundbreaking milestone which expanded our understanding of how the DNA message can be read and copied. In the genetic theory developed in the 1950s and 60s, genetic information is encoded in DNA. Specific segments of DNA, genes, are copied into RNA, which carries these instructions to the cellular machinery that makes proteins, which in turn perform many essential functions within the cell. It is these proteins that give rise to the phenotypes Mendel observed. Thus, the instructions encoded in the human genome influence our shape, size, sex, function and, importantly, dysfunction. This mechanistic understanding of genetics made altering genes to eradicate disease or create new phenotypes seem possible. Suddenly, writing, editing and designing life was, theoretically, within the realm of science rather than science fiction.

After the 1960s, genetic engineering progressed rapidly. By 1972, the first man-made (recombinant) DNA had been synthesised by joining together DNA from two viruses. By 1973, the first animal gene had been cloned: a segment of DNA from a frog was transferred into a bacterium, which then produced the frog protein. In 1976, Herbert Boyer founded the first genetic engineering company, Genentech, which used genetically modified bacteria to create the human hormone insulin. However, for many biologists the true potential of genetic engineering instead lay in editing genetic material in living humans. Gene therapy - editing, removing or replacing faulty human
genes - held the promise of eradicating genetic disease. Many systems for gene delivery were developed in the late 1970s and 80s to deliver DNA into cell nuclei. However, researchers struggled to find methods to control the location of gene insertion. This raised the possibility of a gene not integrating into the targeted cell's genome at all, or integrating but not being 'read' by the cell's machinery to produce a protein, rendering the inserted DNA useless. Even worse, a would-be healthy gene might be inserted in the middle of another important gene, destroying the function of the working gene and creating more problems. In 1972, Theodore Friedmann and Richard Roblin co-authored a paper entitled "Gene Therapy for Human Genetic Disease?", in which they envisaged that genetic diseases in humans could be tackled through the use of gene therapy. The paper itself is seen as a milestone in the field, although its main message is one of caution: "For the foreseeable future ... we oppose any further attempts at gene therapy in human patients because our understanding of such basic processes as gene regulation ... is inadequate". This feeling of caution was echoed at the 1975 Asilomar Conference, which established tight guidelines on recombinant DNA technology that were subsequently adopted by the American National Institutes of Health and Food and Drug Administration. Despite these technical challenges and social reservations, by 1990 the first human gene therapy trial was underway. Due to their ability to enter cells readily, viruses were used to deliver the desired DNA. In this case, the treatment temporarily replaced a defective gene in a four-year-old girl suffering from a rare genetic disorder, offering a semi-permanent cure. However, promising early trials became marked by tragedy. In September 1999, Jesse Gelsinger became the first person to die from a gene therapy treatment after volunteering for a clinical trial. In Jesse's case, the viral vectors used to deliver the gene into his cells caused a severe immune reaction, resulting in his death four days after injection. In 2002, a clinical trial using gene therapy to combat immune deficiency in children was halted after one subject tragically developed leukaemia; the child was dubbed 'gene therapy's first cancer victim'. These deaths plainly signalled that our understanding of Friedmann's "basic processes" was still lacking.



Despite these tragedies, gene editing techniques continued to improve throughout the 1990s and 2000s. By the mid-1990s, the next generation of gene therapy technologies was in development, promising safer and more effective future treatments. One of the first of these technologies was zinc-finger nucleases (ZFNs). Unlike previous imprecise techniques, ZFNs could recognise and cut specific short sections of DNA, massively increasing the probability that the transgene would be incorporated correctly at the site of the cut. Eventually, viral vectors were also created in which the original viral genome was removed. This meant that the virus would be unlikely to reproduce inside the patient, reducing the chance of an adverse reaction. Furthermore, 2009 saw the discovery of transcription activator-like effector nucleases (TALENs), which could cut the genome with far higher specificity than ZFNs and be created in less time. With increasingly sophisticated tools capable of more precise genome engineering restoring faith in gene editing, trials of gene therapy started to become more commonplace. By 2010, gene therapy trials were being carried out for diseases ranging from inherited retinal disease to HIV. Despite the use of germline editing in agriculture, the associated risks and ethical debate surrounding germline edits in a live human ensured that they remained firmly within biomedical labs. Despite colossal technical progress since the birth of genome engineering, by 2010 reliable, specific and safe editing of DNA was still a scientific dream. It is perhaps for this reason that the ethical and regulatory issues surrounding germline edits are somewhat absent in the scientific literature. This conversation changed dramatically, however, with the discovery that an ancient bacterial defence system called CRISPR could edit DNA. CRISPR ignited a fire which has since permeated modern molecular biology.

From viral immunity to genome engineering – a bacterial immune system kicks off a new frontier in science

CRISPR has become the shorthand for an elegant gene editing tool that has revolutionised the ability of scientists to alter DNA sequences. Discovered through interdisciplinary collaboration among microbiologists and structural biochemists, the CRISPR system is an ancient defence mechanism used by bacteria to fight off viral intruders. Cas9, the protein component of the CRISPR system, cuts DNA in a programmable fashion, guided by a set of RNA instructions provided by the cell or, in the context of genetic engineering, by the researcher. RNA and DNA sequences can be designed and produced in a laboratory relatively cheaply and quickly, and it is easy to design an RNA sequence that targets a very specific region of DNA. This meant that CRISPR systems could be targeted nearly anywhere, in any genome; a groundbreaking leap forward in technology over previous techniques.


Credit: Casey Atkins Photography, courtesy of Broad Institute

BlueSci speaks with Professor Eric S. Lander, the lead author of the recent call for a moratorium on human germline editing published in Nature. Eric Lander is the director of the Broad Institute at Harvard-MIT, and his research focuses on understanding the human genome and its interaction with health and disease.

BlueSci: Prior to the reports of the CRISPR-edited twins born last year, many researchers had made statements against germline editing. Why do you think that these statements were not enough to prevent someone from carrying out gene editing unchecked?

I was on that first committee in 2015 and we wrote 'it would be irresponsible to use it unless and until certain things happen'. I think we all thought that meant WE SHOULDN'T USE IT UNTIL THESE THINGS HAPPEN. Calling it a moratorium or not wasn't really so salient in our minds. It's not so much [about] a rogue actor in China, because it's not about how you prevent a rogue actor. That's a problem of countries enforcing their laws; we have laws against murder, people still commit murders, they get arrested and thrown in jail. The question is what laws do we want to have? It's not about He Jiankui in China, it's about the rules countries should adopt. What I think was interesting is that following the events in China with He Jiankui, many people were racing to get a regulatory path in place to approve this stuff, and I think that's what shocked a lot of people. Like, wait a second, don't we need an international framework? This isn't just a regulatory question about how the Food and Drug Administration [FDA] approves things. These are important questions and nations should be talking to each other. As we lay out in the piece, we need to have transparency, we need to have consultation. There may be differences, but we want to establish an international dialogue and international norms. And I think the Hong Kong events [the Second International Summit] drove many of us to write this piece, because we felt that, ignoring any particular rogue actor, there were some particular disagreements about the steps that should be taken before any particular application should be permitted.



BlueSci: There are many who ask whether germline editing is inevitable, and almost every reply says 'yes, it is'. Why do you think this happened, and how was someone actually able to get to the point of doing this without anyone stepping in and saying 'woah, hold on'?

I think this whole inevitability argument is, on the whole, very troubling. I think people feel very fatalistic, as though whatever can happen will happen. You know, I bet if you go back after WW2 you'll find articles that say nuclear war between nations is inevitable – it hasn't happened yet, because people spent a lot of time thinking about an international governance framework. After Dolly the sheep got cloned, it was 'inevitable' that we were going to clone human beings – that hasn't happened yet. I think there is this natural tendency to assume that anything that is scientifically possible will be adopted by society; I'm not so sure about that. We can trust our societies to be wise about these things. I don't think these are deeply thought out arguments.

BlueSci: I agree, I think there is an attitude problem. Why aren't we stepping in to change this when we think it's inevitable?

I think a lot of folks were very uncomfortable with this [human germline editing]; I think it's proven difficult to figure out how to voice that. In some sense, the events in Hong Kong presented an opportunity to revive that question. Many of us who were uncomfortable said this is a good time to bring this point up again, while we have the world's attention focused on the question.

BlueSci: In your opinion, what can the individual researcher – the graduates, the post-docs who work on these techniques every day – do to help effect change in a good way?

Talk about it: simply recognising that as scientists we want to advance the frontiers of science, but we want to do so to benefit society, and that younger scientists in particular should take it as part of the job to discuss and raise the ethical questions and express opinions. We're not all gonna agree, but we should all feel like we are part of the conversation in a very responsible way. I believe the world will be better because scientists will engage in responsibility for our collective actions. We don't agree about many things in science, but we can make progress toward common ground. I really like the idea that folks are grappling with hard questions. Many people need to get into the practice of talking about these issues; scientists often feel like it's not their place to do that. If it's not their place, whose is it? Scientists aren't just scientists: we are parents, and carers, and spouses. We have to play our role; we bring special expertise and have to make sure to contribute it and to listen.


Scientists now had a site-specific and accurate method to induce changes in DNA, which promised to be simpler to implement and did not incur the time, expense and difficulty of manufacturing ZFNs or TALENs. Unsurprisingly, it didn't take long for this system to be adapted for human genomes: by 2013, the lab of Feng Zhang reported that CRISPR was working in mammalian cells, igniting an explosive increase in its usage to answer a huge variety of biological questions. Nowadays, CRISPR is a commonplace technology utilised in many laboratories, for purposes ranging from editing genes to help understand the role of specific proteins in the biology of the cell, to engineering disease models in mice to better understand disease. CRISPR has rapidly remodelled the research landscape and changed the way we approach biology for the foreseeable future.

The dark side of genome editing... or is it a bright light?

Every new advancement in genome engineering is accompanied by a debate around safety and ethics. With the advent of CRISPR, it rapidly became apparent that this technology could be used to edit human embryos with much greater ease than previous technologies. While the efficiency of editing with CRISPR is increasing, there remains a possibility of 'off-target' effects, a technical term for unintended changes to the genome outside of the targeted region. Some of these changes will do nothing; others may affect genes that influence cancer or other diseases, potentially with devastating consequences. As a consequence, the use of CRISPR in a clinical setting has, from a technical point of view, been subject to appropriate and highly necessary criticism. Beyond this, many people question the ethical and moral implications for the concept of our species and its evolution if we begin to edit the heritable genome as we desire. At the extremes, there remains the anxiety surrounding 'designer' babies - offspring whose genomes have been engineered prior to gestation in order for the offspring to exhibit certain desirable traits. The scientific community largely agrees that the prospect of using germline editing to introduce cosmetic changes is still far off. A much more likely usage of germline genome editing is the correction of genetic disorders determined by mutation of a single gene, so-called 'genetic correction'. Disorders such as severe combined immunodeficiency and epidermolysis bullosa, a rare and severe skin disorder, are perfect examples of diseases which seem appropriate for clinical approval, as replacing the broken copy of a gene with a functional copy would completely change the quality of life of the affected individual. When CRISPR-edited human embryos with the potential for transfer into the womb were first reported in 2015, the First International Summit on Human Genome Editing was called. The major conclusion of this meeting, attended by experts from across the world, was that it would be irresponsible to proceed with human genome editing until, at the least, safety and efficacy had been demonstrated, and only used in cases where gene
therapy had been deemed sufficiently appropriate and necessary. Many deemed this statement a clear and unambiguous scientific consensus against the usage of these technologies in a clinical setting for the time being. Fast-forward three years, and a Chinese researcher named He Jiankui nervously approached the podium at the Second International Summit on Human Genome Editing. To a horrified audience, He revealed that his lab had edited the DNA of twin babies born late last year, permanently changing the genome of these children and their offspring to come. In doing so, He breached a myriad of ethical, moral and scientific norms surrounding human germline editing, reinvigorating the debate on human genome engineering, this time with a much more acute sense of urgency. In the field of genetic engineering, a distinction is often drawn between genetic correction and genetic enhancement, and this distinction often enters ethical discussions about the appropriateness of the use of gene editing technologies. Many scientists would agree that correction is the more ethically appropriate of the two. However, He aimed to make the twins resistant to HIV by disabling a gene called CCR5, which is important for HIV to gain entry into cells. Since there was no particular likelihood of the twins contracting HIV, despite their HIV-positive father, in the terms described above He's intervention is clearly genetic enhancement. The triviality of the genetic changes in the case of the twins demonstrates why extreme caution should be exercised in utilising these technologies. CRISPR is not 100% efficient, and consequently one of the twins still retains one functional copy of the CCR5 gene, and will therefore not be immune to HIV. Furthermore, while individuals who lack CCR5 do demonstrate resistance to HIV, some studies suggest its deletion may actually increase the likelihood of particular varieties of cancer, as well as infections by tick-borne diseases and West Nile virus. It has also been demonstrated that the naturally occurring CCR5 variant provides protection against the most common strain of HIV but not against the less common X4 strain. Those who attended the International Summit remain unconvinced that, even if He successfully disabled the CCR5 gene, there were no off-target effects. Therefore, while the changes in the genome of the individuals concerned may protect them from infection to an extent, most experts agree that the risks far outweigh the potential benefits in this scenario.

Moving on from CRISPR babies – a future for germline editing?

He's work crossed numerous ethical and legal lines. Following dismissal from his academic post, He likely faces criminal charges for forging ethical approval and for allowing HIV-positive individuals to use reproductive technologies, which is against the law in China.


A short timeline of CRISPR

2003: Discovery of CRISPR by Francisco Mojica
2003-2005: Function of CRISPR as a form of immune memory suggested by Mojica and Vergnaud
2005: CRISPR, the Cas9 nuclease and the essential PAM sequence discovered by Bolotin
2006: CRISPR cascade as a bacterial immune system hypothesised by Koonin
2007: Function of CRISPR as an adaptive immune system experimentally shown by Horvath
2008-2012: Mechanism of CRISPR further elucidated by van der Oost, Marraffini and Sontheimer, and Moineau
2011: CRISPR engineered to be used across species by Siksnys, Charpentier and Doudna
2013: First CRISPR-mediated genome editing in mammalian cells by Zhang and Church
2018: Report of the first CRISPR-mediated human germline gene editing by Jiankui


What will happen in the wake of the ongoing national investigation is still unclear. However, there is a broad consensus that the Chinese government bears responsibility for the close monitoring and welfare of the children born from this treatment (allegedly, a third is on the way). Indeed, China has engaged in public consultation on new draft legislation regarding genome editing. More striking, however, is the international discussion provoked by the work presented in Hong Kong. Amongst the most notable was a call by Dr Eric Lander and colleagues for a five-year moratorium on germline editing, to allow time for an international regulatory framework to be developed. Lander et al. propose that, following the moratorium, countries should commit to a three-step review process before any clinical usage of genome engineering. First, a country should distribute a public notice of intent to use human germline editing, to allow for discussion of the pros and cons of doing so. Second, countries must engage in rigorous international consultation to satisfy the technical, scientific and medical concerns surrounding use of the treatment, as well as justifying the societal, ethical and moral concerns. Lastly, countries must reach a broader societal consensus within the country in which the treatment will take place on whether the application should proceed at all. This moratorium and model framework have been backed by a number of high-profile bodies, such as the National Institutes of Health, and have been the subject of recent WHO meetings, the latter reinforcing a halt on editing as well as the introduction of a registry and the canvassing of societal views on editing. Germline editing is currently illegal in over 30 countries. In many of these countries, however, approval has been awarded for the use of CRISPR technologies for corrective gene therapy in somatic cells, such as those in adult tissues. In 2016, GlaxoSmithKline won ethical approval for Strimvelis, a touted cure for severe combined immunodeficiency due to a mutation in the gene for a protein called adenosine deaminase (ADA). Patients with this devastating and rare disease have severely impaired immune systems and cannot fight off infection. They must remain confined to a sterile environment, resulting in a poor quality of life and low life expectancy. In this treatment, the stem cells which give rise to important immune cells are extracted from the blood. Gene editing is then used to correct the broken copy of the ADA gene. When these cells are returned, the patient can then produce functional immune cells that retain the repaired gene for the rest of their life. While this treatment is only effective for a small number of patients, it represents a safe, ethical, and truly life-changing use of this technology. More recent reports detail that the first trials of corrective gene therapy for treatment of
two types of sarcoma and multiple myeloma, cancers of connective tissue and the blood respectively, have gained ethical approval in the US. Routine corrective gene therapy is likely fast approaching, and germline gene editing is, in the opinion of many experts, inevitable. It is clear that Pandora's box is already open; the question is whether we want to try to shut it again. If not, how do we see germline editing becoming a part of our world? What does this mean for our experience as a species?

Andrew Malcolm, Dominic Hall and June Park are 1st year PhD students at the Wellcome-MRC Stem Cell Institute. Artwork by Evan Hamilton. IG: @youarelostbecareful




De-extinction and CRISPR conservation

Adam Searle explores the science behind and social implications of bringing animals back from extinction.

Recent studies from the Church lab report breakthrough editing of over 13,000 individual positions in the genome of a single cell using CRISPR technology. This revolutionary approach sets the stage for mass editing of the genetic code of organisms alive and dead.


What if extinction did not last forever? Would that be a good thing? Is it scientifically possible? Since the turn of the millennium, these questions have been translated from the speculative world of science fiction into the world of science itself. De-extinction, sometimes referred to as resurrection biology, is the revival of extinct species through cloning. Extinction, as we have come to understand it, is forever. However, recent advances in genetic and cryogenic technologies have made the sequencing and cloning of extinct wildlife possible. The genomes of extinct animals found in the curated remains of museum collections, frozen in the tundra, or cryogenically preserved in genomic reserves may be about to experience a life after death. Conservation efforts battle the current alarming rate of extinction. But are they succeeding? As Peter Kareiva and his colleagues at The Breakthrough Institute have sharply remarked, if conservation is doing nothing but slowing down the rate of extinction, we are failing. Amid global environmental degradation, de-extinction has the potential to profoundly influence the objectives and rationalities of contemporary wildlife conservation. How does it, or might it, work? Synthetic biology and cloning technologies have
developed exponentially since the birth of Dolly the Sheep in 1996, giving rise to novel technologies enabling the addition, deletion, or alteration of single nucleotides: the building blocks of all DNA. More recently, multiplex automated genome engineering has up-scaled genetic engineering to allow the modification of entire genomes. These approaches to 'genomic editing' often utilise the tool CRISPR/Cas9, which for the past decade or so has been generating a tremendous buzz in the scientific world and popular media. CRISPR, which stands for clustered regularly interspaced short palindromic repeats, is a family of DNA sequences used by prokaryotes such as bacteria and archaea. Cas9 is an enzyme associated with CRISPR sequences in many prokaryotic organisms, used to memorise, identify, and cleave foreign DNA. The enzyme carries a sequence of guide RNA (gRNA) that recognises a predefined sequence of DNA, allowing it to work incredibly accurately and to unwind and cut DNA at precisely defined sites. Due to its incredible specificity, CRISPR/Cas9 is utilised by synthetic biologists to modify genetic code and introduce mutations. It is simple, relatively cheap, and its potential applications are staggering.
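To make this targeting step concrete, here is a minimal sketch in Python (an illustration only, not drawn from any real guide-design software; the sequences and the function name are made up) of how one might scan DNA for sites where a 20-base guide matches and is followed by the NGG 'PAM' motif that the commonly used S. pyogenes Cas9 requires next to its cut site:

def find_cas9_sites(genome: str, guide: str) -> list:
    """Return positions where `guide` matches the DNA and is followed
    by an NGG PAM. A toy illustration only: real guide design must
    also weigh near-matches elsewhere in the genome."""
    sites = []
    n = len(guide)
    for i in range(len(genome) - n - 2):
        pam = genome[i + n:i + n + 3]   # the three bases after the match
        if genome[i:i + n] == guide and pam[1:] == "GG":
            sites.append(i)
    return sites

# Hypothetical 20-base target inside a made-up stretch of DNA,
# followed by 'AGG' (one instance of the NGG PAM).
genome = "ATGCGTACGTTGACCTGAAGTCGGTACCTTAAGG"
guide = "GACCTGAAGTCGGTACCTTA"
print(find_cas9_sites(genome, guide))   # -> [11]

Real guide design is more demanding than this sketch suggests: near-matches elsewhere in the genome are precisely the potential off-target sites that make clinical applications contentious.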



Ancient DNA (aDNA) has the potential to be reintroduced into the living through techniques such as CRISPR. Hypothetically, the defining features of an extinct taxon's genome could be edited into those of a similar extant taxon, which may then be used to de-extinct an animal. George Church, a Harvard geneticist, is exploring just that. His fascinating work investigates editing the Asian elephant's genetic code to introduce genotypic traits from the mammoth's genome. What follows is purely speculative: is the creature born of this CRISPR intervention a mammoth? Or a hairy elephant? We can't know for sure. Besides, our understanding of the link between genotype and phenotype is limited at best, and understanding this relationship in the context of CRISPR chimeras is exceedingly difficult. George Church's lab most probably won't resurrect the mammoth in its entirety, but it may be able to reintroduce extinct functional traits tied to ecological and evolutionary niches, even in the absence of empirical ethological and behavioural data. As described by Beth Shapiro in Genome Biology (2015), "Church's team replaced 14 loci in the elephant genome with the mammoth version of those sequences. Although they have not yet created a mammoth, their success blurred the already fuzzy line that separates science from science fiction, bolstering hopes (and fears) that de-extinction, the resurrection of extinct species, may soon be reality."

The viability of de-extinction is heavily dependent on the nature of the taxon's extinction. With headline-grabbing projects like the mammoth, acts of genomic 'tweaking' are contingent on numerous factors, such as DNA well enough preserved to provide a complete genomic reading. Methods in aDNA interpretation are developing at a rapid rate; the possibility of isolating protein fragments from dinosaur fossils was recently reported in Science. However, myriad obstacles stand between genomic sequencing and synthesis, and de-extinction. Jurassic Park will never happen, for good or bad, because there is no way of turning that genome into a living animal. With the mammoth, though, it's another matter. The gestation periods, physiologies, and genotypes of elephants and mammoths are feasibly similar enough to make surrogate impregnation possible. The process of producing a mammoth-edited elephant genome would not by any means be easy, and would require over a million nucleotide changes; however, it is not practically impossible. For a mammoth to be born, the next step would be an interspecies nuclear transfer to a surrogate elephant. Yet somewhat unsurprisingly, cell transfer within the elephant species has not been performed successfully to date, and such practice with endangered hosts raises a plethora of ethical concerns.


As we enter an epoch overwhelmingly defined by loss, the onset of contemporary mass extinction has stimulated pre-emptive genetic preservation projects such as the Frozen Zoo in San Diego. This is another route to de-extinction, applicable only to the recently extinct. In the Frozen Zoo and similar projects, the genetic material of taxa on the verge of extinction is subject to cryopreservation, a suspended animation of their extinction. Kept in liquid nitrogen, these genetic libraries function similarly to seed bank projects, acting as a last reserve. To date, only one animal has been outlived by its own living material: the Pyrenean ibex, or bucardo. The bucardo is the only animal scientists have genuinely attempted to de-extinct. In 2003, three years after the last bucardo died in the Spanish Pyrenees, a clone was born via surrogate pregnancy in a laboratory. Sadly, the clone died after just seven minutes due to a severe lung defect. Despite those seven short minutes, George Church declared this day "a turning point in the history of biology", for "all at once, extinction was no longer forever".

The strongest case for de-extinction lies in its ability to 'right the wrongs' of a species' extinction, to account for the environmental injustices of humanity, particularly as part of ecological restoration approaches designed to augment diversity. Still, it would be unethical to bring a being into the world unless its survival was absolutely certain. Would one woolly mammoth clone mean that a species has been revived, without the intraspecific variation required to sustain a healthy population? The welfare of these animals must be ensured before, rather than after, the de-extinction process. On the other hand, the possibility of de-extinction may be a genuine cause for concern in conservation. Couldn't the sudden availability of a 'reset button' engender ambivalence or a blasé attitude towards our current ecological catastrophe? Would it not perpetuate a conservative 'business-as-usual' scenario in the face of environmental destruction? Perhaps only time will tell. If de-extinction is to proceed, nevertheless, it should be with the utmost caution and continuous interdisciplinary scrutiny. It is an exciting and attractive movement within the scientific community, drawing significant speculation from outside. Just as media and popular representations of de-extinction should pay attention to the science, it is of grave importance that de-extinction does not fall out of contact with the 'nature' with which it engages outside the laboratory.

Adam Searle is a 2nd year PhD student in the Department of Geography and at the Institut de Ciencia Tecnologia Ambientals at the Universitat Autonoma de Barcelona. Art by Priti Mohandas. IG: @priti.moh




The first scientist was a woman Rosamund Powell recounts the history of women in science and the struggles faced then and today

While authors such as Shelley were incorporating scientific ideas into fictional tales, other women were incorporating fiction into scientific books in a bid to make them more engaging. This represented one way for women to access the world of science.


Given the current lack of diversity in the sciences, it seems unfathomable that the first person to be described as a 'scientist' was a woman. But she was. The word “scientist” was first used in 1834 in Whewell’s anonymous review of Mary Somerville’s On the Connexion of the Physical Sciences. It took some time to become generally accepted and replace the alternatives, such as “man of science” which pervaded language at the time. Its acceptance was slowed further as its first use, to describe Mary Somerville, adopted a jokey tone and many thought the word to be ridiculous and awkward to say. Nevertheless, the word eventually spread, and the first “scientist” is certainly recognised as such by modern standards with Somerville representing just one striking example of a woman succeeding in the sciences in the 19th century. There are a number of other female success stories in this same period whose contributions are often overlooked. Frankenstein had been published just nine years before Mary Somerville’s publication, and the text clearly demonstrates Mary Shelley’s scientific knowledge. Frankenstein is of course fictional, but it is also packed with scientific ideas from the period. Electricity captured the imagination of scientists across Europe and experiments were carried out on cadavers to the fascination of the public. Science was a fashionable topic and would have been debated frequently within Shelley’s intellectual circles. It is therefore unsurprising that such themes enter so heavily into her work. In fact, a significant quantity of scientific detail was cut out of Frankenstein before it ever hit the shelves. The new genre of science fiction was therefore born out of a woman’s ability to marry ideas from literature and the sciences. The relationship between women, science and fiction in the 19th century goes further, as described by Patricia Fara in Pandora’s Breeches. While authors such as Shelley were incorporating scientific ideas into fictional tales, other women were incorporating fiction into scientific books in a bid to make them more engaging. This represented one way for women to access the world of science, whose institutions
seemed impenetrable. For example, Priscilla Wakefield wrote An Introduction to Botany in 1816, and Jane Marcet explored a more traditionally male-dominated conversation by writing Conversations on Chemistry. These books allowed women to explore scientific subjects in a way which was accessible to them. But this was not possible for all women. Many of these women ran in the same circles and were married or related to elite scientific men of the time. These relationships were critical in allowing them access to the world of science. Mary Somerville, the first “scientist”, was part of this group of women. She translated Laplace's Celestial Mechanics, but went further to explain many of the equations in order to make the complexities of French physics more accessible to her audiences. This translation was widely used as a textbook and was studied by University of Cambridge students until the 1880s. The magnitude of her achievements was eventually acknowledged when she and Caroline Herschel became the first female members of the Royal Astronomical Society in 1835. These successes do, however, represent a minority. Most women were not able to contribute to the sciences in the 19th century. It is striking how many scientists of the past, both men and women, were from the upper classes. Women in these circles could access essential private tutoring.


Even when women did succeed, their achievements were seldom recognised. For example, prior to becoming a member of the Royal Astronomical Society, Caroline Herschel's discoveries earned her awards, such as the gold medal from the Royal Society, given simply for 'assisting' her brother William in his observations. This reflects a common pattern whereby the contributions of women were eclipsed by the achievements of their brothers and husbands. This makes it hugely challenging to accurately assess the history of science and determine the extent to which women themselves contributed. Of course, huge strides have been made since the 19th century, but there is still a remarkably long way to go. Even now, only 22% of the core STEM workforce in the UK are women, and only three women have ever won the Nobel prize in physics. As recently as 1974, Dame Jocelyn Bell Burnell, co-discoverer of radio pulsars, was excluded from the Nobel prize, which was given instead to her male colleagues. A leaky pipeline continues to mean that more girls and women are turned away from the STEM fields at every stage of their education, leaving very few women at the tops of their fields in science. And it is not just women who lose out. Children from underprivileged parts of the country are also massively under-represented in science and tend to drop school science subjects sooner. Overcoming these gender and class gaps is critical to success in the sciences. Increasing diversity won't just increase the number of discoveries made, but is also essential to objectivity in the sciences. A lack of diversity shields certain theories from criticism and can undermine scientific credibility. Furthermore, a more diverse scientific community would have a better chance of addressing questions relevant to all members of society. The good news is that serious efforts are being made to tackle the problem. As pioneering women in STEM become increasingly visible at the top of their fields, there are role models others can look up to. For example, Dr Katie Bouman played a prominent role in developing the algorithm used to photograph a black hole for the first time in April, and has quickly become an inspiration to millions of aspiring women in STEM. It remains crucial to apply pressure on those at the top of their fields to address the imbalance. The number of women in STEM is increasing thanks to efforts such as the Athena SWAN initiative, which presses universities to further the careers of women in science. Efforts in Cambridge such as CamAWiSE and the Cambridge University Women in Science society can also help to raise the profile of this problem, and with time the gender balance in the STEM workforce will hopefully be redressed.

Rosamund Powell is a 2nd year Natural Sciences student at Jesus College and co-president of the Cambridge Women in Science society (CamWIS). Images courtesy of Wellcome Collection.




In the search for a cure: iPSCs as a hope to find treatments for Marfan syndrome Debbie Ho explores recent advances in modelling Marfan syndrome using induced pluripotent stem cells

By adding specific chemicals at defined timings, researchers can instruct the iPSCs to turn into smooth muscle cells, the cells in your major blood vessels which contract or relax to control the diameter of the vessels to regulate blood flow


Austin Carlile is no stranger to living with a chronic disease. As the former lead singer of Of Mice & Men, he had a promising career in music. However, while pursuing his ambitions in his early twenties, he began battling Marfan syndrome and had to withdraw due to ongoing medical problems. Carlile is not alone: Marfan syndrome affects 1 in 5,000 people, and symptoms can range from mild to life-threatening. Marfan syndrome is caused by a mutation in the gene which encodes the protein fibrillin-1. Fibrillin-1 is a major part of the extracellular matrix in the aorta, the major artery carrying blood from the heart. Just as cement holds bricks together, the extracellular matrix is the glue and scaffold which holds our cells together. It is therefore no surprise that the aorta is weaker in patients with Marfan syndrome. Under high blood pressure, aortic aneurysms may develop and the aorta may rupture, which can lead to fatal internal bleeding. At present, the treatment available to prevent fatal aortic aneurysms consists of regular medical check-ups and, if required, open-heart surgery to replace the injured part of the aorta. Since surgery is invasive and carries the risk of post-operation complications, it would be desirable for patients to be able to manage their symptoms with oral medicines instead. In the search for a drug treatment, scientists have focussed on using mouse models of Marfan syndrome. These mice carry mutations in the fibrillin-1 gene and have features of Marfan syndrome. Based on these models, researchers found that treatment with the drug losartan alleviated the condition in mice. These findings gave researchers the confidence to test the drug in clinical trials. Unfortunately, the drug was found to be ineffective in humans. This may be due to the fact that humans and mice respond differently to losartan.


To overcome this, it would be advantageous to use human cells as a disease model, to account for any species-specific differences. In addition, it is possible that the dosage at which losartan was used in the trial was low enough to be safe and avoid unwanted side-effects, but perhaps too low to be effective. Maybe there are other drugs that can also treat the disease and can be combined with losartan to improve therapy. To address these challenges, a research team at the University of Cambridge Stem Cell Institute, led by consultant cardiologist Dr Sanjay Sinha, is the first to use patients' cells to create smooth muscle cells which mimic those in the aorta. First, skin cells from patients are "reprogrammed" into induced pluripotent stem cells (iPSCs). Next, by adding specific chemicals at defined timings, researchers can instruct the iPSCs to turn into smooth muscle cells, the cells in your major blood vessels which
contract or relax to control the diameter of the vessels and so regulate blood flow. Since Marfan syndrome is caused by a mutation, these iPSC-derived smooth muscle cells also carry the same mutation, and show the features of aortic aneurysms when grown in a plastic dish. These disease features are important benchmarks for researchers to use when judging whether a drug is effective. Two key characteristics of Marfan syndrome in the aorta are the degradation of the extracellular matrix and the death of smooth muscle cells. When the iPSC-derived Marfan syndrome smooth muscle cells were treated with losartan, researchers found a reduced extent of extracellular matrix degradation. However, losartan had little or no effect on smooth muscle cell death. This suggested that losartan was useful to treat some but not all aspects of the disease, which may explain why clinical trials using losartan have not been largely successful.

By doing further tests, the team found an unexpected role for a type of drug called p38 inhibitors in reducing smooth muscle cell death in Marfan syndrome. This suggested that p38 inhibitors could be used as a novel therapeutic approach for aortic aneurysms. Furthermore, since losartan and p38 inhibitors act on different aspects of Marfan syndrome, combining the two may prove to be more effective in future clinical trials. It is possible that additional drugs may be able to treat other aspects of the disease, and could also be added to the combinatorial therapy. Further research using this iPSC-derived disease model will be key to discovering these candidate drugs.

Marfan syndrome is one among numerous diseases which can be modelled with iPSC-derived cells. This is because iPSCs can be turned into any other adult cell type in the body, including but not limited to cells found in the brain, heart, and gastrointestinal tract. In fact, research laboratories across the globe have been successful at modelling a myriad of diseases, such as familial dysautonomia, familial hypertrophic cardiomyopathy, and polycystic liver disease. iPSCs are proving to be a promising tool for drug discovery which is relevant to human biology, and may usher in a new era of breakthroughs to establish novel treatments for currently incurable illnesses.


Debbie Ho is a first-year PhD student at the Wellcome-MRC Stem Cell Institute. Image of smooth muscle cells: Prof Giorgio Gabella, CC BY. Image of aorta: Henry Gray, CC BY.




From Sol Spiegelman to the 2018 Chemistry Nobel Prize: Evolving Frankenstein's Monsters Lemuel Szeto explores the chemistry behind directed evolution, and how we can use it to birth useful monsters

Directed evolution is also used in artificial intelligence research. Evolutionary algorithms run many candidate solutions to some optimisation problem, and in each simulated 'life cycle' only the 'fittest', the best performers, are carried forward, and their traits mixed or 'mutated'
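A minimal version of such an evolutionary algorithm can be written in a few lines of Python. The sketch below is a generic toy, not any particular research system: it evolves a population of candidate numbers towards the maximum of a simple fitness function, carrying the fittest half forward each generation and refilling the population by mixing and mutating their traits:

import random

def evolve(fitness, pop_size=50, generations=200):
    # start from random candidate solutions
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: only the fittest half is carried forward
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # crossover and mutation: mix two survivors, then perturb
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append((a + b) / 2 + random.gauss(0, 0.1))
        pop = survivors + children
    return max(pop, key=fitness)

# toy optimisation problem whose best answer is x = 3
print(evolve(lambda x: -(x - 3) ** 2))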


Evolution is not unique to biology: in fact, it is deeply rooted in chemistry. In 1965, this was elegantly demonstrated in a chemical system by the American biologist Sol Spiegelman. Spiegelman introduced a viral RNA genome, 4,500 bases in length, into a solution containing a viral enzyme called RNA polymerase, which replicated the RNA in the presence of excess nucleotide building blocks. He then transferred the replicated RNA into another identical solution for further replication. After repeating this procedure 73 times, with the period between successive transfers decreasing, Spiegelman discovered that the remaining RNA was only 218 bases long: just 5% of the original 4,500 bases. How could this be? This shortened RNA, which Spiegelman named The Monster, was found to be so efficiently replicated by the polymerase that the rate could hardly be improved. The 218-base sequence was predicted to contain nine hairpin loops, formed via Watson-Crick base pairs, that proved essential for polymerase recognition. Spiegelman knew that the polymerase was imperfect, and that replication errors would result in truncation. These truncated RNAs, if in a viral genome, would be crippling, as essential genes for viral protein coats would be missing; however, this was inconsequential in an environment where the RNAs were relieved of such protein expression responsibilities. In addition, Spiegelman gave each generation very little time to be replicated. He deduced that in response to this selective pressure, dispensable sequences would be eliminated whilst essential recognition sequences would be retained, giving rise to shorter and more streamlined replicative 'machines' in a process analogous to natural evolution.

Spiegelman's RNA was not the first 'monstrosity' to have been created by artificial selection, for humans had been selectively breeding livestock, crops, and pets for centuries. However, this marked the first instance where molecules, instead of organisms, were shown to be evolvable. In the following years, numerous experiments on RNA and its cousin DNA further demonstrated their potential for the in vitro evolution of desirable characteristics, and generated hordes of 'monstrosities', often with novel functions and properties. In the late 1990s, a 15-base DNA molecule was evolved in vitro to bind an unusual target: the blood-clotting protein thrombin. Such binders, which exhibit high affinity for specific targets, are called aptamers, and there is clinical potential for the thrombin-binding aptamer to act as an anticoagulant. In addition to binding unconventional protein targets, aptamers were also evolved to bind heavy
metals and small molecules. Upon binding, most aptamers can produce quantifiable signals, and therefore have the potential to act as biosensors. In the present day, aptamer-drug conjugates are also being developed for use as highly specific drug delivery agents: selective binding to proteins exhibited only on the surface of cancer cells stimulates aptamer uptake by the cell and subsequent drug release. Recent technological innovations, such as the incorporation of chemically modified nucleotides and next-generation sequencing methods, have expanded the scale of explored sequences and accelerated the aptamer evolution process to unprecedented levels.

Since the 1980s, potential for in vitro evolution has been found in another major class of polymeric biomolecules: proteins. A protein's functionality is determined by its structure, and its structure is primarily determined by the underlying sequence of 20 amino acid building blocks. The amino acids polymerise by forming peptide bonds and are coded for by the gene's DNA sequence. Unlike DNA, which heavily utilises base pairing, amino acids do not pair, and thus their intrinsic flexibility yields an astronomical number of possible structures and conformations. The biologist Cyrus Levinthal once approximated that each of the two peptide torsional angles, parameters that describe the rotation of the polymeric backbone about certain bonds, can generally take three values, producing nine possible conformations for each peptide bond. For a given sequence of 100 amino acids, which corresponds to the size of a small protein, this gives 9^99 ≈ 10^94 possible structures, only a fraction of which are functional. In addition, 20 amino acids can be incorporated at each position, in contrast to only four bases for DNA, producing 20^100 ≈ 10^130 possible sequences (each of which may adopt unique structures). Rational design of functional proteins from the sequence level was declared near impossible, and still is today, despite modern computational power.
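Written out explicitly, Levinthal's two counts for a 100-residue chain (99 peptide bonds at roughly nine conformations each, and 20 possible residues at each of the 100 positions) are:

\[
9^{99} \approx 10^{94} \ \text{possible structures},
\qquad
20^{100} \approx 10^{130} \ \text{possible sequences}.
\]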


Thirty years ago, the traditional synthesis of most everyday chemicals, such as polymers or pharmaceuticals, usually required harsh or toxic conditions. Engineer Frances Arnold proposed a greener alternative: enzymes that could catalyse similar reactions with high efficiency and therefore synthesise the same products. The problem was that these enzymes did not exist in nature, and it was impossible to design new, functional enzymes from scratch. In 1993, Arnold had a stroke of genius, conjuring a solution in the form of in vitro evolution. She tested her idea on subtilisin, an enzyme that cleaves the milk protein casein and naturally works in water but not in organic solvents such as dimethylformamide (DMF). Starting off with a subtilisin sequence in which there were four single mutations, Arnold introduced further random mutations, generating tens of thousands of mutants. The mutants were then allowed to cleave casein in the presence of 35% DMF. The sequences of the mutants that cleaved casein more effectively than the original were isolated and subjected to three further rounds of mutation and screening, with the DMF concentration increased in each. The final screen revealed a variant with six additional mutations that exhibited 256-fold higher activity than the original enzyme in 60% DMF. This was a remarkable feat, considering that subtilisin contains 381 amino acids. Even for a small protein containing 100 amino acids, there would be 1,900 possible single mutants and more than 1.7 million possible double mutants (C(100,2) × 19² = 4,950 × 361). It would have been impossible to predict in silico which precise mutations would lead to the optimised subtilisin. The door to protein in vitro evolution was opened, and the scientific community was stunned. Since then, Arnold has managed to evolve enzymes that have replaced numerous conventional catalysts in industrial and pharmaceutical use, from drug synthesis to polymer and biofuel production.

In parallel to Arnold, another related technique called phage display was being developed by the American biologist George Smith and adapted by Cambridge's Gregory Winter. The duo was primarily interested in antibodies, proteins secreted by immune cells that can recognise and bind molecules called antigens in a highly specific manner. Numerous diseases are associated with malfunctioning proteins, and scientists had long been hopeful for therapeutic antibodies to target them. However, producing these therapeutic antibodies was simpler in theory than in practice. Antibodies are conventionally harvested from animals following the injection of antigens; however, many antigens were either toxic or failed to elicit a sufficiently strong immune response. Additionally, animal antibodies could not be administered to humans, as the human immune system would recognise them as foreign and mount an immune response. The complexity of protein structures made designing therapeutic antibodies from scratch impossible. Smith realised that for effective selection of proteins, a strong physical connection must be established between the DNA sequence and the protein's functionality. He found an appropriate 'coupler' in the form of bacteriophages (phages), viruses that infect and replicate in bacteria. In 1985, Smith demonstrated that by joining protein-coding sequences to genes encoding phage coat proteins, the phages expressed and displayed the encoded proteins externally. By forming a library from millions of phages, each containing different sequences, selection could be performed on the population of displayed proteins. In 1990, Winter took this a step further by displaying millions of random single-chain variable fragment (scFv) sequences, which encode the antibody fragments responsible for binding specificity. When the phages were exposed to immobilised hen lysozyme, some bound lysozyme more tightly than others; the phages that bound weakly, or not at all, were removed upon washing. Genome sequences of the phage binders were isolated, randomly mutated, and subjected to another round of lysozyme binding. After several repeated cycles, the remaining phages all contained scFv with high specificity for hen lysozyme but not lysozyme from humans or other birds.
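The round-by-round logic of these experiments - generate a mutant library, screen it under a demanding condition, carry the best performer forward, repeat - is the same whether the screen is casein cleavage in DMF or binding to an immobilised antigen. The following Python sketch is a toy model of that loop; the scoring function is a made-up stand-in for a real assay, not a simulation of enzyme chemistry or phage biology:

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def mutate(seq):
    # one random substitution, standing in for error-prone mutagenesis
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]

def evolve(parent, score, rounds=4, library_size=1000):
    # mutate-screen-select: each round keeps the best-scoring variant
    for _ in range(rounds):
        library = [mutate(parent) for _ in range(library_size)]
        parent = max(library + [parent], key=score)
    return parent

# Toy screen: similarity to a hidden 'ideal' sequence plays the role
# of the real assay (casein cleavage, or binding to an antigen).
random.seed(1)
ideal = "".join(random.choice(AMINO_ACIDS) for _ in range(100))
score = lambda s: sum(a == b for a, b in zip(s, ideal))
parent = "A" * 100
best = evolve(parent, score)
print(score(parent), score(best))  # fitness before vs after selection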
Winter had successfully evolved the first fully functional antibody fragment. Although these scFv fragments overall were still generally restricted by human immunological tolerance, the small loop-like regions within the scFv that made physical contact with the antigen did not trigger an immune response. These

regions, called complementarity-determining regions (CDRs), could simply be joined with the rest of the human antibody scaffold to yield chimeric yet truly 'humanised' antibodies. The significance of these experiments was immediately realised. Phage display could utilise any antigen in the selection procedure, allowing the evolution of antibodies with novel characteristics, such as the ability to recognise unnatural targets. In the years that followed, Smith and Winter, along with many others, adapted and finetuned the technology. One of the very first therapeutic human antibodies, adalimumab, made its grand entrance in 1994 and was approved for clinical use in 2002. Adalimumab binds and inhibits a pro-inflammatory protein called tumour necrosis factor-alpha (TNF-alpha), serving as an effective treatment for rheumatoid arthritis. As of 2017, phage display has produced more than 60 antibodies and antibody-based drugs, many of which are currently in clinical trials for the treatment of various autoimmune diseases and cancers. From 2012 to 2016, adalimumab alone has generated a global revenue of more than USD 16 billion, the highest ever recorded for any biopharmaceutical. The field of molecular in vitro evolution made the leap from evolving simple RNA replicators to ambitious therapeutic antibodies in a span of five decades, eventually culminating in a 2018 Chemistry Nobel being awarded to Arnold, Smith, and Winter for their pioneering contributions. The Nobel seems to be a fitting conclusion, but for the field this is just the beginning, with vast applications to real-world problems still waiting to be explored. Frankenstein’s Monster may be fictional, but Spiegelman’s Monster, and its descendants that come in all shapes and sizes, are very real. We have barely scratched the surface of how many ‘monstrosities’ we can make Lemuel Szeto is a 2nd year Natural Sciences student at Fitzwilliam College/ Art by Joseph Jones




PAVILION

BlueSci explores the implications of novel biotechnologies such as CRISPR with artist Emilia Tikka

Q: Your recent work Æon and your earlier work Eudaimonia focus largely on the application of CRISPR to near-future enhancement of the human body, the former on the reversal of ageing, the latter on the quest for ultimate happiness through physiological and psychological optimisation. I'm curious how you first became interested in using CRISPR to explore the relationship between synthetic biology and the human condition?

A: I have always been interested in how novel (bio)technologies might re-shape human bodies in both conceptual and physical senses. In recent years I became particularly interested in CRISPR, since the technology poses these biopolitical questions at the level of genetics.

Eudaimonia





Q: The two pieces seem to blur the lines between utopia and dystopia, each telling a different narrative on the same issue. Can you comment on this in the context of today's society, with the recent reports of CRISPR-edited babies announced in Hong Kong and the rapid introduction of CRISPR-driven cell therapies? With your art, do you aim to issue a statement of caution, or do you take a more fatalistic view of how these technologies will be implemented in the ways that you explore?

A: As a critical/speculative designer, my work focuses on contemporary (bio)technologies and the phenomena shaping collective and individual futures. In critical and speculative storytelling, it is important to avoid simplistic dystopia/utopia settings and to aim instead to reveal the complexity of these questions. The idea is not to propose a “solution” or a “warning”, but rather to open up the important underlying societal perspectives and philosophical questions that are often not discussed when it comes to technologies such as CRISPR.

Q: The theme of this issue is to explore the ways in which human beings are subverting the natural world for our own advancement, and to understand the impact this has made. We've explored what it means philosophically for our species to be able to edit our own genome, and the genomes of other species, at will. What is your opinion on what the ability to edit our genes as we desire, and to reverse ageing by genetic reprogramming, means for the concept of humanity? Could you comment on how your work might explore this?

A: I will be researching these questions further in my current PhD project, which focuses on how CRISPR is challenging the philosophical concepts of human genealogy and heredity. My aim is to sketch alternative stories opposing the narrow concept of “optimising”. I am currently very inspired by Donna Haraway's Camille Stories, where she speculates on a “multispecies” future.

Æon

Emilia Tikka is a designer and artist – originally from Finland but based in Berlin. Instagram: @emiliatikka. All photos by Zuzanna Kaluzna.




Weird and Wonderful

Night Vision without the Carrots

NIGHT VISION GOGGLES, and even carrots, may become a thing of the past after a study showed that nanotechnology implanted in the eye allows mice to see infrared light. Infrared radiation, emitted by objects as they give off heat, has a longer wavelength than visible light and so is not normally perceived by mice or humans. Teams at the University of Science and Technology of China and the University of Massachusetts Medical School created nanoparticles that transduce infrared radiation by absorbing it and re-emitting it as shorter-wavelength green light. They injected these into the eyes of mice, where the particles bound to photoreceptor cells via an attached protein. The pupils of these mice constricted in response to infrared light, unlike those of controls, and the mice could distinguish shapes in maze tasks involving both infrared and visible light. The effects lasted for up to 10 weeks with few side effects. The researchers believe that such technology could work in human eyes, ideally delivered via eye drops. They suggest it would interest the military and security industries as a replacement for night vision goggles, which researcher Han notes ‘need batteries’ and are ‘very heavy’, while Xue adds that it could also provide ‘therapeutic solutions in human red colour vision defects’ LM
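Behind this sits a neat piece of physics known as upconversion. A single infrared photon carries less energy than a single green photon (photon energy E = hc/λ falls as wavelength λ grows), so each emitted green photon must pool the energy of more than one absorbed infrared photon. As a rough worked example, taking wavelengths typical of such particles (roughly 980 nm excitation and 535 nm emission; the exact values here are an assumption, not figures from the study):

E_green / E_IR = λ_IR / λ_green ≈ 980 nm / 535 nm ≈ 1.8

so at least two infrared photons must be absorbed for every green photon emitted.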

Hacking into the Coral Reefs

CORAL REEFS, home to a cornucopia of unique species, are severely threatened by anthropogenic climate change. From 2014 to 2017, heatwaves and acidification wiped out half of the corals of Australia's Great Barrier Reef. While traditional conservation biology has mainly addressed external threats to corals, such as water pollution, a number of scientists around the world have adopted an approach from within: the daring idea of rescuing reefs by designing strains resistant to heat and acidity through selection, cross-breeding, and genome manipulation. At the National Sea Simulator near Townsville, Australia, the laboratory of Madeleine van Oppen is crossing coral species that are naturally prevented from interbreeding by their different spawning times. Some of the hybrids have shown promisingly better resistance to heat and acidity than their parent strains. This March, the team introduced such hybrids into the wild and will monitor their survival and growth over the coming months. Although the release of engineered strains into natural ecosystems is still approached with caution by both authorities and researchers, this seemingly radical notion may prove the only viable solution as the true magnitude of the environmental crisis unfolds AE



Laws are made to be Broken

COUNTLESS NEWSPAPERS HAVE run a catchy yet vastly misunderstood story of physicists ‘reversing time’. For all of you digging out your Sports Almanacs, I'm sad to say you're going to be disappointed. So, what did they do? In the quantum world, particles such as electrons are described by wavefunctions, mathematical objects that give the probability of finding the particle in any given place. As time passes, these wavefunctions spread out (the region in which the electron is likely to be found grows), as described by the equations of quantum mechanics. However, like most equations in physics, they work just as well backwards: if you could record the events and play them in reverse, the same equations would describe what you see. No new physics needed! Researchers at the Moscow Institute of Physics and Technology wrote a program for an IBM quantum computer that made a quantum system behave ‘backwards’, as if time were evolving in reverse. This involved forcing two qubits (similar to computer bits, but weirder) back into their original state after their states had spread out. It worked with an 85% success rate, and will hopefully allow error checking within other quantum algorithms and make future quantum computers more precise. Time machines have not been invented; what we have instead is a much more useful step into the future of quantum computing JR
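The ‘rewind’ itself is ordinary linear algebra: quantum evolution is described by unitary matrices, and every unitary has an exact inverse, its conjugate transpose. Below is a minimal numpy sketch of that idea; it is a toy illustration of the principle, not the Moscow group's actual circuit or algorithm.

import numpy as np

rng = np.random.default_rng(42)

# A single qubit starting in the definite state |0>
state0 = np.array([1.0 + 0j, 0.0 + 0j])

# Build a random 2x2 unitary from the QR decomposition of a random complex matrix
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
u, _ = np.linalg.qr(m)

evolved = u @ state0            # 'forward in time': now generally a superposition
rewound = u.conj().T @ evolved  # apply the inverse evolution, U-dagger

print(np.allclose(rewound, state0))  # True: the original state is recovered

On paper the reversal is exact; the hard part on real hardware is applying the inverse operation before noise degrades the qubits, which is why the experiment's 85% success rate is the interesting number.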




