Cambridge University science magazine
FOCUS
Lines of Communication
Lent 2014 Issue 29 www.bluesci.org
Ancient Genomes . Sensation . Water Shortage Teaching Evolution . Virus Sculpture . Mo Costandi
Contents

Features
6   Genomes from Beyond the Grave: Charlotte Houldcroft discusses the search for ancient pathogen genomes
8   Making Sense of the Senses: Toby McMaster explains how the quest to understand hormones changed our appreciation of sensation
10  Bubbles of Trouble: Robin Lamboll explores how different organisms have coped with life in boiling water
12  Finding a Cure for Black Bone Disease: Wing Ying Chow explains how possible treatments for a rare disease might be found
14  Tapping into New Water Sources: Digory Smith discusses the issue of water shortage and the technology employed to meet demand
16  FOCUS, Communication...: BlueSci explores how the natural world communicates, from single cells to the birth of the digital era

Regulars
3   On the Cover
4   News
5   Reviews
22  History: Sophie Harrington revisits the case of the Scopes Monkey Trial
24  Science and Policy: Maria Masarenhas discusses the issues surrounding Pre-implantation Genetic Diagnosis
25  Arts and Science: Matt Dunstan explores the complex interplay between language and numbers
26  Behind the Science: Sarah Smith considers the impact of the founding mothers of Computer Science
28  Perspective: Martha Stokes considers cost and conservation, and why sustainability pays off
28  Pavilion: Ashley Wilson describes how Luke Jerram creates glass sculptures of viruses
29  A Day in the Life: Sergio Lainez Vicente catches up with Guardian science writer Mo Costandi
30  Weird and Wonderful

About Us
BlueSci was established in 2004 to provide a student forum for science communication. As the longest running science magazine in Cambridge, BlueSci publishes the best science writing from across the University each term. We combine high quality writing with stunning images to provide fascinating yet accessible science to everyone. But BlueSci does not stop there. At www.bluesci.org, we have extra articles, regular news stories, podcasts and science films to inform and entertain between print issues. Produced entirely by members of the University, the diversity of expertise and talent combine to produce a unique science experience.
Committee
President: Nicola Love (president@bluesci.co.uk)
Managing Editor: Sarah Smith (managing-editor@bluesci.co.uk)
Secretary: Beth Venus (enquiries@bluesci.co.uk)
Treasurer: Robin Lamboll (membership@bluesci.co.uk)
Film Editors: Alex Fragniere (film@bluesci.co.uk)
Radio: Anand Jagatia (radio@bluesci.co.uk)
Webmaster: James Stevens (webmaster@bluesci.co.uk)
Advertising Managers: Philipp Kleppmann & Deirdre Murphy (advertising@bluesci.co.uk)
Events & Publicity Officer: Martha Stokes (events@bluesci.co.uk)
News Editor: Joanna-Marie Howes (news@bluesci.co.uk)
Web Editor: Aaron Critch (web-editor@bluesci.co.uk)
Issue 29: Lent 2014
Editor: Elly Smith
Managing Editor: Sarah Smith
Business Manager: Michael Derringer
Second Editors: Camilla d’Angelo, Shirin Ashraf, Aaron Critch, Helen Ewles, Catherine Griffin, Sophie Harrington, Alissa Lamb, Nicola Love, Jannis Meents, Greg Mellers, Maire Payne, Emily Pycroft, Jeremy Schwartzentruber, Tam Stojanovic, Kerstin Timm, Chris Tsantoulas, Koi (Arporn) Wangiwatson
Copy Editors: Ornela De Gasperin Quintero, Nicola Love, Nicole Rossides, Sarah Smith, Tam Stojanovic
News Editor: Joanna-Marie Howes
News Team: Camilla d’Angelo, Ornela De Gasperin Quintero, Nele Dieckmann
Reviews: Toby McMaster, Sarah Smith, Marinka Steur
Focus Team: Shirin Ashraf, Ornela De Gasperin Quintero, Ana Leal Cervantes, Nathan Smith, Tam Stojanovic
Weird and Wonderful: Ornela De Gasperin Quintero, Elly Smith, Nathan Smith
Production Team: Ornela De Gasperin Quintero, Philipp Kleppmann, Nicola Love, Jannis Meents, Benjamin Schilperoort, Beth Smith, Nathan Smith, Sarah Smith, Tam Stojanovic, Chris Tsantoulas
Illustrators: Josie Best, Alex Hahn, Emily Pycroft, Elly Smith
Cover Image: Mubeen Goolam
ISSN 1748-6920
Varsity Publications Ltd, Old Examination Hall, Free School Lane, Cambridge, CB2 3RF. Tel: 01223 337575. www.varsity.co.uk business@varsity.co.uk This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License (unless marked by a ©, in which case the copyright remains with the original rights holder). To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/ or send a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041, USA.
Editorial
Communication is key... As a species, much of our existence is defined by the way we share information. From casual weather-chat amongst strangers on a train, to those momentous exchanges between teacher and pupil, novelist and reader, even musician and listener, communication allows individuals to contribute to or plunder the riches of our collective intelligence. Today, the discourse is global. Not only can we weather-chat across continents, but thousands of university courses are available online for free, and it is possible to self-publish from a smartphone. Much attention has recently been drawn to the fact that areas of science appear untouched by this revolution. With journals reluctant to publish ‘unexciting’ results, and researchers often unwilling to share original datasets or detailed methods, a wealth of information remains inaccessible. This crisis of communication has in part been attributed to the pressure to advance new and fashionable theories, which leaves little room in print for sound observations that may disprove hypotheses or leave current conclusions uncertain. Providing an equal platform for all good scientists to have their say has never been more important. Taking communication as our broad theme, in this issue of BlueSci we discuss the beauty of the language of numbers, examine how teaching human evolution in schools became legal, and delve into the mysteries of the body’s own information collection system: the senses. We go back in time to look at the extraordinary histories of the women who pioneered the digital revolution, and up to the moment as we quiz science journalist Mo Costandi about his career. In the FOCUS article, you’ll find details of the phenomenal diversity of communication systems, both natural and artificial, from phosphorescent bacteria and dancing bees, to top secret codes and the subtleties of a glance.
Of course, other topics have also caught our interest: we make the economic case for conserving biodiversity, discuss the global water shortage, and reveal how the physics of boiling water has consequences for an astonishing variety of species. Hardly pausing for breath, we pursue the quest for a cure for rare black bone disease, weigh up the pros and cons of genetically testing embryos, and unearth the surprising world of archaeological genomics. At BlueSci, we are keen to see your passion for communication! There are plenty of ways to get involved in the next issue, including writing, editing, illustrating and producing. If any of this takes your fancy, don’t hesitate to send us a message.
Elly Smith Issue 29 Editor
Discovering Cell Fate
Mubeen Goolam discusses the advancing technology used to generate this issue’s cover image
Mice are an extremely useful model for studying mammalian development
Every single organism, no matter how large or complex, began its existence as a single cell. It is truly startling to think that we were all once about the same size as the full stop at the end of this sentence. From this tiny, microscopic fertilised egg (known as a zygote in mammals) we divide and develop to form every single organ and tissue in the body. Understanding this process, how a single cell is able to pattern itself into a complex organism, has been the subject of a staggering amount of research over a number of decades. This is at the very heart of developmental biology: we want to know how one cell develops into millions of different organised structures. The very first patterning events in mammalian embryos occur over the first few days of development. During this time it is critical that the embryo undergoes the correct morphological and molecular events necessary to allow it to implant into the uterine wall. Once the embryo has adhered to the uterus, the mother is able to provide oxygen and nutrients to the developing foetus while at the same time removing waste products and carbon dioxide. The pre-implantation period is thus a short but critical stage of development. In order for implantation to occur, the cells of the embryo need to spatially segregate and differentiate into three distinct cell lineages with differing abilities to become specific tissues. This spatially segregated structure is known as a blastocyst, and it arises through two cell fate decisions. The first cell fate decision gives rise to the trophectoderm, a supporting tissue which will go on to form the embryonic portion of the placenta, and the inner cell mass. The placenta connects the developing foetus to the uterine wall.
The second cell fate decision segregates the inner cell mass into the pluripotent epiblast, which will go on to form all the cells of the embryo proper, and the second supporting tissue, the primitive endoderm, which gives rise to the yolk sac, a structure that acts as a circulatory system for the foetus. The image on the cover of this issue of BlueSci shows a group of cultured mouse embryos that have developed until the blastocyst stage, just before implantation into the uterine wall. It was taken by Mubeen Goolam in Professor Zernicka-Goetz’s lab in The Gurdon Institute at the University of Cambridge. The embryos were stained with fluorescent markers capable of binding different cell-type specific proteins
which, in turn, allows the identification of the different cell lineages present in the blastocyst. The blue cells are the trophectoderm, the white cells are the epiblast, and the pink cells are the primitive endoderm. The image also shows that a number of the blastocysts appear to have ‘burst’ and have regions where cells are ‘spilling out’ of their original boundary. This is expected and is not an artefact of the staining procedure: these blastocysts are ready to implant into the uterine wall and have therefore burst out of their membranous boundary, evidence that they developed normally in culture. Mice are an ideal model for studying mammalian development, as their small size, short gestation period and ease of handling allow a large volume of research to be done in a relatively small amount of time and space. Additionally, mice are highly similar to humans on a genetic and physiological level and have a very well documented genome that is easy to manipulate and analyse. We can thus use mice to understand human development. While mice as a model organism have existed for some time, it is the recent developments in microscopy and our ability to culture these embryos in the laboratory that have allowed us to probe deeper into the events that occur during early mammalian development. We now have the ability to create high resolution time-lapse images that let us visualise the very process of development as it occurs. We can literally watch development as it takes place, something which would have been impossible to imagine a few years ago. Our ability to mark and identify cells with different cell fates means that we can create images that are not only visually beautiful, but that have biological meaning too. These rapid advancements in technology will allow us to explore and examine the very first stages of development in ways that were until recently out of reach.
Who knows what we might discover? Mubeen Goolam’s image was one of three winning entries in the University of Cambridge Graduate School of Life Sciences (GSLS) image competition. The competition, run during the Cambridge Science Festival, showcases the variety of biological research in Cambridge. To find out more, visit the GSLS website. The other winning images, taken by Nuri Purswani and Xana Almeida, can be found on www.bluesci.org.
News
Check out www.bluesci.org or @BlueSci on Twitter for regular science news and updates
Babies suppress own immune system

A NEW STUDY has suggested that the immune system of a newborn baby regulates itself to allow the colonisation of beneficial bacteria. While foetuses are developing inside their mother’s womb they are in a sterile environment; when they are born, however, they immediately become colonised by bacteria and fungi. The immune system of infants has a very weak response to infections, and it had been assumed that this was due to a lack of maturity. A group of researchers at Cincinnati Children’s Hospital in Ohio has challenged this view by comparing the immunosuppressive properties of newborn and adult mice. Surprisingly, baby mice had a higher concentration of CD71+ cells. These cells generate an enzyme called arginase-2, which actively suppresses the immune response. To test this, Sing Sing Way and his team knocked these cells out by treating baby mice with antibodies. Afterwards they infected the mice with Listeria monocytogenes, a bacterium that can cause severe infections, and found that their immune systems successfully resisted the attack. But there was a trade-off: the colonisation of commensal microorganisms in the babies’ intestines now produced an inflammatory reaction, a process that would otherwise have happened smoothly. The challenge now is to understand whether a similar process happens in humans. If so, treatments that allow a temporary reduction of CD71+ cells could strengthen the immune system of newborns. This may allow health workers to vaccinate sooner, which could save many lives in developing countries. ODG

Giant channels beneath Antarctic ice

ANTARCTICA IS COVERED in a vast ice sheet that holds close to 90 per cent of the Earth’s freshwater reserves. However, climate change means that the potential threat of melting Antarctic ice mass to the globe’s ocean levels has become a growing concern. The Antarctic ice sheet is a dynamic system in which glaciers continuously push towards the sea. As they move into the ocean, they form ice shelves that float on the water whilst maintaining their connection to the mainland ice sheet. Scientists have used satellite imagery and radar measurements to gain crucial new insights into meltwater flow beneath Antarctica’s ice sheets. They showed that large channels run underneath a major ice shelf and that these are extensions of channelised meltwater flow underneath the ‘grounded’ ice sheet that rests on the land. These findings differ from previous models, which suggested water flows in a thin layer beneath the ice sheet. Analysis of the architecture of floating ice shelves could provide key information on the organisation and stability of the water system beneath the ice mass on the continent’s mainland. Dr Anne Le Brocq from the University of Exeter stated: “The information gained from these newly discovered channels will enable us to understand more fully how the water system works and, hence, how the ice sheet will behave in the future.” The findings will help improve existing models as environmental conditions continue to change. ND

Getting drunk without the hangover

IMAGINE GETTING DRUNK at a party and then being able to drive home the same night, with no hangover the next day. This may sound like science fiction, but scientists are developing a drug that may make it possible. David Nutt, from Imperial College London, has identified five potential compounds which could provide a synthetic alcohol substitute, mimicking the positive effects of alcohol without the health risks, danger of addiction or hangover. Alcohol is known to mimic GABA, an inhibitory chemical produced in the brain. The brain contains multiple GABA receptor types, each with different functions. Alcohol non-selectively binds to GABA receptors, causing many side effects such as memory impairment and loss of coordination. Unlike alcohol, the new drug selectively targets the GABA receptor subtypes responsible for the pleasurable and relaxing effects of alcohol. What’s more, it would also come with an antagonist that can rapidly reverse its effects, allowing revellers to sober up quickly. Alcohol addiction and alcohol-related health problems, violence and accidents are collectively responsible for 2.5 million deaths worldwide each year, so a safer alternative to the drug would surely be welcome. Professor Nutt is looking for investors to develop the drug, but it is unlikely to be on the market any time soon, considering the regulatory challenges facing new pharmaceuticals. It remains to be seen whether popping a pill could ever replace the ritual of enjoying a drink with friends. CD
Reviews
Gravity: Starring Sandra Bullock and George Clooney
Released 4/10/2013, Rated PG

GRAVITY IS A visually spectacular thriller about two astronauts who become separated from their space shuttle Explorer after space debris collides with it. There is little character development of Ryan Stone (Sandra Bullock) and Matt Kowalski (George Clooney), and the premise of the film is quite simple: the pair struggle against the harsh environment to make it back to Earth. The idea may be simple, but the logistics of making a realistic zero-gravity film are not: at $110 million, the budget of Gravity exceeds that of India’s first mission to Mars, which launched last November ($73 million). Watching the movie unfold, you cover your mouth and shake your head as you realise that Dr Stone is having the worst day imaginable. She has to contend with depleting oxygen levels, fires on board the space station, and unfamiliar Chinese characters on the operations dashboard. Oh, and she runs out of fuel. It definitely makes missing the bus look pretty pathetic. Gravity does a great job of portraying the fragility of human life in space and exploring how terrifyingly lonely space travel can be. Perhaps Sandra Bullock’s hair doesn’t float freely in zero-gravity, and maybe it’s unclear how the Hubble Telescope (350 miles up) and the International Space Station (230 miles up) are in sight lines of one another. However, I think we can forgive director Alfonso Cuarón’s artistic licence this time; he has created a gratifying cinematic experience, which explores the depth of the human condition in a film that is as realistic as it needs to be. SS
The Universe Within – Neil Shubin

Penguin Group, 2013, £20

THE UNIVERSE WITHIN is exactly as described on its cover: A Scientific Adventure. In fact, it doesn’t really need the qualifier of ‘scientific’; it is simply a wonderful adventure. Beginning in the cold expanse of Greenland, the opening chapter follows author Neil Shubin on his first geological expedition; an interesting anecdote but perhaps the book’s weakest moment. The rest of the book is a masterpiece, with each subsequent chapter devoted to a time period, moving from the origins of the universe to the arrival of modern day humans. It is a book which manages to make you feel insignificant and yet simultaneously incredibly important. The early chapters on the universe and our planet bring home just how small we really are, yet Shubin also explains how deeply ingrained we are within the universe. There is a sort of poetry to his words throughout, and in one highlight, which I felt should have been his closing statement, he writes: “There is something almost magical to the notion that our bodies, minds and ideas have their roots in the crust of Earth, water of the oceans, and atoms in celestial bodies.” This book achieves what it set out to do, tying together all the key events in the universe so far. Perhaps its only flaw is the lack of a timeline, although the clarity of the words throughout means that this is a forgivable omission. All in all, a great read for anyone with a curious mind. TM
Mindless Eating - Brian Wansink
Hay House UK, 2011 £8.99
IF YOU HAVE ever wondered how we know when we have had enough food, and why we may continue eating anyway, Mindless Eating will give you a clue to the answer. On average, people make over 200 food-related decisions each day, most of which are subconscious. In Mindless Eating, Brian Wansink, Professor of marketing and food psychologist, describes the wide range of experiments conducted in his laboratory-cum-restaurant. His goal is to investigate what external and hidden cues drive people to overeat, and how we can use this knowledge to our advantage, following each chapter with advice on how to alter one’s own environment to promote a healthier lifestyle. However, Mindless Eating is certainly not your typical diet book. Rather, it is a highly insightful exploration of human psychology, and how it might be applied to explain our eating behaviour. Full of entertaining answers to questions from what happens when people are served lunch in a ‘bottomless soup bowl’, to why M&Ms come in so many colours, Mindless Eating is an informative and engaging book shedding new light on a universally popular subject: food. MS
Genomes from Beyond the Grave
Charlotte Houldcroft discusses the search for ancient pathogen genomes
Yersinia pestis was responsible for the Black Death
GENETIC EVIDENCE HAS been vital for identifying criminals since the first use of DNA fingerprinting in 1986. Now the burgeoning field of ancient DNA is revealing the identity of a very different class of killers: ancient plagues and pestilences. New technologies are uncovering the fascinating stories of the pathogens responsible for bubonic plague, the Irish potato famine, and leprosy. Piecing together the genome sequences of these ancient pathogens allows us to understand how epidemics develop, and can help scientists to understand whether or not these pathogens have the power to spread and kill on a pandemic scale in the future. DNA can be extracted from the remains of people and animals who died hundreds or thousands of years ago. However, these ancient DNA samples have typically degraded over time and are consequently composed of short fragments of DNA, which have different chemical characteristics to modern, longer DNA sequences. The genomes of two ancient human cousins, the extinct Neanderthals and Denisovans, have been reassembled using whole-genome shotgun sequencing. This is a technique that can piece together short DNA fragments without needing to know in advance which parts of the genetic code the fragments correspond to. As a result, it is now possible to isolate DNA from many ancient sources, including animals, plants and microscopic pathogens, and read their full genetic code. Leprosy, caused by the bacterium Mycobacterium leprae, used to be a common and debilitating disease. It is now rarely found in Europe and the developed world, but still flourishes in many developing countries, with 225,000 cases a year. Leprosy can cause damage to the nerves, skin, eyes and limbs, resulting in the formation of lesions and loss of sensation; secondary infections lead to the loss of tissue and the deformities classically associated with the disease. An important question remains: why did leprosy vanish from Europe before the arrival of effective treatments?

Unlocking the secrets of the ancient M. leprae genome has provided some clues. Starting with the bones of medieval leprosy victims from across Europe, researchers led by Verena Schuenemann isolated M. leprae DNA from the skeletal remains. Although extracting the DNA of bacteria from bones has been possible for 20 years, assembling whole genomes using this DNA has only recently become feasible. A key challenge has been isolating the DNA of the bacterium of interest from that of other abundant bacteria and the DNA of the human host. To achieve this, the researchers used DNA array capture technology, whereby artificial DNA ‘baits’ with sequences matching short segments of the known genome sequence of modern M. leprae separate out the DNA of the leprosy bacteria from that of other bacterial species and the human host. Interestingly, Schuenemann and colleagues found M. leprae DNA to be better preserved than host DNA from the same sample. This is probably because the unique waxy composition of Mycobacteria protected the DNA from degradation by heat, water and other bacteria. By comparing the ancient leprosy strains isolated from the skeletons to modern strains, they were able to trace the origins of those currently present in the Americas to Europe, and show that a strain found only in the Middle East today was once present across northern Europe. Unexpectedly, they found no evidence for a change in the virulence of leprosy over time, suggesting that environmental and social factors, or an increase in human immunity to M. leprae in Europeans, led to the disappearance of leprosy in Europe. Future research on the disappearance of M. leprae from Europe will now turn towards studying what these factors could be. Another bacterium, Yersinia pestis, was responsible for one of the most infamous disease outbreaks in human history: the Black Death. From the earliest reports in Roman times until the early 18th century, bubonic plague caused epidemics across Europe.
In the past decade small outbreaks have been
reported in India and Madagascar. Interestingly, modern mortality rates from bubonic plague are considerably lower than the devastating historical estimates, which put some regions of Europe at more than 50 per cent mortality. There has therefore been speculation about whether or not Y. pestis was truly responsible for the Black Death, and whether differences between the genomes of ancient and modern-day Y. pestis can account for the decreased mortality rates. Human teeth from the East Smithfield plague pits in London, which were used in the late 1340s specifically for the interment of plague victims, were a potential source of Y. pestis DNA. The same array capture technology, this time designed to enrich Y. pestis sequences, was used to ‘fish’ for the DNA of this pathogen. It was a successful approach: 94 per cent of the Y. pestis genome was sequenced with high confidence using this technology. Comparisons of this 700-year-old Y. pestis strain with modern strains of bubonic plague did not find sufficient genetic changes to explain the difference in mortality rates between modern outbreaks and the Black Death in the 1340s. As with leprosy, human evolution or an increased standard of living, including better hygiene and less over-crowding, may have made this disease less deadly. Pathogens can also ravage populations by destroying food stocks. The Irish potato famine of 1845-1852, or the Great Hunger, occurred when repeated failures of potato crops led to mass starvation. 2013 saw the publication of the genome of Phytophthora infestans, an algae-like organism responsible for the massive potato die-off, sequenced by Kentaro Yoshida and colleagues. Ireland’s population has still not returned to pre-famine levels, due in part to the significant death toll and mass emigration that followed the failure of the country’s staple crop.
Mediaeval plague doctors wore ‘nose cones’ filled with pungent herbs to ward off bad smells they believed were responsible for the spread of the disease

As the indirect cause of human death through starvation, the potato blight did not leave behind evidence of its identity in human remains, forcing scientists to go looking for more unusual sources of the pathogen’s DNA. The samples used to isolate P. infestans were rather different from the bones and teeth used in many other ancient DNA studies: here, 150-year-old dried potato leaves kept at Kew Gardens were used. The strain of P. infestans that caused the potato blight was revealed to have originated in the Americas, most likely in the US, at the start of the 19th century. While this strain of potato blight is now extinct, the study reveals the incredible power global trade had, even 150 years ago, to rapidly spread plagues and pestilences around the world. DNA sequencing of historic P. infestans also revealed that the modern strain of this pathogen is actually more virulent than the agent that caused the Great Hunger. Modern pesticides keep potato blight at bay, but there was an outbreak, resulting in famine, in Germany during WWI, as a lack of resources prevented crop spraying. The recent reconstruction of ancient pests and plagues no longer present in Europe raises the question of where DNA sequencing technology could take us next. Historical disease detectives no longer have to rely on ancient reports of a disease’s symptoms, or hope for an infectious agent that leaves marks on bones or other preserved matter. It is now possible to go directly to suspected victims and sequence ancient DNA in search of a known suspect. With increases in the sensitivity of ancient DNA recovery, and better tools for distinguishing the DNA of a real pathogen from a modern contaminant, scientists will be able to study historic plagues of unknown cause – and perhaps reveal the identity of an ancient killer.
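The DNA array 'bait' capture used in both the leprosy and plague studies can be illustrated with a toy sketch: keep only those sequencing reads that share a short subsequence (a k-mer) with bait sequences designed from a modern reference genome. This is a deliberately simplified illustration, not the researchers' actual pipeline; the function names, sequences and the choice of k here are all invented for the example.

```python
# Toy sketch of DNA 'bait' capture: retain only reads that share a
# k-mer with bait sequences taken from a known reference genome.

def kmers(seq, k):
    """All overlapping substrings of length k in seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def capture(reads, baits, k=8):
    """Return the reads sharing at least one k-mer with any bait."""
    bait_kmers = set()
    for bait in baits:
        bait_kmers |= kmers(bait, k)
    return [r for r in reads if kmers(r, k) & bait_kmers]

# Baits designed from a modern reference sequence (invented here)...
baits = ["ACGTACGTGGCCTTAA"]
# ...fish matching 'ancient' reads out of a mixed sample
reads = [
    "TTACGTACGTGGCA",   # overlaps a bait k-mer: captured
    "GGGGGCCCCCAAAT",   # unrelated (host or soil bacteria): discarded
]
print(capture(reads, baits))   # → ['TTACGTACGTGGCA']
```

Real capture works by hybridisation chemistry rather than exact string matching, but the filtering logic, enriching a tiny fraction of pathogen DNA out of an overwhelmingly host-and-contaminant sample, is the same idea.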
Phytophthora infestans caused the Irish potato famine
Charlotte Houldcroft is a temporary lecturer in the Department of Archaeology and Anthropology
Making sense of the senses
Toby McMaster explains how the quest to understand hormones changed our appreciation of sensation
Let me tell you a story. It begins in Cambridge. It contains hard work, ridicule and, as in all good stories, unexpected triumph. It ends - well, you’ll have to wait to find out. In 1905 John Langley, a scientist in Cambridge studying the mechanism of hormone action, became the first person to use the term ‘receptive substance’. His experiments led him to propose that a molecule may cause a biological phenomenon by physically interacting with such a ‘receptive substance’. To many school-aged biology students, this is now obvious: drugs bind to receptors, surely everyone knows that? However, just over a century ago, nobody did. Around this time the German immunologist Paul Ehrlich had similar ideas. His ‘side-chain’ theory, based on the observation that chemicals’ selectivity for certain tissues cannot be accounted for solely by the path they travelled, supposed that something on cells’ surfaces must be causing the difference in action. However, the scientific community was far from receptive: Walther Straub, an eminent pharmacologist, described Ehrlich’s postulation of a general theory of binding between drugs and cells as “too far” and “not fruitful”. Straub was not without motives, having proposed his own counter-theory of drug action. This potential-poison theory hypothesised, incorrectly, that a drug’s effect depended on its concentration difference between the inside and outside of a cell. In 1908, whilst working in Langley’s lab in Cambridge, Archibald Hill published a mathematically minded paper far ahead of its time. Hill studied the time taken for frog muscle to contract in response to nicotine, showing that it fitted very closely the equation expected if there were “a combination between nicotine and some portion of the muscle”. However, Hill never became the renowned pharmacologist he might have been, had his ideas been taken more seriously. Still receptor theory was ridiculed, if discussed at all.

Atropine, the poison produced by deadly nightshade, was used to demonstrate competitive inhibition

Key papers were then published independently by A.J. Clark and J.H. Gaddum. The pair were sticklers for data, and began quantifying pharmacology, allowing concrete measurements to be made and predictions to be tested. As in so many areas of science, this introduction of rigorous maths proved invaluable in spurring progress. What really set this new research apart, however, was the use of two drugs simultaneously. Clark used frog hearts to show that a higher concentration of acetylcholine was required to produce the same effect when given with atropine, the poison from deadly nightshade. However, Clark himself failed to conclude that these two compounds might compete for one receptor. Gaddum showed a similar phenomenon in rabbit uterus, and in 1937 derived the relatively simple equations quantifying the effects of two opposing compounds as they compete for one receptor. This phenomenon is now known as ‘competitive inhibition’. The Jewish German turned Italian citizen Heinz Schild had an illustrious research career, which included
By the 1930s a large amount of experimental evidence supporting receptors had accumulated. In 1926 two
studies on antagonists – compounds blocking the effects of hormones or drugs. His work provided further mathematical evidence for the receptor concept through a novel graphical method for determining affinities, however pharmacology at large still appeared distinctly uninterested in the idea that receptors might actually exist. Even many individuals influential in Lent 2014
Lent 2014
setJoJJosie Best
ELLy SMITH
activates the G protein, which splits into two parts; both of which trigger functions within the cell. Two decades later it is difficult to believe that receptors were ever controversial. They are ubiquitous in biology: allowing us to sense the world around us, use hormones to control our bodily functions, and are targets for countless drugs. GPCRs’ beauty lies in both their simplicity and complexity. The concept – detect a signal, change shape, activate a G protein – is incredibly simple. Yet the structural changes taking place are so complex we still don’t fully understand them. Small variations in the detection step allow this super family to orchestrate our vision, and senses of smell and taste. Last year Brian Kobilka and Robert Lefkowitz won the Nobel Prize in Chemistry for their work on GPCRs, signalling the end of one chapter of our story - the discovery of these versatile receptors. However the next promises to be both fascinating and revolutionary. There are almost 1000 GPCR varieties in our bodies, including many ‘orphan receptors’ of unknown function. The race is already on to discover their roles and to produce drugs to specifically activate or block certain receptors with known roles in disease.
A signal from outside the cell causes a shape change in a GPCR. This causes the associated G protein to exchange a GDP for a GTP. This allows the three alpha subunits to dissociate from its beta gamma counterpart.
Over 30 per cent of all drugs target GPCRs.
Kobilka and Lefkowitz shared the 2012 Nobel Prize in Chemistry.
Bengt Nyman
the discovery of receptors were incredibly sceptical of their own work. Raymond Ahlquist helped discover two different adrenaline receptors, but in 1973 said of them that it was “…so presumptuous to believe… receptors really exist. To me they are an abstract concept.” The scientists looking to isolate these ‘abstract’ receptors and study them in isolation thus faced the challenge of inventing new experimental techniques, but perhaps even more daunting than this was the prospect of overturning half a century’s worth of scientific dogma. Researchers first identified radioactive compounds which bound the receptor. Since radioactivity is easily detected, these allowed analysis of the receptor’s binding characteristics. Though this was a major achievement in itself, researchers still needed to isolate receptors such that they could be studied individually. Receptors are so sparse within living tissue that isolating the chosen beta2-adrenoreceptor, required 100,000 fold purification. This was achieved using novel ‘affinity chromatography’ methods, which involve passing candidate proteins through a column with molecules capable of binding the receptor fixed to its inner surface. Molecules remaining in the column are therefore receptors. These can then be washed through separately from other proteins, giving a pure receptor isolate. Affinity chromatography, along with many other novel identification techniques, meant scientists could finally determine the amino acid sequence making up the receptor. At the time only two other receptors of any kind were sequenced both capable of detecting light. The two receptors were found to share large proportions of their sequences and several structural features – such as crossing the cell membrane seven times. These features were therefore assumed to relate to their ability to detect light. However, to the surprise of the entire scientific community, the same features were present in the newly sequenced beta-2 adrenoreceptor. 
No one had anticipated this – why should proteins with such distinct functions be so similar in structure? Within just a few years labs around the world found more and more proteins with this structure and within two decades, the ‘abstract concept’ of receptors had found its way into the mainstream, and the first and major superfamily of receptors, the G Protein Coupled Receptors (GPCRs), were well established. All GPCRs are proteins crossing the outer membrane of a cell seven times. Most have specifically shaped binding pockets, allowing only certain molecules to bind. GPCRs tell the inside of a cell that a signal unable to cross the membrane is present outside the cell. The binding of a molecule to the GPCR causes specific shape changes in the receptor, enabling the GPCR to catalyse the swapping of a GTP molecule for a GDP. This
Having begun our story in Cambridge we have come full circle. Much of the current research into GPCRs is taking place right here right now. We have seen how Langley’s concept gradually gained support through years of often apparently unrelated research, and how this led to the discovery of the GPCR superfamily which holds so much promise. The end of the story? I’m afraid you’ll have to wait for that, the age of receptors is only just beginning. Toby McMaster is a 2nd year biological natural sciences student at Jesus College. Making Sense of the Senses 9
Bubbles of Trouble
Robin Lamboll explores how different organisms have coped with life in boiling water
Tiny bubbles form behind the flippers of fast-moving fish
How can life bring water to the boil? Humans now manage it conveniently with fire, kettles and even laser beams. Nature, on the other hand, has struggled with the process of boiling water, and creating microscopic bubbles, for millions of years. However, when successful, the process performs several important functions for plants and animals alike. Furthermore, scientists have learnt that initiating similar reactions in patients can have important medical benefits. Like most substances, water has a boiling point that decreases with lower pressure and, at sufficiently low pressure, that boiling point can drop below room temperature, right down to 0°C. The easiest way to stimulate this process naturally is by swimming. If something moves through a body of water, it generates regions of low pressure around it as the water rushes into the space it leaves. Humans and small fish do not swim quickly enough to cause the pressure differences needed, but strong swimmers like dolphins and tuna do. At speeds of about 10 metres per second, tiny bubbles form behind their flippers. The bubbles that form are not particularly hot, but they are generally unstable. The gas within each bubble cannot retain enough pressure to counteract the water tension and the surrounding pressure, so they collapse. For such a tiny bubble, this collapse releases huge amounts of energy, enough to superheat the surrounding region to thousands of degrees. The water has been stretched to breaking point, leaving a void or cavity inhabited only by a few water molecules – hence the process is called cavitation. Temperatures above 5000°C, comparable to the surface of the sun, are observed when the cavities implode, and little flashes of light can be seen. Dolphins appear to limit their top speed so that they do not experience this painful phenomenon, but tuna lack nerves in their bony fins and so are sometimes seen with pits and scarring from cavitation collapse. This damage is also a common sight on ships' propellers.
Dolphins limit the speeds at which they swim to avoid the consequences of cavitation
What is bad for fish is good for the shrimp that want to eat them. Pistol shrimp have evolved to make one pincer fit neatly into a groove on the other. Snapping the pincer shut forces out a jet of water at high speed but very low pressure, causing cavitation. The sound produced can be deafening; at around 200 decibels, pistol shrimp compete with whales for the title of loudest creature in the ocean, and the noise regularly befuddles sonars. The snap is accompanied by a flash of light and is used to communicate. It is also a deadly weapon: the shrimp get their name from using the jet to stun or even kill their prey. If you think you might want one of these creatures in your aquarium, be careful – the sonic blasts can crack glass. Mantis shrimp also get in on the action. While they have sharp claws, used directly to spear their prey, they also move fast enough to cause cavitation. This ability means they will often stun prey even if they miss. Cavitation is not just a problem for sea creatures, though – it can also happen high above sea level. Trees need to raise water tens of metres into the air to reach their tips, and a pressure difference between the top and bottom of the tree is needed to counteract the force of gravity. To move water higher, a higher pressure is needed, but one atmosphere of pressure can only get water to go up about 10 metres – quite a problem for trees like the giant sequoia, which can far exceed 100 metres. An even greater pressure difference is needed if water is to actually move up the tree rather than remain stationary. Trees draw water up to the leaves by capillary action, the attractive force between the water and the surrounding surface. How high the water can go in a capillary depends on how narrow the container is. In the leaves, the tubes, called xylem, branch out and are narrow enough to pull the water much higher than any tree. However, the tree saves material by making its trunk out of fewer, larger xylem. These xylem are too wide to make water rise more than about a metre, so the water must be drawn up to the leaves by a strong pressure gradient. At the bottom of the tree there is typically only atmospheric pressure, so to draw water up very far the pressure must be negative in most of the trunk. What does negative pressure mean? Essentially, instead of the surroundings pushing on the water, they pull it along. In solids, this would be called tension. Gases do not have strong intermolecular forces to stretch, but the bonds in liquids can be pulled – though only so far. According to most evidence, the solution is incredible: trees keep water in a metastable state, under sufficiently negative pressure that the bonds would break if there were a defect in the xylem where a tiny void could form. Water is strongly attracted to other water molecules and to the sides of the xylem, and needs to overcome these short-range bonds to become a gas, so it usually remains liquid. However, there is always a risk of an embolism – a bubble of gas – forming in the xylem. In droughts or frosts there is extra strain, so embolisms often occur. Air expands in the xylem and cannot sustain negative pressure, which means the xylem cannot carry water until the embolism has gone. Some trees rely on growing new xylem after each winter, but others also try to repair old xylem by increasing the pressure at the roots above atmospheric pressure. However, the pressure achieved at the roots is not enough to get the water to the top of the tree, so there is ongoing debate as to exactly how trees manage to heal embolisms.
Cavitation has provided great challenges for life, and it has generally been overcome – sometimes in ways humans cannot yet grasp. Though we may not always understand it, humans use cavitation in some medical situations. When treating kidney stones with ultrasonic blasts, some of the erosion of the kidney stones is caused by the sonic waves inflating gas bubbles until they collapse, causing powerful jets. Ultrasound treatment must work in bursts to allow this energy to dissipate, as the disruption can shield later sound waves or damage nearby tissue. Researchers have also developed a method of using microsecond laser pulses to generate two nearby bubbles, one after the other. These bubbles expand into each other and then pop. Whereas normal cavitation collapse produces shockwaves or jets in random directions, the two-bubble interaction allows a controlled, directional microjet that can carve a pore of a few hundred nanometres in a cell membrane without killing the cell. This opens up the cell so that new genetic material or large drugs can easily be introduced into it, showing great potential for both detailed biological research and new therapies. These bubbles may be tiny, but they have an impact on life of all sizes – from tiny shrimp to huge trees. With a little forethought, we can change their reputation from that of a nuisance on ships' propellers to that of a powerful tool to investigate and improve the functioning of our own bodies.
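The 10-metre limit mentioned above follows from simple hydrostatics: the height a pressure difference can support is h = p/(ρg). A quick sketch in Python (using standard textbook values, not figures from the article) confirms the numbers:

```python
RHO_WATER = 1000.0    # density of water, kg/m^3
G = 9.81              # gravitational acceleration, m/s^2
P_ATM = 101_325.0     # one standard atmosphere, Pa

def max_rise(pressure_pa: float) -> float:
    """Height (m) of a water column supported by a given pressure: h = p/(rho*g)."""
    return pressure_pa / (RHO_WATER * G)

print(f"One atmosphere supports about {max_rise(P_ATM):.1f} m of water")
# and the pressure difference a 100 m tree needs, in atmospheres:
print(f"{RHO_WATER * G * 100.0 / P_ATM:.1f} atm")
```

The first line prints roughly 10.3 m, which is why a sequoia's trunk must sustain strongly negative pressures to lift water ten times that height.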
The giant sequoia can move water over 100 metres and counter cavitation problems by means we don’t yet understand.
Robin Lamboll is a 1st year PhD student in the Department of Physics
The mantis shrimp exploits cavitation to generate a shockwave that dismembers its prey
In search of a cure for black bone disease
Wing Ying Chow explains how we might find possible treatments for a rare disease
Alkaptonuria sufferers must regulate their intake of meat, as it is a source of phenylalanine
The Alkaptonuria Society is working to find a cure for black bone disease
Imagine having sandpaper between your joints, scraping painfully with every movement. This is the experience of many of those living with alkaptonuria, a rare genetic disease affecting nearly 1000 people worldwide. Alkaptonuria involves the disruption of a metabolic pathway for degrading two amino acids, phenylalanine and tyrosine. As a result, an intermediate, homogentisic acid (HGA), builds up and circulates in the blood at high levels. Unlike the build-up of intermediates in many other metabolic diseases, HGA is not immediately harmful: it is highly water-soluble and thus easily removed via urine. Alkaptonuria patients can excrete grams of HGA daily. It is the HGA in urine that gives rise to the main symptom of the disease. On contact with air, HGA oxidises and turns the urine black; the diagnosis of alkaptonuria in newborns is often prompted by the sight of black urine in nappies. As alkaptonuria patients age, the circulating HGA accumulates in structural tissues, particularly affecting collagenous tissues such as joint cartilage. Through a process known as ochronosis, which is still not well understood, the accumulated HGA oxidises into a dark brown pigment that cannot be removed. HGA also accumulates in heart valves, spinal discs, the trachea and the kidneys, which visibly darken. Dark spots are sometimes observed in the whites of the eyes and in ear cartilage. As well as pigmenting tissue, ochronosis prevents natural turnover and repair processes, and leads to mechanical changes in cartilage, making it hard and brittle. Ochronotic cartilage fails to provide shock absorbance and usually leads to abrasive action with any joint movement. By the time they reach their thirties, many patients suffer from symptoms similar to osteoarthritis as a result of the deterioration of their joints, and will often require their first joint replacement before they turn forty. Many alkaptonuria patients will eventually require multiple joint replacements and experience significantly reduced mobility and pain in their daily lives. Ochronosis, once it has set in, also leads to cardiovascular problems and recurring kidney stones, and little can be done to restore the tissues. The main hope for a cure for alkaptonuria is to reduce HGA production before the onset of ochronosis. In the course of searching for a treatment, the genetic and biochemical basis of alkaptonuria has been investigated. While there are only 64 diagnosed alkaptonuria patients in the UK, the incidence of alkaptonuria varies from an average of 1 in 250,000 to much higher in particular 'hotspots' in countries such as Jordan, India, the Dominican Republic and Slovakia (where the incidence is 1 in 19,000). Some of these hotspots are believed to be linked to the cultural practice of consanguineous marriage, that is, marriage between relatives such as first cousins. Alkaptonuria is a recessive disorder: a mutation may spontaneously arise in a person, the 'carrier', who has one functional and one defective copy of the gene but appears otherwise healthy. Consanguinity among the descendants of a carrier increases the chance that future offspring will inherit two copies of the defective gene and thus have the disease. The enzyme affected in alkaptonuria is hexameric, so there are many genetic mutations that could disrupt the assembly of the functional enzyme and lead to the disease.
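For the curious, incidence figures like those above can be turned into a rough carrier-frequency estimate using the Hardy-Weinberg relation. The Python sketch below is illustrative only: it assumes random mating, which is exactly the assumption that consanguineous marriage violates, and none of its intermediate numbers come from the article itself.

```python
import math

def carrier_frequency(incidence: float) -> float:
    """Estimated carrier (one defective copy) frequency for a recessive
    disease, from its incidence, assuming Hardy-Weinberg equilibrium."""
    q = math.sqrt(incidence)      # frequency of the defective allele (q^2 = incidence)
    p = 1.0 - q                   # frequency of the functional allele
    return 2.0 * p * q            # heterozygote (carrier) frequency

# Slovak 'hotspot': incidence of about 1 in 19,000 (figure from the text)
carriers = carrier_frequency(1 / 19_000)
print(f"Roughly 1 in {1 / carriers:.0f} people would be a carrier")
```

Under these assumptions roughly one person in seventy would carry a defective copy, which shows how a rare disease can nonetheless rest on a surprisingly common allele.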
Alkaptonuria was first reported by Sir Archibald Garrod in 1899. Now, more than 100 years later, there is still no treatment. One alkaptonuria patient, who also has hepatitis B, was found to stop producing HGA in his urine after receiving a liver transplant. However, as alkaptonuria is not an immediately life-threatening disease, it is difficult to justify giving a transplant to an alkaptonuria patient who is otherwise healthy in preference to other patients with more severe liver failure. In principle, a full liver is not required as a single lobe can grow back to full size. However, taking even part of a liver from a live donor is fraught with complications, and the receiver will have to take anti-rejection drugs for an extended period, possibly for their whole life, making this treatment route unfavourable. Another method to reduce HGA would be to use enzyme or genetic therapy to deliver an enzyme, or gene that can produce the functional enzyme, into liver cells of alkaptonuria patients. Thus, HGA could be processed as intended, and eventually reduced to a normal level. While promising in principle, the mechanism for delivery of enzymatic or genetic material is still being developed, with no clinical candidates yet. Moreover, even in the most optimistic timescale from initial development to clinical use, it would be 5–10 years before these treatments reach alkaptonuria patients. The most promising treatment to date is the drug nitisinone, which was first developed as a weed killer and then found to be useful for treating hereditary tyrosinemia type 1, a severe metabolic disorder that is fatal in childhood if left untreated. Nitisinone inhibits the enzyme that produces HGA in the defective metabolic pathway. Taking nitisinone daily can return blood HGA levels to normal. As nitisinone is already used in treating tyrosinemia, it is known to be relatively non-toxic. 
However, using nitisinone to treat alkaptonuria is not yet an approved procedure, and it does have side effects: it increases blood tyrosine levels, which can lead to blurring of vision, skin rashes and potential cognitive effects, especially in children whose brains are still developing. Thus, nitisinone is usually administered in conjunction with a low-protein diet in order to minimise these adverse effects. As the long-term safety of nitisinone is still unclear, further clinical trials are necessary before it can be approved as a treatment for alkaptonuria. While NHS patients can receive the drug off-label, the limited number of patients and funding meant that a UK trial was not carried out. In addition, placebo-controlled trials are not possible, as patients can easily find out whether their urine has stopped turning black.
Nitisinone was first extracted from the bottlebrush plant
The Cambridge-based patient group the Alkaptonuria Society has recently secured funding to launch two nitisinone trials, and ran a successful Indiegogo crowdfunding campaign, raising 121,000 USD to assist patients in accessing these trials. The society is also highly active in raising awareness, educating patients, and collaborating with scientists and clinicians. In October 2013 the seventh international scientific workshop on alkaptonuria was held in Liverpool, showcasing a diverse range of research into understanding and tackling the molecular basis and pathologies of this disease. Although not all rare disease sufferers have such good links with scientists and funding bodies, the Alkaptonuria Society is leading by example, inspiring the development of a new charity called Findacure. This aims to act as an umbrella organisation helping other rare disease sufferers form strong patient groups. In addition to improving the lives of sufferers, the study of rare diseases can help us understand processes that are fundamental to health. Indeed, alkaptonuria has already provided insight into the more common condition of osteoarthritis. Sir Archibald Garrod, the early 20th-century physician who first postulated a cause of black bone disease, wrote: "The study of nature's experiments is of special value; and many lessons which rare maladies can teach could hardly be learned in other ways." The Alkaptonuria Society provides a precedent upon which future efforts to understand and tackle "nature's experiments" can be built; the outcomes may well benefit all of us.
Wing Ying Chow is a postdoctoral researcher in the Department of Chemistry
Tapping into New Water Sources
Digory Smith discusses the issue of water shor tage and the technology employed to meet demand
Drinking water is expected to become a valuable resource in many parts of the world
On a recent trip to the Greek island of Santorini, I discovered that due to the scarcity of fresh water there, most drinking water is either imported or created by desalination—the process of removing salt from sea water. This scarcity of water is a problem that plagues many small islands and particularly arid regions of the Earth. In Arizona, the land occupied by the Navajo tribe has been hit by a 20-year drought, making clean water one of the most precious commodities. Navajo families must drive hundreds of miles just to collect water, an act which has become known as 'water hauling'. Perhaps more surprisingly, Singapore has also faced a water crisis over the last few decades. Singapore is made up of 63 tropical islands at the southern tip of the Malay Peninsula and is regularly drenched in torrential rain. In fact, it experiences around three times more rainfall than Britain, being subjected to almost 2,400 mm a year. It is also the third most densely populated country in the world, with over 5 million inhabitants crammed into just over 700 km², about twice the size of the Isle of Wight. Due to this incredible density (30 times higher than the UK's), and the practical impossibility of capturing 100 per cent of the rainfall, Singapore is unable to meet its water demands and is already reliant on imports from neighbouring countries. As of 2009, 40 per cent of Singapore's water needs were piped across a 1-kilometre bridge from the Johor state of Malaysia. In the next half-century water demand is expected to double, and yet the country's water authority, the Public Utilities Board (PUB), maintains it can end its reliance on foreign imports. Currently the authority supplies water from four main sources, known as the 'Four National Taps': water from local catchment areas, imported water, reclaimed water, and desalinated water. Through ongoing investment the country hopes to obtain 25 per cent of its demand from desalinated seawater. Desalination, however, comes with a myriad of environmental and financial drawbacks, not least the hugely energy-intensive nature of the process. Can ongoing research and new technologies meet demand without an unhealthy carbon footprint? And will we soon see desalination as a real option for satisfying our growing water needs?
To appreciate the technology behind most desalination plants, it is important to first understand the phenomenon of osmosis. Osmosis describes the flow of water through a partially permeable membrane from a less concentrated solution to a more concentrated one. Current desalination plants are so energy-hungry because they fight against this osmotic force. Seawater is typically placed in a chamber containing a semi-permeable membrane, which allows the passage of water but blocks salt. By applying very high pressure, the seawater is forced through the membrane, producing pure water. This pressure moves water in the opposite sense to what osmosis dictates, and hence this process is known as reverse osmosis. Apart from the high running cost of these traditional plants, there are also environmental problems associated with them. When salt water is sucked from the oceans, marine life can be drawn in and ecosystems disturbed. At the other end of the process, discharged brine can be extremely disruptive to species that are sensitive to salt levels. For this reason, efforts are taken to disperse the waste brine, and screens are used to stop fish approaching the inlets.
A breakthrough study from Yale University in 2004 showed that instead of struggling against osmosis we could harness it to create more efficient desalination plants, using a process known as forward osmosis. The researchers reasoned that as long as there is a more concentrated solution—the 'draw'—on the other side of the membrane, water will be effortlessly sucked out of the salt water and into this solution. The team used ammonium bicarbonate as the draw; when this is heated to 40°C, ammonia and carbon dioxide bubble off, leaving pure water behind. The gases can then be recaptured and recycled to create a new draw solution. Importantly, it has been suggested that waste heat from power stations would be sufficient to drive this process. A company called Hydration Technology Innovations (HTI) has used forward osmosis in a novel and often life-saving way. They have developed a portable water filter containing a membrane and a pocket of sugars which act as the 'draw'. By submerging the device in unclean water, like a puddle or even the sea, water migrates through and a safe drink is produced. However, chief executive Walter Schultz warns this won't be the optimal solution to our burgeoning water needs: "Our hydration products are intended for emergency use. It is a relatively expensive way of producing a clean drink." Indeed, they were recently used during the Haiti earthquake crisis. Singapore has yet to employ forward osmosis in any of its plants. However, this September it opened its second reverse osmosis desalination plant, at the far south-western edge of the country, in its efforts to become a water-independent nation. In addition, research continues to optimise low-cost desalination techniques. Neal Tai-Shung Chung's team is using nanoparticles to create a higher-performing draw solution, whilst a team headed by Hui Ying Yang is incorporating nanotubes into the membrane to increase its efficiency. Although desalination plants might solve the problem for countries surrounded by oceans, how can they help the occupants of land-locked states like the Navajo in Arizona? The answer comes from looking 120 metres beneath the dry earth: there we find aquifers, large areas where the ground is saturated with water. In Arizona these underground reservoirs are brackish, containing large amounts of salt and some poisonous elements like arsenic. One proposed desalination plant will harness the Sun's energy to boil water drawn up from the salty depths. The steam will then be passed through a series of membranes, producing clean water. Britain opened its first desalination plant in Beckton, East London, in 2010. This plant supplies water to 400,000 homes in the Thames valley, a region previously classed as 'seriously water stressed' by the Environment Agency. Meanwhile, there are plans to build desalination plants on many of the Greek islands scattered across the Aegean within the next few years. As the climate continues to warm, it is possible that desalination plants will become more widespread in the near future. A United Nations report in 2006 asserted that by 2025 two thirds of the world's population are expected to be "living under water stressed conditions" and that "1.8 billion people will be living in countries or regions with absolute water scarcity". With 97.5 per cent of the world's water being salty, and an increasing population making increasing demands, we will certainly be looking to the ingenuity of scientists working in these areas to keep the taps from running dry.
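The osmotic pressure that reverse-osmosis plants must fight against can be estimated with the van 't Hoff relation, Π = iMRT. The Python sketch below uses an assumed seawater salt content of roughly 0.6 mol/L NaCl; these are textbook values, not figures from the article.

```python
# Van 't Hoff estimate of seawater's osmotic pressure: pi = i * M * R * T
R = 8.314            # gas constant, J/(mol*K)
T = 298.0            # temperature, K (about 25 degrees C)
M_SALT = 600.0       # assumed NaCl concentration, mol/m^3 (~0.6 mol/L)
I_IONS = 2.0         # each NaCl unit dissociates into two ions

def osmotic_pressure(molarity: float, i: float, temp: float = T) -> float:
    """Osmotic pressure in pascals from the van 't Hoff relation."""
    return i * molarity * R * temp

pi_pa = osmotic_pressure(M_SALT, I_IONS)
print(f"About {pi_pa / 1e5:.0f} bar")
```

The result is on the order of 30 bar, which is why a reverse-osmosis plant must pressurise seawater to tens of atmospheres before any fresh water passes the membrane.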
Desalination plants help secure the water supply of dry coastal areas
Digory Smith is a 3rd year undergraduate student at the Department of Chemistry
Since almost all of the world's water is in the oceans, efficient ways of desalinating it are important for our supply of drinking water in the future
Lines of Communication
FOCUS
BlueSci explores how the natural world communicates, from single cells to the birth of the digital era
The world is shrinking. We are more connected than we have ever been, and faster and more convenient ways of sharing our lives arrive by the minute. 'Communication' has been the buzzword of the last two decades, but the idea goes beyond online news feeds, beyond Skype and Facebook. Analyse the workings of the organic world at any level, be it the microscopic organelles of individual cells or vast, intricately structured colonies of organisms, and you will see that in every Tweet and Snapchat is reflected a process vital for life. Whilst it may seem an obvious statement that communication is the transfer of information, it wasn't until the late 1940s that this was formalised as 'information theory'. Proposed by Claude E. Shannon to mathematically quantify communication, this branch of probability has found applications in a huge number of fields, from economics to quantum physics. Furthermore, Shannon's insight made possible the development of mobile phones and the internet; it might be said that it was he who laid the foundations of the digital revolution. Shannon schematised communication as involving a source, a signal and a receiver. The fundamental idea is that whilst the quantity of information transmitted from source to receiver is that contained within the signal, the information communicated depends on the increase in the receiver's knowledge as a result of getting the message. If the receiver is fairly certain what the message is going to say, this increase is small and the information communicated is minimal, whereas if the receiver starts out ignorant, the information communicated can be much larger. The amount of information acquired is therefore an intrinsically subjective concept, dependent on the receiver's expectations of the message. Information theory helps quantify and compare the efficiency of different communication channels and codes.
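Shannon's point about expectations can be made precise: a message that occurs with probability p carries −log₂(p) bits of information. A minimal Python sketch (the probabilities here are illustrative, not from the article):

```python
import math

def surprisal(p: float) -> float:
    """Information content, in bits, of a message that occurs with probability p."""
    return -math.log2(p)

# A receiver who already expects the message with 99 per cent certainty
# gains almost nothing from hearing it...
print(f"{surprisal(0.99):.3f} bits")
# ...whereas a message the receiver thought very unlikely is highly informative.
print(f"{surprisal(0.01):.3f} bits")
```

A fair coin toss, with p = 0.5 for each outcome, carries exactly one bit, which is what makes the bit a natural unit of information.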
For example, digital text written in the English alphabet is typically transmitted in American Standard Code for Information Interchange (ASCII). Here, each character is assigned an eight-digit sequence of zeros and ones (A being 01000001;
FOCUS
Claude Shannon developed information theory, laying the foundations of the digital revolution
Certain bacteria luminesce in the specialised light organs of the Pineapple Fish
B 01000010). This is eight bits, or one byte, of information. The number of possible arrangements of such a sequence is 256, allowing 256 different characters to be encoded. Since English words are on average six characters long, communication systems using ASCII transmit about 48 bits of information per word.

Information theory tells us that a more efficient alternative would be to label whole words rather than letters. Assuming there are 500,000 words in English, we could assign a 19-bit label to every word, so that only 19 bits are needed to convey each word. And if we consider that words like 'the' appear far more often than words like 'hypochondriac', we can use this probability to construct an even more efficient code. Shannon arranged the English words according to the frequency of their use and divided the list at a 50 per cent threshold: words that accounted for half of all usage were left in the upper part, and the rest in the lower part. He repeated this procedure, giving a word a '0' every time it fell in the upper half and a '1' when it fell in the lower half. Because very frequent words are few, they ended up with shorter strings than less common words, producing a far more efficient code.

Information theory is also widely employed by intelligence services to develop techniques for communicating secrets securely. Before the 20th century, linguistic tools were used to encrypt messages, but now computational and mathematical methods make codes much harder to break. The most sophisticated of these 'cryptosystems' cannot be broken even if the adversary has unlimited computing power. An example is the one-time pad encryption, first used in WWII and then again between Moscow and Washington during the Cold War.
In this encryption, each character of the text is encrypted by the addition of a character from a secret random key (or pad) of the same length as the text. Without the key, it is impossible to decode the message.
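In a modern implementation the 'addition' is usually the XOR operation on bytes, which undoes itself when applied twice. A minimal sketch (the message and function name are invented for illustration):

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """One-time pad: combine every byte of the message with the
    corresponding byte of a random key of exactly the same length.
    XOR is its own inverse, so the same call also decrypts."""
    if len(key) != len(data):
        raise ValueError("the key must be exactly as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"MEET AT DAWN"
key = secrets.token_bytes(len(message))  # random, kept secret, used once

ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message  # the key recovers the text
```

The unbreakability rests entirely on the key being truly random, exactly as long as the message, kept secret, and never reused.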
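Shannon's word-halving procedure described above is essentially what is now called Shannon–Fano coding. A minimal sketch, with a four-word 'vocabulary' and frequencies invented for illustration:

```python
def shannon_fano(symbols):
    """Assign binary codes to (word, frequency) pairs, given in order
    of decreasing frequency, by repeatedly splitting the list as close
    to 50 per cent of total usage as possible."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(freq for _, freq in symbols)
    running, split, best = 0, 1, total
    for i, (_, freq) in enumerate(symbols[:-1]):
        running += freq
        gap = abs(2 * running - total)  # distance from an even split
        if gap < best:
            best, split = gap, i + 1
    # Words in the upper (more frequent) half get a '0', the rest a '1'.
    codes = {w: "0" + c for w, c in shannon_fano(symbols[:split]).items()}
    codes.update({w: "1" + c for w, c in shannon_fano(symbols[split:]).items()})
    return codes

words = [("the", 7), ("of", 4), ("to", 2), ("hypochondriac", 1)]
codes = shannon_fano(words)
assert codes["the"] == "0"            # the most frequent word costs one bit
assert codes["hypochondriac"] == "111"  # the rarest costs three
```

Codes built this way are prefix-free: no word's code is the start of another's, so a stream of bits can be decoded unambiguously.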
In the natural world, information transmission is present at all levels of organisation, from molecular or electrical communication between single cells to visual, auditory, tactile and even electromagnetic communication between whole organisms. If the information capacity of a signal is defined by the number of possible messages it can convey, communication systems in living organisms cover a wide range of capacities. For example, individual sensory neurons in the human nervous system can only communicate the intensity of a single stimulus at a particular location, which has relatively few possible states. Communication between neural networks in the cerebral cortex, by contrast, has a much higher information capacity, capable of representing whole objects or ideas.

One should be wary of implying that high-capacity communication systems are more important than those with a low capacity. Take bacteria, for example. By definition these are single-celled organisms, but perhaps we shouldn't think of them as lone rangers so much as microscopic ants contributing to a colony. Bacteria 'talk' using a molecular system of communication known as 'quorum sensing', which is only switched on when the bacterial colony has grown to reach a critical population density. In this system, the molecular signal transmitted by each cell contains little information but can be used to activate a range of functions with maximum efficiency – some of which have important consequences for other species, including humankind. Quorum sensing enables bacteria to coordinate their activity, in order to achieve functions that are only effective when carried out in unison by a large population rather than by individuals.
For example, aggressive and defensive mechanisms, such as the production of toxins, would have negligible effects if performed by only one or a few bacteria, but can be devastating when carried out en masse: the effect of cholera toxins (produced by Vibrio cholerae bacteria) is just one of many examples. Reproductive mechanisms, such as the transfer of DNA between cells (known as conjugation) and the production of spores, are also triggered by quorum sensing, these being somewhat useless pastimes for the solitary bacterium.

Perhaps the most aesthetically pleasing example of quorum sensing is found in the organism in which the phenomenon was first discovered. Vibrio fischeri is a bacterium found throughout the oceans, though in barely noticeable amounts. It luminesces, but only when its population is dense, as happens when it lives inside the specialised light organs of certain deep-sea organisms. The relationship between the two species is symbiotic: the bacteria trade their light in return for the sugars and amino acids provided by the host. This arrangement has evolved twice, in two different host species, which use the luminescence for quite different purposes. In the monocentrid fishes, such as the pineapple fish, the light is used to assist feeding at night, illuminating the small shrimp that are their prey. Conversely, in sepiolid squid, such as the Hawaiian bobtail squid, the luminescence is used to cancel out their silhouette, so as to hide them from prey.

However, quorum sensing also has a dark side. The bacterium Pseudomonas aeruginosa is so virulent that it causes disease in organisms from the weed Arabidopsis thaliana all the way to humans, in whom it is responsible for the majority of deaths in cystic fibrosis patients. It is renowned for forming biofilms when concentrations of the bacteria get too high. Biofilms are groups of cells attached to a surface, and their formation has serious consequences, as it makes bacteria notoriously hard to kill with antibiotics. Quorum sensing is not invulnerable to disruption, however. Bacteria disrupt it themselves, destroying other bacteria's signalling molecules and rendering them unable to communicate. Synthetic inhibitors of quorum sensing would have widespread clinical applications, as biofilm formation could be prevented, making bacterial infections easier to tackle with antibiotics. A more everyday application would be to put these quorum-sensing blockers into toothpaste and so prevent the build-up of plaque. Now that should bring a smile to your face!

Chemical signals, known as pheromones, are widely used in nature, not just by bacteria. Often their purpose is to define territories through scent marking and to attract mates by signalling fitness or readiness to mate. The ring-tailed lemur, however, uses scent to compete for mates: instead of signalling to the opposite sex, males signal their fitness to other males. They anoint their tails with secretions from the scent glands on their wrists, holding them above their heads and waving them as they approach a competitor in an attempt to drive him off. Pheromones are also used to guide foraging, with ants following pheromone trails left by others to find food sources. This can lead to the formation of a 'circle of death': if two ants follow each other, or one ant follows its own trail, they move in a circle; other ants from the colony, following their trails, join them, and the ants circle until they die of starvation or exhaustion.

Ring-tailed lemurs anoint their tails with scent and wave them above their heads in contests over mates

Of course, many of nature's most striking displays are not chemical but visual signals, which are commonly used for species recognition and mate attraction. The distinctive red face of the bald-headed uakari monkey accomplishes both these purposes, with the exact hue of a male's face influencing his mating success. Whilst neither intentional nor conscious, this signal constitutes communication, as it transmits information about the male's physical health and genetic fitness. Sick animals have paler faces, and those most resistant to malaria have the brightest, which may explain why female monkeys have evolved to prefer males with brighter faces.

The bright red face of the male bald-headed uakari monkey signals its quality of health to potential partners

Visual signals are also used more purposefully, one example being the waggle dance of the honeybee. The signal communicates information about the environment, indicating the distance and direction of a food source, and thus guides its receivers to the resource. Some have argued that this sophisticated communication system is a language: it is governed by rules, symbolic, and always carried out in front of an audience. Others have countered that, unlike human language, it has no syntax, is almost exclusively genetically encoded, and its symbols are directly related to what they represent.

Unlike visual signals, which take time to be noticed, vocal signals enable swift communication and so are often used to signal threats. The alarm calls of vervet monkeys are predator-specific, allowing individuals to respond to a threat appropriately: a leopard warning call causes them to take to the trees, whereas an eagle warning call causes them to run into dense bush. Many species, including birds, gibbons and wolves, use song as a means of marking territorial boundaries, particularly if the information must be communicated to distant receivers or in environments where visual signals are hard to see, for example the dense rainforest canopy. Vocal calls are also used to coordinate group behaviour, as when a pack of African hunting dogs pursues prey. Here, communication of information enables cooperation, acting as a mechanism of unification.

This is also the usual purpose of tactile signals, an important means of social bonding for many animals. In wolves, behaviours such as muzzle-licking are used to signal submission in accordance with the pack hierarchy, while primates groom one another to form and reinforce alliances.

Amongst animal communication systems, one stands out: our very own. Several properties distinguish human language from even the most complex of other animal communication systems. The first and most obvious difference is that we can talk about virtually anything. As ordinary as discussing the weather or what you had for dinner last night might seem, the fact that we can expound on such diverse topics is remarkable. Animal communication usually consists of a limited set of discrete messages with fixed meanings, whereas language is entirely 'generative': its units can be recombined in infinite ways to convey information limited only by the imagination of the speaker.

Although words are arbitrary symbols, it is possible they play a key role in linking our visual, auditory and tactile perception of the world. For instance, our idea of 'a cat' relies on mental representations of what a cat looks like, how it is meant to sound and what it feels like. It has been argued that the word 'cat' itself allows us to combine these attributes to form a cohesive concept used for thinking and communicating. Perhaps surprisingly, it seems that language can also shape the very structure of our thoughts: certain words carry unrelated connotations that can become associated with the more obvious aspects of the word's meaning. One example is in languages that assign a gender to otherwise genderless words. In Spanish and German, inanimate objects have genders, some of which are reversed between the two languages. Spanish speakers asked to grade the objects these words refer to on a gender scale gave 'manly properties' to those whose words are masculine in Spanish, while German speakers thought of the same objects as having feminine characteristics.

Forkhead box protein 2 is thought to have been crucial in the evolution of the human capacity for language
Human language is not only impressive because of the complexity of meaning each word can convey,
Language is generative: words can be combined and recombined to convey any message
but because of the information communicated by syntax: the organisation of words into sentences. For each language, syntax is governed by a set of well-defined rules: its grammar. Knowledge of these rules allows us to decode the information contained in the structure of a sentence. For example, 'Jane hits John' means something quite different from 'John hits Jane', and our knowledge of the English subject-verb-object rule is what lets us tell the two apart.

Interestingly, the brain appears to process syntax unconsciously. Researchers have recorded the electrical activity of the brains of human participants, a technique known as EEG, while they listened to sentences, some of which were grammatically incorrect. At the same time, the participants had to perform a complex task designed to distract them. Although most people showed characteristic 'surprise' responses to almost all the incorrect sentences, when explicitly asked about them they often did not spot the errors, indicating a lack of conscious awareness. It seems that comprehending language is an automatic process.

However remarkable language is, whether it arose as a complete novelty in humans or as a modification of a precursor animal communication system remains a hotly debated issue. Although the brain mechanisms behind language are far from understood, it is at least clear that human speech employs neural circuits different from those used for vocalisation in animals. Quite excitingly, the genetic basis of humans' capacity for language is beginning to be uncovered. For instance, mutation of one specific gene appears to be common in sufferers of a certain language impairment. This gene encodes forkhead box protein 2 (FOXP2), a protein responsible for activating many other genes. While it is not known exactly how FOXP2 might shape the neural circuits that allow language acquisition, it seems to be involved in the maturation of the brain and the development of the capacity for language.
FOXP2 belongs to a highly conserved family of proteins; the human version is very similar to that of non-human primates. However, most of the differences that do exist arose around the time that modern humans
appeared. This seems to have been one of the events that shaped the evolution of the human brain.

Impressive though human language is, one cannot ignore the fact that the song 'You say it best when you say nothing at all' has been a three-time chart topper in the past four decades. Clearly, non-verbal communication has been a prevalent subject even in pop culture, and justifiably so: an estimated two-thirds of all our communication happens without language. Non-verbal communication is the wordless transmission of information between a sender and receiver(s). The means at our disposal include the pitch and tone of voice (paralanguage), the eyes (oculesics), touch (haptics) and gesture (kinesics). This form of communication is among the first we absorb from our environment, certainly before we adopt the use of words and a system of language.

Paralanguage deals with the intangible factors of speech: tone, rhythm, pitch and volume. In most cases it is these factors that portray the emotion of a spoken sentence. Speech can therefore convey mood and feeling without the need for description, whereas in text the absence of paralanguage sometimes makes it difficult to discern intention; hence the familiar likelihood of misunderstanding text messages and e-mails as opposed to voice conversation, even when they are identical in content.

Another communication channel particularly integral to human expression is eye contact. In fact, it is thought that the white part of the eye, the sclera, evolved in response to the social demands of our existence. The human sclera is more conspicuous than that of other animals and may have made communication via gaze easier to interpret, aiding social interaction and learning in pre-verbal children. It is a widely accepted idea that emotions like happiness, sadness or anger can be better gauged from the eyes than from a person's smile or frown.
It is a common notion that the eyes are a perfect giveaway of lies, as people with something to
hide exhibit minimal eye contact. However, this may not always be true. A study of police interviews observed that deceptive and truthful suspects maintained the same amount of eye contact. Indeed, the liars, by blinking less often, sometimes appeared more convincing than those telling the truth.

The basis of non-verbal communication is an open question, and debate often centres on the age-old nature versus nurture dichotomy, though in reality both genes and environment are likely to play a role. Pre-verbal infants communicate through gestures and facial expressions, only some of which are learnt from their environment. Identical twins raised apart display similar body language. Even non-human primates share some facial expressions with humans. The limbic system, the part of the brain responsible for instinctive responses, emotions and social judgement, may to an extent be pre-programmed to produce certain aspects of non-verbal communication. On the other hand, it appears that much of our non-verbal communication is influenced by culture. The same non-verbal expression may mean different things in different social groups, so a shared understanding between the sender and receiver becomes crucial. For instance, while eye contact may indicate confidence, honesty and attentiveness in some cultures, others consider bold eye contact disrespectful. Similarly, kissing and even handshakes assume different levels of acceptability in different societies.

Non-verbal communication plays a bigger role in our day-to-day lives than we think, as it provides a means by which people may judge our personality. In today's job market this idea is gaining increasing interest, as it becomes apparent that one needs not only to 'talk the talk' but also to 'walk the walk' to gain a competitive advantage. Whether we are aware of it or not, it seems we are always communicating through these subtle channels.
Try closing your eyes, standing still or walking out of the room; as long as there is an audience, every action will have communicated some information. We can choose to stop talking, but we never stop communicating.

Ornela De Gasperin is a 4th year PhD student in the Department of Zoology
Nathan Smith is a 2nd year Natural Sciences student at Churchill College
Tam Stojanovic is a 2nd year Natural Sciences student at Pembroke College
‘Body language’ is a powerful tool for communicating feelings and intentions
The conspicuous white of the human eye is thought to have evolved to aid communication via gaze
Ana Leal Cervantes is a 3rd year PhD student in the Department of Haematology
Shirin Ashraf is a 2nd year PhD student in the Department of Immunology
No Monkey Business
The Scopes Monkey Trial took place in 1925 in Tennessee
On a hot summer's day in Dayton, Tennessee, a most heated debate over the role of religion in dictating scientific teaching was taking place on the lawn outside the courthouse. Watched by a crowd of unprecedented size, two of the most formidable figures on either side of the evolution debate each tried to discredit the other. Immortalised in film, the so-called 'Scopes Monkey Trial' has become a key turning point in the struggle between religion and scientific teaching in schools.

The stage for the trial was set in 1925, when the Butler Act was passed in Tennessee. Championed by State Representative John Butler, previously the head of the World Christian Fundamentals Association, the Butler Act aimed to prevent the teaching of evolution in publicly funded classrooms (though strictly it only prevented the direct statement that humans were not created by God as laid out in Genesis). However, despite being signed into law by Governor Austin Peay, it remained unenforced throughout the state. Indeed, the biology textbook required for use by all biology teachers in the state included a chapter detailing the theory of natural selection, including the evolution of humans.

As the well-known version goes, a high school teacher in a small Tennessee town was then brought to trial for teaching human evolution to his biology class. In fact, the American Civil
Sophie Harrington revisits the Scopes Monkey Trial of 1925
46 per cent of Americans believe God created man in their present form
Liberties Union offered to finance a test case against the law, hoping to have the law struck down as unconstitutional by the courts. The choice of John Scopes as the teacher on trial was
essentially arbitrary. The original idea for Scopes to take on the test trial came from George Rappleyea, a community leader who believed the trial would bring much-needed publicity to the small town. Rappleyea enlisted members of the local school board to support his idea, arguing: "As it is, the law is not enforced. If you win, it will be enforced. If I win, the law will be repealed. We're game, aren't we?" Actually a physics teacher, Scopes was approached by the group and asked to plead guilty to teaching evolution during a day spent substituting in a biology classroom. While Scopes wasn't even convinced that he had, in fact, taught his students about the evolution of man, he agreed to go along with the test case.

Although the trial was initially going to be handled by local prosecutors, it soon became clear that each side would try to bring in the most high-profile teams possible. For the prosecution, the clear choice was William Jennings Bryan. A three-time presidential nominee and former Secretary of State, Bryan was also renowned as a staunch Presbyterian and supporter of the Butler Act. His bombastic rhetoric and high national profile ensured that the case would grow in controversy. Clarence Darrow, a well-known agnostic, offered to head the defence team, roused by the threat he saw posed to rational scientific inquiry and teaching. Two such strident minds seemed poised to raise the trial beyond a mere discussion of constitutionality to a pitched battle between two opposing forces, each damaging the world order in the eyes of their opponents.

As the trial got underway, the intense media attention that suddenly focused on the Scopes Trial and on small-town Dayton was exactly what Rappleyea and his co-conspirators had hoped for. Nationally there was unprecedented interest in the trial, framed as the 'Monkey Trial' and picked up by all the main national and international newspapers.
As curiosity-seekers flooded Dayton, filling the courthouse to capacity and spilling out onto the lawn, the trial was broadcast live over national radio, a first for any trial. The nation really had been gripped by Scopes fever.
The choice of John Scopes as the teacher on trial was essentially arbitrary
Initially the defence had planned to argue that the Butler Act stripped teachers of individual freedoms by preventing them from teaching evolution, and was as a result unconstitutional. This soon changed into arguing that the theory of evolution and the creation of man in the Bible were not contradictory. As the trial progressed, the defence increasingly attacked the position of the Bible in scientific thought, as well as the veracity of the Bible itself. After Bryan spoke against evolution, bemoaning the fact that humans were "Not even [descended] from American monkeys, but from old world monkeys", Dudley Field Malone, a member of the defence team, gave what was widely considered the triumphant speech of the trial. Malone argued against the place of the Bible in scientific inquiry, saying it should be limited to the study of theology and morality. There was no space for a 'duel' between the Bible and evolution, as "there is never a duel with the truth."

It has been suggested that Bryan complied with the next, very outlandish, request from the defence as a way to recover his standing after Malone's verbal drubbing. Having called Bryan to the stand to demonstrate the foibles of relying on the Bible for historical accuracy, Darrow proceeded to grill him on many aspects of scripture, from the creation of Eve from Adam's rib to where Cain found his wife, in an attempt to demonstrate its unscientific nature. Or, as Bryan put it, "to cast ridicule on everybody who believes in the Bible."

Despite such dramatic scenes in the court, sparking discussion around the United States, the trial itself ended on an underwhelming note. Scopes was found guilty of teaching evolution and fined $100 by order of the judge. The case was then appealed by the defence and brought before the Supreme Court of Tennessee. This time the defence's argument focused on the less polarising but perhaps more crucial issue of the Butler Act itself, arguing that it unconstitutionally protected one religion above others. However, the defence failed to achieve the result they hoped for: Scopes's guilty verdict was overturned on a technicality, preventing further appeal to a higher court. It would take until 1967 for the Butler Act to be repealed, and until the following year for the Supreme Court of the United States to overturn other such bans. Despite this, the trial has often been hailed as a victory for scientists, particularly after the dissection of Bryan's belief in the historical accuracy of the Bible.

In 1925, evolution was put on trial in Tennessee, and what followed was one of the most dramatic public duels between those favouring rational scientific discourse and those supporting the imposition of religious views into scientific teaching. While this was almost 90 years ago, little has changed since. In 2012, 46 per cent of Americans agreed with the statement "God created humans in their present form", whilst 15 per cent agreed with the statement "Humans evolved, but God had no part in the process." It is easy to brush aside the Scopes trial as an artefact from a less educated era, but it seems the beliefs of many remain stoically opposed to scientific evidence.

Darwin's ideas of evolution have changed the way we see ourselves
Sophie Harrington is a 2nd year Biological Natural Scientist at King's College
Designer Babies

Maria Mascarenhas discusses the issues surrounding Pre-Implantation Genetic Diagnosis
Removing a cell for analysis at this stage will not affect the embryo’s development
Imagine you wanted to have a child, but had a serious genetic disorder that would be passed on to that child. What would you do?

In 1989, a couple from the UK who were at risk of passing a severe form of mental retardation on to their children became the first parents to use pre-implantation genetic diagnosis in order to avoid passing on this condition to their child. Although in vitro fertilisation, the technique behind 'test tube babies', had already been in use for 12 years at that time, it was the pre-implantation test performed by Professor Handyside and his team at the Royal Postgraduate Medical School in London that allowed for an early diagnosis and eliminated the need for termination of a pregnancy. In this diagnostic test, the embryos derived from the fertilisation procedure are grown for two to three days in the laboratory until they consist of eight cells, of which one or two are removed. These cells are analysed to detect particular genetic abnormalities, and the unaffected embryos are transferred to the womb, where they are allowed to develop.

In the UK, pre-implantation genetic diagnosis is governed by the Human Fertilisation and Embryology Act 2008. While initially regulated on a case-by-case basis, since 2009 the Human Fertilisation and Embryology Authority has operated a condition-by-condition system whereby a clinic licensed for the procedure can carry out testing for any condition previously approved. In deciding whether to approve a condition for testing, the authority takes into consideration the age of onset, the symptoms of the disease and their variability, the penetrance, and the availability and efficacy of treatments. The approved conditions fall into three broad categories of genetic abnormality: gene defects, chromosomal abnormalities and sex-linked conditions. Currently, pre-implantation genetic diagnosis tests for early-onset Alzheimer's disease, cystic fibrosis, Down's syndrome, haemophilia and muscular dystrophy, among others, totalling 263 conditions. The test also covers faulty genes which significantly increase the chance of an individual developing a serious disease, for instance mutated versions of the BRCA1 gene, which lead to a higher susceptibility to breast cancer. Another application of pre-implantation genetic diagnosis has been in cases of 'saviour siblings', to ensure that the resulting child is a
compatible donor for an existing sick sibling.

Pre-implantation genetic diagnosis raises a myriad of ethical and moral concerns. When dealing with an area of science involving the generation of human life, heated discussion is bound to arise. Whilst the majority of the general public views pre-implantation genetic diagnosis as worthwhile and justifiable if it alleviates suffering and can save lives, a minority considers it interference with nature and rejects the procedure. The most controversial facet of the procedure is its potential future use in creating 'designer babies'. Embryos could be selected for certain physical and behavioural traits, which could lead to an attempt to establish a so-called 'perfect' society and bring about another form of eugenics.

Unlike the UK, many countries lack any regulatory framework for this pre-implantation test. In the US, 9 per cent of clinics now offer embryo sex selection for non-medical reasons. In Sweden, gender selection has been legal since 2009. The American consumer genetics company 23andMe was recently granted a patent that includes the possibility of embryo screening for physical traits, though the company vehemently denies having any plans to explore this section of its patent.

Pre-implantation genetic diagnosis has helped many families and prevented children from suffering. But despite the beneficial impact this procedure has had, the question remains: who can stop these countries and companies from creating 'designer babies' on demand? The current absence of an answer may yet lead society down a very dangerous path.

Maria Mascarenhas is a 4th year PhD student in the Department of Haematology
Counting Out Loud

Matthew Dunstan explores the complex interplay between language and numbers
The Oksapmin people count on 27 body parts
To many of us, the idea of a number is inseparable from its representation in language. It is difficult to think of the abstract concept of 'two' without calling to mind the actual word two. Our overarching sense of what two is includes both its numerical definition and the means by which this can be expressed to others. However, the connection between language and mathematics shows considerable variation between languages, perhaps indicating that the etymology of numbers is sometimes more arbitrary than organic.

One of the fundamental principles that most languages are built upon is the base 10, or decimal, system, a way of dealing with numbers attributed to our having 10 fingers and thumbs with which to count. In English, the words for the numbers 13 to 19 are formed with 'teen', an older form of ten, added to three, four, five and so on. The oddities in this list, eleven and twelve, can be traced to similar origins in the old German words ein-lif and zwo-lif, lif meaning ten. Similarly, the words for twenty, thirty, forty and so on are derived from contractions of 'two tens', 'three tens' and 'four tens'.

The decimal system is thought to be based on our 10 fingers and thumbs

However, the decimal system is far from universal. The most widespread alternative is the vigesimal, or base 20, system. Languages that use this system appear all around the world, from Mayan to Danish. As vigesimal counting is most likely based on using both hands and feet, its adoption by diverse civilisations is not a surprise. Interestingly, the implementation of the base 20 system often comes with localised quirks and exceptions depending on the language. In the traditional form of Welsh, for example, despite it being a vigesimal system, 15 ('pymtheg') acts as a pivot number, with 16 being 'un ar bymtheg', literally 'one on fifteen'. A similar scheme holds for 17 and 19 ('two on fifteen' and 'four on fifteen' respectively), but strangely enough 18 introduces a further exception, being 'deunaw' ('two nines') rather than 'three on fifteen'.

Sometimes these mixtures can exist even within a single language. The classic example is French, with its confusing combination of decimal and vigesimal systems. In modern official
In French, the numbers up to 69 follow the decimal system. Sixty, for example, is ‘soixante’ (meaning six tens). After this point, however, French switches to a vigesimal system for 70 to 99 – seventy is ‘soixante-dix’ or sixty-ten, and eighty is ‘quatre-vingts’ or four-twenties. The mixture is most probably due to an attempt to standardise to a decimal system from a formerly vigesimal one, which some scholars attribute to French’s links with older Celtic languages, such as Welsh.

While the two dominant systems around the world most probably evolved from the digits on our hands and feet, this doesn’t mean that systems based on other physical attributes do not exist. An extreme example comes from the Oksapmin people of New Guinea, who use a base 27 counting system derived from the names of the 27 different body parts they use for counting. They start with ‘tip^na’, or thumb, for one, going along the arm (six is ‘dopa’, or wrist), to the head (twelve is ‘nata’, or ear), before going down the other side (sixteen is ‘tan-nata’, or ear on the other side), ending with ‘tan-h^th^ta’, or pinky on the other side, for twenty-seven.

While most people refer to mathematics and science when they speak of the wonders of numbers, there is much complexity and beauty in the language we use to express our numeric ideas.

Matthew Dunstan is a 3rd year PhD student at the Department of Chemistry
The Women Behind the Science
Sarah Smith considers the impact of the founding mothers of computer science
Science Museum London
The Analytical Engine was the first fully automatic calculating machine
Behind the Science
There are a handful of well-known female scientists most of us will have heard of, including Marie Curie, Dorothy Hodgkin and Florence Nightingale, all of whom contributed enormously to science and medicine. But what about the unsung female pioneers of computer science?

Computer science, and science in general, is commonly considered to be dominated by men. But its technology is used by an enormous proportion of the population every day. What would we do without our laptops? The Internet? Or, God forbid, our smartphones? Computing technology has exploded over the last century to the point where many of us cannot comprehend life without it. Some people may be surprised to learn that a number of inspirational female scientists played their part in this story.

The only legitimate daughter of the poet Lord Byron, Ada Lovelace was nudged into a career in mathematics by her mother – whom Byron left when Ada was only a month old. Ada’s mother, bitter about Byron’s betrayal, deliberately steered Ada away from the arts. Little did she know how this fork in the road would influence life today. Lovelace met Charles Babbage, a famous Cambridge mathematician, in 1833, and a fruitful friendship blossomed. Babbage asked her to translate from French into English an article on the Analytical Engine, a theoretical ‘clockwork computer’ he was working on. At a time when women were not admitted to university (Girton College for women was only established in 1869), Lovelace was a keen mathematician, and she was daring enough to add some ideas of her own. Buried among them, in ‘Note G’, was an algorithm that would allow the Analytical Engine to compute a complex series of numbers: the Bernoulli numbers. In essence, she had written the first computer program long before a physical computer had even been invented.

Unfortunately, Babbage’s Analytical Engine never came to fruition. But Lovelace got the recognition she deserved over a hundred years later, when Alan Turing, a World War II code breaker at Bletchley Park, addressed her argument against the possibility of artificial intelligence, formalising her stance as ‘Lady Lovelace’s Objection’.

Google
Google created a doodle for Ada Lovelace to celebrate her 197th birthday
During World War II, code breaking was also taking place on the other side of the Atlantic, and with large numbers of men on the front line, female mathematicians were drafted in to help. Agnes Driscoll was one of these women. The daughter of a music teacher, Driscoll studied mathematics and physics at Otterbein College from 1907 to 1909. She worked as a cryptanalyst during both World Wars, but her biggest achievement came in 1942. Six months after the attack on Pearl Harbor, the Japanese were planning another large attack, later known as the Battle of Midway. Driscoll and her team were instrumental in breaking the JN-25 cipher – the Imperial Japanese Navy’s most complex code, used for all its military operations. This breakthrough ultimately led to Japan’s worst naval defeat in 350 years.

Another woman to enlist in the US Navy, in 1943, was Grace Hopper, affectionately known as ‘Amazing Grace’. Her PhD in mathematics from Yale University singled her out to work on one of the earliest computers, the Mark I. During this time Hopper and her team developed a strong belief that computer programs should be written in a language closer to English rather than the abstract machine code previously used. After the war ended, Hopper built on these ideas while working for the Eckert-Mauchly Computer Corporation, and invented the first compiler. A compiler is an overarching program that can translate instructions written in a particular programming language into the machine language that a computer’s processor uses. Hopper realised that this would allow programmers to write in something closer to English and still produce executable programs, vastly increasing the number of applications the technology could be used for, particularly in the business sector. All of today’s compilers are derived from her original concept.

These three women were headstrong and seemingly unintimidated by the men who surrounded them. Although the numbers are still skewed towards men, more women have taken on roles in the male-dominated worlds of mathematics and computer science. Figures from the National Science Foundation suggest that the proportion of doctoral degrees in mathematics earned by women rose from 13 per cent in 1977 to nearly 30 per cent in 2006. However, as with the majority of basic sciences, the number of women in tenured positions is still depressingly low; in 2006 only 17.4 per cent of mathematics professors in the US were women. These figures further highlight Grace Hopper’s achievement of gaining her PhD in the early 1940s, when very few women were doing so.

One strategy the scientific community is using to redress the gender balance in academia is to support the Athena Swan charter. This charter was set up in 2005 with the support of the Royal Society, the Biochemical Society and the Department of Health. Universities are granted Athena Swan awards based on their willingness and commitment to tackle both personal and structural obstacles that prevent women from moving from PhD programmes to permanent academic positions. Measures include providing childcare vouchers, offering flexible working hours, and taking career breaks into consideration when assessing job applications. Although this is a step in the right direction, it is clear that employers are not solely to blame for gender inequality in senior roles; often fewer women apply for each position advertised, so the odds are against a woman getting the job even if all CVs are equal. Having children isn’t always a consideration either; often a lack of confidence, rather than physical obstacles, prevents capable women from applying for jobs. This is clearly a societal problem, which will take time to fix, and the addition of an addendum after a job advert ‘encouraging women to apply’ is unlikely to make a large impact.

In many ways, both World Wars gave women, such as our founding mothers of computer science, a chance to prove themselves in what is commonly perceived as a man’s world, but work still needs to be done to redress the inequalities in senior scientific positions. These inspirational women and their achievements should encourage all of us to pursue gender balance in computer science and science as a whole, since it is clear that men and women can make powerful contributions that affect all of society.

Computer processors are now used in hundreds of everyday items

Grace Hopper is credited with inventing the first compiler and coining the phrase ‘debugging’

Athena Swan
Athena Swan awards are given to universities committed to advancing women’s careers

Ioan Sameli
Sarah Smith is a 4th year PhD student at the Wellcome Trust Sanger Institute
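Hopper’s compiler translated near-English statements into machine instructions. As a rough, purely illustrative sketch (the statement syntax and instruction set below are invented for this example, and bear no relation to her actual compiler or to COBOL), the core translation idea looks like this:

```python
def compile_statement(statement):
    """Translate one near-English statement into instructions
    for an imaginary single-register machine."""
    words = statement.split()
    if len(words) == 4 and words[0] == "ADD" and words[2] == "TO":
        # ADD x TO y: load y, add x, store the result back in y
        return [("LOAD", words[3]), ("ADD", words[1]), ("STORE", words[3])]
    if len(words) == 4 and words[0] == "MOVE" and words[2] == "TO":
        # MOVE x TO y: copy x into y
        return [("LOAD", words[1]), ("STORE", words[3])]
    raise ValueError(f"unknown statement: {statement}")

print(compile_statement("ADD price TO total"))
# [('LOAD', 'total'), ('ADD', 'price'), ('STORE', 'total')]
```

A real compiler does far more (parsing, optimisation, code generation for an actual processor), but this step – turning a human-readable statement into machine operations – is the essence of Hopper’s insight.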
Cost and Conservation
Martha Stokes discusses why it pays to conserve biodiversity
joellehernandez
Activities such as illegal logging cause habitat destruction which can wreak economic havoc
Perspective
The loss of biodiversity is occurring at a rate which leaves conservationists reeling. 70 per cent of fisheries are overfished, tropical dry forests are in decline, and extinction rates are faster than ever. But why is biodiversity worth conserving? Ethically, there are many arguments, some rooted in religion, others in culture. Many believe that organisms have an intrinsic value that cannot be determined financially. However, in our economically driven society it seems this is not enough. To convince those who need convincing, the monetary value of biodiversity needs to be calculated.

Global markets such as food, biotechnology and ecotourism profit hugely from the direct use of natural resources. $87 billion worth of natural products are sold each year in the US alone, many of which are unsustainably sourced. But is the cost of resource exploitation worth the profit? Research carried out by Andrew Balmford and colleagues in 2002 suggests it is far from it. They compared the total economic value of several ecosystems before and after conversion to farmland, and in every case they found that sites had greater monetary value before they were converted. One mangrove forest was worth a staggering 70 per cent more before conversion to a shrimp farm. When taking into account the indirect benefits of healthy ecosystems, ranging from nutrient cycling to the generation of rainfall patterns, their total value is estimated at $33 trillion.

Conservation can also directly benefit commercial enterprise. In 2003 it was shown that the yield of coffee beans from the South American plants Coffea canephora and C. arabica increased if there was a higher diversity of visiting bees. This occurred when the coffee plantation was close to a forest, where bee species diversity is highest. This finding suggests that the conservation of even a small patch of forest would significantly increase the value of the coffee harvest, an attractive prospect for an industry worth around $70 billion. Despite this, deforestation is still a huge problem affecting South American rainforests, where most of the world’s coffee is grown.

Although the overall economic benefit of conservation is much higher than that of exploitation, these benefits are often external to the potential profiteers. For businesses, long-term benefits extending to the community provide less of an incentive than immediate private profit. The American ecologist Garrett Hardin described this social-private conflict as the ‘tragedy of the commons’. He imagined an unregulated area of common ground shared equally by several sheep herders. One herder would quickly realise that by putting one more sheep on the common they could produce more wool. This herder would gain a high private benefit with only a slight reduction in grazing for the other sheep. Soon, each herder would do the same, and the common resource would become depleted. In this way, selfish interest can act contrary to community benefit, such that, in the end, everyone is worse off.

Another major problem is caused by ‘perverse subsidies’. These are sums of money given to failing industries by the government to help them overcome their financial problems in the marketplace. Although these subsidies cost an estimated $2 trillion annually across the top six most subsidised areas (farming, fisheries, forestry, energy, water and transport), a government that cut them would become very unpopular. Perverse subsidies therefore continue to fuel further investment into environmentally damaging practices. Around $6.5 billion is put towards conservation annually, yet this covers only 7.9 per cent of terrestrial land and 0.5 per cent of marine area. Balmford and colleagues calculated that redirecting less than 5 per cent of existing perverse subsidies could fund a globally effective conservation network.

With a rapidly growing human population, it is essential that we think more carefully about how we can provide for our future. A change in policy and attitude is needed. Without conserving essential pollinators, rare plants that could cure diseases, or forests that generate rain patterns, we might soon see the real cost of foregoing global benefit for private profit.

Martha Stokes is a 3rd year Natural Scientist at Christ’s College

arinnap
Maintaining bee species diversity is important for the pollination of many commercial crops
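Hardin’s argument can be made concrete with toy numbers (invented purely for illustration): suppose an extra sheep earns its herder 10 units of wool but causes 15 units of grazing damage, shared equally among 5 herders.

```python
HERDERS = 5
WOOL_PER_SHEEP = 10     # private benefit of one extra sheep (toy units)
DAMAGE_PER_SHEEP = 15   # total grazing damage, borne by everyone

def private_payoff():
    """Net gain to the herder who adds the sheep."""
    return WOOL_PER_SHEEP - DAMAGE_PER_SHEEP / HERDERS

def community_payoff():
    """Net gain to the group as a whole from that same sheep."""
    return WOOL_PER_SHEEP - DAMAGE_PER_SHEEP

print(private_payoff())    # 7.0 -> adding a sheep is rational for the individual
print(community_payoff())  # -5  -> yet the commons as a whole loses
```

Every herder faces the same positive private payoff, so everyone adds sheep until the pasture is ruined – exactly the divergence between private and community benefit that the article describes.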
Vissago
Luke Jerram
This glass sculpture depicts a T4 bacteriophage, a virus that infects Escherichia coli bacteria. Bacteriophages were once used as an alternative to antibiotics in the former Soviet Union. Luke Jerram, the sculptor of the T4 bacteriophage, has created many different pieces representing viruses as part of his Glass Microbiology collection. Jerram hopes to offer the public an accurate model of these virus particles: they are frequently depicted in colour, when in fact they are smaller than the wavelength of visible light and so have no colour at all. The sculptures are featured in museums around the world, including the Metropolitan Museum in New York, and are used as illustrations in textbooks and medical journals.

Ashley Wilson is a 3rd year PhD student in the Faculty of Education.
Pavilion
Neurophilosopher Sergio Lainez Vicente catches up with science writer and former developmental neurobiologist Mo Costandi, author of the Guardian blog Neurophilosophy.
mo costandi
How did you decide to start a career as a science writer?
I’ve been a full-time freelance science writer for about five years. It all happened by accident. I got kicked off my PhD and started working as a security guard, doing long shifts. It was very boring, so I set up a neuroscience blog to pass the time. People started reading it, and eventually I began getting occasional emails from magazine editors, saying that they liked my blog and asking if I’d like to write something for them. I started getting more and more work, up to the point where I was earning two salaries, so I quit the security job. It was partly about being in the right place at the right time, because I set up the blog when the Internet was really beginning to transform journalism.
Mo Costandi, science writer and former neurobiologist
Which skills do you consider essential to becoming a successful science writer?
There’s a tendency to think that you have to use very simple language without much technical terminology, but that’s not necessarily true. I do think it’s very important to enjoy what you’re doing, and of course, it’s important to have some understanding of what you’re writing about.

Do you prefer long posts that require research, or shorter, more direct ones?
Most of my blog posts are around one thousand words on average. I do quite a lot of reading around each post and usually include some background information. I like to go into a lot of detail and link to as many sources as I can. I also write news stories for Nature, Science and others, and I do enjoy that. Lately, though, I’ve started to enjoy it less and less, because it usually involves working to a tight deadline, and the stories are very short, so you haven’t got enough space to go into any detail. Now I’m focusing more on feature articles, which are much longer. There’s more room to go into detail.

As a writer you get to have a broader view of science compared to people doing research, who tend to focus on extremely specific areas.
That’s one of the reasons I got kicked off my PhD.
A Day in the Life
I wasn’t really enjoying it because I was too worried about the technical details of my experiments. I lost track of the bigger picture and forgot why I had become interested in studying the brain in the first place. Writing the blog helped me rediscover my passion for the subject. I would say that I’m a thinker first and then a writer. I still want to understand how the brain works, and I have ideas that can be tested. In fact, I might be going back into the lab to collaborate on some experiments related to a book I’m researching.

Which post has been the most important to your career?
When I started my blog around eight years ago, I’d write short posts every day, and then once a year I would spend a month or two researching and writing a much longer blog post. In my first year of blogging, I wrote a long article called The Discovery of the Neuron, about Camillo Golgi and Santiago Ramón y Cajal and how, throughout the 19th century, there was controversy about the fine structure of the nervous system. This debate went on for around fifty years until Cajal, and various others, finally established that the neuron is the functional and structural unit that makes up the nervous system. I really enjoyed writing that article. Around that time, [former Scientific American blogger] Bora Zivkovic came up with the idea of publishing a book of the 50 best science blog posts of the year, which eventually became an annual collection known as The Open Laboratory. I submitted the post and it was selected for publication in the first of these books in 2006. Some of my favourite writers also had blog posts in the book, and I was very pleased to have my post published alongside theirs. That was a big deal for me, and although I didn’t realise it at the time, it probably made me subconsciously think that I could actually be a professional science writer.
Would you say that rigorous science blogs like yours are getting more influential and being considered part of the peer-review process by scientific publishing groups?
That’s not something I really think about very much. I certainly don’t think blogs can replace the peer-review process, but they can make very important contributions to the discussion. This is what some people are calling post-publication peer review. A perfect example is what happened in 2010 with a Science paper published by NASA researchers, who claimed to have isolated a bacterium that substitutes arsenic for phosphorus to synthesise its DNA. Right after the NASA press release, people immediately started discussing it on their blogs and on Twitter, saying that it looked suspicious. The microbiologist Rosie Redfield wrote a post on her blog RRResearch a week after the paper was published, saying that it didn’t present any convincing evidence that arsenic had been incorporated into DNA; she wasn’t convinced by the findings. A year and a half later, two back-to-back papers were published in Science (one from Redfield herself) refuting the claims of the original article. It can take a year or more for scientific research to be published in the traditional way, but the Internet allows this kind of thing to happen far more quickly.

Which peer-reviewed journal would you say is your favourite?
I wouldn’t say I have a favourite journal, but I’ve always thought Cell is probably the best there is, because the papers are so detailed. Going back to news stories versus features, a Science or Nature paper only skims the surface of the subject, while a Cell paper gives you the whole story, including all the background. They describe every single experiment step by step, giving you the whole picture, whereas a Nature or Science paper doesn’t do that.

Is there a good match between the posts you think are interesting and the ones that become popular?
There’s no way to predict what’s going to be popular. I can spend days or weeks researching a long article and it won’t even get noticed, or I could find a cool picture and spend a couple of minutes putting it on the blog, and it’ll get tens of thousands of hits. My most popular posts have been about very morbid subjects, like trepanation, or unusual penetrating brain injuries. People seem to have a real fascination with these gruesome things. Of course, I want all my posts to be popular, but they’re not. It’s almost impossible to predict which ones will be. The Internet is a very funny place.

dan__dan, Jignesh Tailor
Neurophilosophy focuses on research in the field of neuroscience
Do you think blogs will eventually replace traditional print journalism?
No, I don’t think blogs will ever completely replace traditional media. I think we’re in a very uncertain transition period, and the two have been merging for a long time. People will keep on finding new ways of combining different media to produce good science content.

Dr Sergio Lainez Vicente is a research associate at the Department of Pharmacology
References:
Genomes from Beyond the Grave - http://www.nature.com/news/2011/111025/full/478444a.html Making Sense of the Senses - http://www.nature.com/scitable/topicpage/gpcr-14047471 Bubbles of Trouble - http://www.sciencemag.org/content/289/5487/2114.abstract Finding a Cure for Black Bone Disease - http://www.akusociety.org/ Tapping into New Water Sources - http://www.pub.gov.sg/
Regulars
No Monkey Business - http://www.tn.gov/tsla/exhibits/scopes/index.htm
Counting Out Loud - http://www.sf.airnet.ne.jp/ts/language/number.htm
Designer Babies - www.hfea.gov.uk
The Women Behind the Science - http://www.athenaswan.org.uk/
T4 phage - http://www.lukejerram.com/glass/
Not Just Neuroscience - http://www.theguardian.com/science/neurophilosophy
Weird and Wonderful
A selection of the wackiest research in the world of science

Help, I need a cockroach!
Remote-controlled cockroaches
might sound like a cartoon super-villain plan, but far-fetched as it seems, the technology is real. Developed by a team at North Carolina State University, it uses the motion-sensing Microsoft Kinect system originally developed for the Xbox. So what’s the purpose behind these cyborg cockroaches? Well, other than trying to freak out your grandmother or make the first invertebrate chorus line, the researchers hope to use them to map unknown environments where GPS technology cannot be used, such as collapsed buildings. Cockroaches are ideal for this because their movement is essentially random. The wired-up cockroaches (known as biobots) would be let loose at the scene and allowed to wander. Radio signals would be sent to the researchers every time two biobots got close to each other. By commanding them to find and follow walls, go back to random movement, and repeat, an algorithm can be used to translate the biobot data into a rough map of the environment. Chemical and radiation sensors could also be attached to warn of possible hazards. Who knows? Biobots could soon become the next emergency service. ns
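As a toy sketch of the mapping idea (this is not the North Carolina State algorithm; the grid, wall layout and step count are invented for illustration): a random walker on a grid marks every cell it manages to reach, and the marked cells gradually outline the free space between walls.

```python
import random

def explore(walls, width, height, steps=5000, seed=1):
    """Let one simulated biobot wander at random, recording every
    reachable cell; the visited set is a rough map of the free space."""
    rng = random.Random(seed)
    x, y = 0, 0
    visited = {(x, y)}
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in walls:
            x, y = nx, ny
            visited.add((x, y))
    return visited

# A 5x5 'collapsed building' with an internal wall that has a gap at one end
walls = {(2, y) for y in range(4)}
free = explore(walls, width=5, height=5)
print(len(free))  # number of cells mapped so far
```

The real system adds the encounter signals between pairs of biobots and the wall-following phase, which is what turns raw wanderings into a usable map.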
The Mpemba effect
Hot water freezes faster than cold water. Although this phenomenon has been observed for centuries, researchers have finally explained why. The so-called ‘Mpemba effect’ was first described by Aristotle in the 4th century BC, and later by Francis Bacon and René Descartes. It is named after the Tanzanian student Erasto Mpemba who, in 1969, published the observation that ice cream mix freezes faster when it is warm. Why this phenomenon occurs has long been a conundrum, but recent work from Xi Zhang’s team at Nanyang Technological University in Singapore suggests the secret lies in water’s molecular interactions. Each water molecule is formed from two atoms of hydrogen and one of oxygen, united via strong covalent bonds. Water molecules interact with each other via weak attractive forces called hydrogen bonds. Zhang’s idea is that by pulling water molecules close together, hydrogen bonds bring repulsive forces between molecules into play. This stretches the covalent bonds between atoms and makes them store energy. As water warms up, the hydrogen bonds loosen, allowing the covalent bonds to shrink and release the stored energy – a process equivalent to cooling. When warm water is exposed to freezing temperatures, this extra cooling mechanism acts in addition to the conventional one, causing a faster rate of freezing. odg
Left wag, right wag
A new study from two Italian universities indicates dogs can tell the difference between left and right. It has long been known that the direction of a tail wag can provide important information, with a rightward wag saying “come and play” and a leftward wag warning “I want to bite you”. However, whether or not such gestures are used for communication between our canine friends was not shown until last November. Scientists measured the heart rates of 43 pooches in response to video clips of wagging dogs, and found that heart rates were significantly higher for leftward wags. This indicates an ability to detect aggression from tail wag direction alone. The implications of this research go beyond making it that little bit more embarrassing to be someone who confuses left and right, as it provides supporting evidence for the fundamental asymmetry of canine brains. The left side, which instructs muscles on the right, is thought to be more associated with ‘approach’ behaviours, while the right controls the left side of the body and is linked to ‘withdrawal’. The extension of this theory to behaviours triggered by social cues, such as exhibiting stress reactions when watching a nasty left-wagger, suggests that the need for communication played an important role in the evolution of brain lateralisation. es
Illustrations by www.alexhahnillustrator.com
Write for BlueSci

Feature articles for the magazine can be on any scientific topic and should be aimed at a wide audience. The deadline for the next issue is 31st January 2014. Email complete articles or ideas to submissions@bluesci.co.uk

We need writers of news, feature articles and reviews for our website. For more information, visit www.bluesci.org
For their generous contributions, BlueSci would like to thank: Cambridge Physics Department, Cambridge School of the Biological Sciences, Jesus College, Queens’ College and CSaP. If your institution would like to support BlueSci, please contact enquiries@bluesci.co.uk