The Cambridge University science magazine from
www.bluesci.co.uk
FOCUS Intelligence
Sex . Zoonotic Diseases . Ageing Alan Turing . The Royal Society . David Clary
ISSN 1748-6920
Cambridge University science magazine
Lent 2012 Issue 23
Contents
Features
6   The Need for Sex
    Anna Wilson looks at the evolutionary benefits of sexual reproduction
8   Neglecting Vets
    Peter Moore explores the importance of Veterinary Medicine for mankind
10  The Eccentric Engineer
    Sarah Amis looks into the life of one of the world's most innovative, yet troubled, inventors
12  Warning: Contains Peanuts
    Mrinalini Dey investigates our attempts to alleviate the anxiety of allergy sufferers
14  Aspects of Ageing
    Andrew Szopa-Comley explores possible explanations for why humans age
16  FOCUS: Intelligence
    BlueSci looks at the science of human intelligence: how do we test it, what controls it, and how do we even define it?

Regulars
3   On the Cover
4   News
5   Reviews
22  Perspective
    Beth Venus discusses the future of manned space missions
24  Science and Policy
    Tom Bishop discusses carbon dioxide capture as one solution to climate change
26  History
    Nicola Stead takes a look back at the origins of the Royal Society and its founding members
28  Behind the Science
    Jordan Ramsey explores the persecuted genius of computing pioneer Alan Turing
30  Arts and Science
    Matthew Dunstan investigates the role of science fiction in shaping science fact
31  A Day in the Life
    Ian Le Guillou interviews David Clary about the role of science in foreign policy decision-making
32  Weird and Wonderful

About Us...
BlueSci was established in 2004 to provide a student forum for science communication. As the longest-running science magazine in Cambridge, BlueSci publishes the best science writing from across the University each term. We combine high quality writing with stunning images to provide fascinating yet accessible science to everyone. But BlueSci does not stop there. At www.bluesci.co.uk, we have extra articles, regular news stories, podcasts and science films to inform and entertain between print issues. Produced entirely by students of the University, the diversity of expertise and talent combines to produce a unique science experience.
Committee
President: Tim Middleton (president@bluesci.co.uk)
Managing Editor: Tom Bishop (managing-editor@bluesci.co.uk)
Secretary: Jessica Robinson (enquiries@bluesci.co.uk)
Treasurer: Wendy Mak (membership@bluesci.co.uk)
Film Editor: Sita Dinanauth (film@bluesci.co.uk)
Radio: Anand Jagatia (radio@bluesci.co.uk)
Webmaster: Joshua Keeler (webmaster@bluesci.co.uk)
Advertising Manager: Richard Thomson (advertising@bluesci.co.uk)
Events & Publicity Officer: Helen Gaffney (submissions@bluesci.co.uk)
News Editor: Louisa Lyon (news@bluesci.co.uk)
Web Editor: Jonathan Lawson (web-editor@bluesci.co.uk)
Issue 23: Lent 2012
Editor: Felicity Davies
Managing Editor: Tom Bishop
Business Manager: Michael Derringer
Second Editors: Aaron Barker, Wing Ying Chow, Matthew Dunstan, Helen Gaffney, Ian Le Guillou, Leila Haghighat, Joanna-Marie Howes, Anand Jagatia, Sarah Jurmeister, Haydn King, Nicola Love, Tim Middleton, Vicki Moignard, Alexey Morgunov, Lindsey Nield, Laura Pearce, Amelia Penny, Ted Pynegar, Jordan Ramsey, Jessica Robinson, Sandra Schieder, Liz Ing-Simmons, Richard Thompson, Divya Venkatesh, Beth Venus
Sub-Editors: Helen Gaffney, Leila Haghighat, Nicola Love, Tim Middleton, Jordan Ramsey
News Editor: Louisa Lyon
News Team: Javier Azpiroz-Korban, Stephanie Boardman, Ian Le Guillou
Reviews: Matthew Dunstan, Leila Haghighat, Vicki Moignard
Focus Team: Helen Gaffney, Liz Ing-Simmons, Ted Pynegar, Jessica Robinson
Weird and Wonderful: Mariana Fonseca, Nicola Love, Jordan Ramsey
Pictures Team: Leila Haghighat, Nicola Love, Jordan Ramsey, Mrinal Singh
Production Team: Ian Le Guillou, Leila Haghighat, Nicola Love, Tim Middleton, Jordan Ramsey, Mrinal Singh
Illustrators: Alex Hahn, Dominic McKenzie, Katherine Wakely-Mulroney
Cover Image: Gengshi Chen
ISSN 1748-6920
Varsity Publications Ltd
Old Examination Hall, Free School Lane
Cambridge, CB2 3RF
Tel: 01223 337575
www.varsity.co.uk
business@varsity.co.uk

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License (unless marked by a ©, in which case the copyright remains with the original rights holder). To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/ or send a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041, USA.
Editorial
The Age of Access

Science has come a long way since C.P. Snow gave his 1959 Rede Lecture, "The Two Cultures". In this lecture Snow suggested that the divide between the arts and the sciences could only hinder progress, and that we needed real communication between them to develop as a civilised society. Much has changed since then. We are now provided with a plethora of scientific information on a daily basis: from the traditionally arts-based mainstream newspapers to the Internet, where a detailed answer to almost any question is just a mouse click away.

But is this something to be celebrated? The media's 'misrepresentation of science' is often a source of complaint. The melodrama surrounding the building of the Large Hadron Collider, and the suggestion that the colliding of particles would result in a black hole, was regarded by many scientists as irresponsible reporting. But in her talk for BlueSci last term, Vivienne Parry suggested that we cannot blame the media for all exaggerations: journalists have to find an exciting angle from which to report a story, or the public won't be interested in reading their newspaper. And yet, the problem is not with the intention to communicate; it's what is communicated. Perhaps the solution is a higher level of scientific awareness among the public, allowing more people to distinguish between scientific news and scientific hype. Brian Cox's new book, The Quantum Universe, has been written on the premise that everyone can understand quantum mechanics: a bold claim. An improvement in the public understanding of science can only force the accuracy of science reporting to improve, which in turn will further our collective knowledge. If nothing else, we can hope that it inspires a new generation of scientists to enter laboratories.

To this end, BlueSci has its own communication projects. This diverse issue covers everything from sex, and the reasons for its evolution, to the life of Turing in the centenary year of his birth. Our Focus looks at intelligence: what it is, whether it is hereditary and, most perplexing of all, why us? Meanwhile, BlueSci film and our very own radio show on Cam FM 97.2 are continuing this term. Further information about these and other BlueSci activities is available on our website: http://www.bluesci.org/. As ever, if you find yourself wishing that you'd been involved in the creation of this magazine, please get in touch about working on the next issue. We'd love to hear from you.
Felicity Davies Issue 23 Editor
Creating Crystals
Lindsey Nield looks into the story behind this issue's cover image
How do you grow perfect crystals the size of a single human hair? In the case of proteins, the answer is often by painstaking trial and error. Proteins can form crystals just like other molecules when placed in the appropriate conditions. But proteins are fickle: each individual protein requires a unique and unpredictable set of conditions for successful growth. Many systems are set up simultaneously to explore the necessary conditions, varying parameters such as temperature, pH and the concentrations of protein and precipitant.

The method most commonly used for protein crystallisation is hanging drop vapour diffusion. This was the method used by Gengshi Chen, a third-year Biochemistry student at Selwyn College, to grow the crystals that appear in the droplet shown in this issue's cover image. In this technique a droplet containing purified protein, buffer and precipitant is placed on a glass slide. This is then hung over a reservoir of buffer and precipitant at higher concentrations to create a closed system. In order to attain equilibrium, water vapour slowly leaves the droplet and transfers to the reservoir. This causes the precipitant concentration in the droplet to increase to a level where crystallisation can occur.

But why do we need protein crystals? The aim of the crystallisation process is to grow a well-ordered crystal that is pure enough and large enough to be used for X-ray diffraction experiments. The resulting diffraction pattern can then be analysed to discern the structure of the protein. The star-like aggregations in the cover image were observed approximately two days after initiating crystallisation. The final crystals, measuring just 50 to 100 micrometres across, were grown in five days. These were then mounted on nylon wire loops and stored in liquid nitrogen for transport to the European Synchrotron Radiation Facility (ESRF) in Grenoble for X-ray diffraction experiments.

Another of Gengshi Chen's crystals of the MHC protein (Image: Jamie Gundry)

Images of the protein crystals were also taken using polarised light microscopy as part of the Amgen Scholars Program at the Karolinska Institutet in Stockholm. This technique exploits a property of crystals known as birefringence. In birefringent materials, light travels at two different speeds depending on the direction in which it is travelling, and therefore splits into two separate beams when it enters the material. In polarised light microscopy, light passes through two polarising lenses oriented at right angles with the sample in between. If the sample is birefringent, it will rotate some of the light, allowing it to pass through the second polariser. The intensity of the transmitted light depends on the orientation of the crystal, and this provides the necessary contrast to pick out the protein crystals in the droplet.

The particular protein under investigation in this case was a Class I major histocompatibility complex (MHC). Proteins of this class are found on virtually all cell types in the body and perform a vital role in the immune system by presenting short fragments of proteins from within the cell to white blood cells outside. This allows the immune system to identify abnormal body cells, such as those infected with viruses or those that have turned malignant, and destroy them. Complexes formed by the MHC protein and foreign protein fragments with a more stable structure can survive longer on the cell surface, initiating a stronger immune response. By growing crystals such as those in the cover image and analysing the structure of the protein, the relationship between the stability of these proteins and their molecular structure can be investigated. In the future, this could potentially lead to the design of tumour vaccines capable of inducing an immune response against cancer cells.

Lindsey Nield is a PhD student in the Department of Physics
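A note for the curious: the contrast in polarised light microscopy follows a standard optics relation (this equation is textbook background, not something given in the article). For a birefringent crystal of thickness d and birefringence Δn, with its optic axis at angle θ to the first polariser and illuminated at wavelength λ, the intensity transmitted through crossed polarisers is

\[
I \;=\; I_0 \,\sin^2(2\theta)\,\sin^2\!\left(\frac{\pi\,\Delta n\,d}{\lambda}\right),
\]

where I_0 is the intensity reaching the crystal. The sin²(2θ) factor is why brightness depends on the crystal's orientation: a crystal at 45 degrees to the polarisers appears brightest, while one aligned with either polariser stays dark, and it is this contrast that picks the protein crystals out of the droplet.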
News
Check out www.bluesci.co.uk or @BlueSci on Twitter for regular science news and updates
Counterfeiters beware!

Researchers at the Chinese Academy of Sciences in Changchun have developed new nanocrystals that promise to make life difficult for prospective counterfeiters. These fluorescing nanocrystals are suitable for use in anti-counterfeiting ink because they appear as different colours under different types of light. They display both upconversion, whereby two low-energy photons are absorbed and a single photon is emitted at a higher frequency, and downconversion, whereby a high-energy photon is absorbed and emitted as two lower-energy photons. Under infrared light, the nanocrystals undergo upconversion, causing them to appear green. However, when ultraviolet light is used, the crystals undergo downconversion and appear blue. This dual emission makes 'nanocrystal ink' more difficult to replicate than anti-counterfeiting inks currently in use, which have only a single colour. During synthesis, the nanoparticles were bound to an organic acid to make them soluble in organic solvents so that they could be stamped onto paper. The group's next challenge is to improve the efficiency of their nanocrystals by increasing the number of molecules active in the photon conversion. This would enhance their ability to compete with traditional organic dyes. DOI: 10.1039/c1nr10752f sb

…and this little piggy corrected mutations

Scientists are one step closer to being able to correct mistakes in our DNA, thanks to a newly developed method for rewriting mutations in stem cells. In a recent Nature publication, researchers used piggyBac, a 'jumping gene' that can move along DNA, to insert a corrected gene with remarkable precision. Kosuke Yusa and a team at the University of Cambridge focused on using piggyBac to treat a genetic liver disease caused by deficiency in the protein alpha 1-antitrypsin. This deficiency is due to a genetic mutation, found in one in 2,000 Northern Europeans, which causes the protein to accumulate in the liver, scarring it. Yusa's team converted patient skin cell samples into stem cells and corrected the mutation using piggyBac. The stem cells were then converted into liver cells with a genetic code that could produce the correct form of the protein. Unfortunately, producing stem cells in this fashion introduces other mutations into the DNA. Minimising the risk of new mutations is therefore necessary before the technique can be used to treat patients. Currently, the only cure for alpha 1-antitrypsin deficiency is a liver transplant. With long transplant waiting lists, the use of piggyBac brings new hope to thousands of sufferers. DOI: 10.1073/pnas.1018740108 ilg
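Both conversion processes are simple energy bookkeeping, since a photon's energy is E = hc/λ. As a rough illustration (the wavelengths here are invented for the example, not values reported for these nanocrystals), two infrared photons combined in upconversion can at most produce one photon of half the wavelength:

\[
2\times\frac{hc}{980\ \text{nm}} \;=\; \frac{hc}{490\ \text{nm}},
\]

so a pair of 980-nanometre photons could yield, at best, one at 490 nanometres; in practice some energy is lost non-radiatively, shifting the emission to slightly longer wavelengths. Downconversion runs the same ledger in reverse: one ultraviolet photon is split into two photons whose energies sum to that of the original.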
Three new planets and a mystery object

An international research team led by Alex Wolszczan—the first person ever to detect planets outside our solar system—has recently discovered three new planets. According to the researchers from Penn State University, each of these planets orbits its own giant star. Intriguingly, however, the stars at the centre of these miniature solar systems are dying. By following these three stars, the researchers hope to discover more about the ways in which solar systems evolve as their stars die. One of the stars is accompanied by a gas giant not too different from Jupiter and an additional mystery object. The research team has speculated about the identity of the object, with suggestions ranging from a small planet to one 80 times the size of Jupiter. The researchers will continue to watch the object carefully over the coming years in the hope of discovering its identity. Ultimately, these new discoveries should pave the way for a better understanding of the events that occur as a star dies. They also offer clues as to the fate that awaits our own solar system, albeit in some five billion years' time. www.sciencedaily.com/releases/2011/10/111027132502 jak
Reviews

The God Species - Mark Lynas
Fourth Estate, 2011 £14.99

Celebrating our species' technological achievements, in his latest book Mark Lynas takes a fresh look at the state of our planet and sets out a global plan of action to save the Earth from environmental disaster. Lynas's The God Species introduces the concept of 'Planetary Boundaries', which outline 'safety zones' in which we can live without irreparably damaging the planet. These nine boundaries all relate to the global systems most at risk from our activities. Some, like climate change, have already been crossed, whilst others, such as ocean acidification, will also be passed before long. While the idea of the 'Planetary Boundaries' is itself new, it is in Lynas's strategy for keeping within them that The God Species shines. Sensing that the green ideology of cutting consumption is faltering, his vision imposes no limits on human activity but instead actually encourages growth within the limits imposed by the boundaries. While his strategies often promote the controversial, such as nuclear power and genetic modification, his supporting arguments are meticulously reinforced with peer-reviewed studies and real-world examples. With a refreshingly optimistic outlook, The God Species is an impassioned plea to use the knowledge and technology that 3.7 billion years of evolution have equipped us with to build a better and greener future. VM
The Origins of AIDS - Jacques Pepin
CUP, 2011 £17.99

Thirty years have now passed since the first reported cases of AIDS in 1981, but in Jacques Pepin's The Origins of AIDS, the events of 1981 are seen as the culmination of a much older history. Using epidemiological data, Pepin carefully pieces together the most plausible explanation for how HIV first entered, and then spread within, the human population. While debunking myths surrounding the cross-species transmission of AIDS, Pepin provides definitive evidence for his own explanation. After hunters in the Belgian Congo became infected with a primate form of HIV in 1921, the mass health campaigns of European colonists spread the virus through dirty needles used for intravenously injecting antibiotics. Pepin then charts the leap of HIV from the Congo to Haiti and its subsequent penetration into the US through sex tourism and the blood trade. The chronological order of Pepin's book, beginning decades before the 1980s epidemic, and its ability to contextualise the biology of AIDS within a historical and sociological framework, make it a crucial read for understanding how a series of unlikely circumstances could kill 29 million people within just 30 years. LH
Powering The Future - Robert B. Laughlin
Basic Books, 2011 £17.99

In most publications concerning the combined issues of climate change and energy, it can be hard to find solid ground upon which to survey the future ahead. In Powering the Future, a new book by Nobel Prize-winning physicist Robert B. Laughlin, this task is made simple by the clearest of scientific and economic arguments. Very early in the book, Laughlin skilfully separates the problems of climate change and energy resources. By doing so, he avoids the emotional baggage that the former often entails to give a detailed exploration of where our energy will come from in 200 years. His bottom line is that no matter what our personal feelings might be towards nuclear energy or green technology, the majority of us will choose the cheapest option. He proceeds to examine the options, ranging from the possible industrialisation of desert areas for solar energy to a future that may even see energy production located in the deepest reaches of the sea. The great strength of Laughlin's book is that it eschews emotionalism for cold reason; an approach that is immensely useful for those who not only want to understand the options for energy production but who also wish to calmly discuss the future of a post-carbon world. MD
The Need for Sex
Anna Wilson looks at the evolutionary benefits of sexual reproduction

Greenfly employ a rather bizarre reproduction strategy (Image: James Laing)

For greenfly, sex is an exceedingly rare event.
When adults first emerge in the spring, the entire population is composed of females. Soon, without any fertilisation, an embryo develops inside each of these females. Then a granddaughter grows inside the embryonic daughter. Throughout the summer, greenfly reproduce in this Russian doll-like manner, with each generation surviving for only about a month. The coming of autumn signals the end of the breeding season: for the first and only time, males are produced alongside females. These offspring then mate, forming exclusively female eggs, which lie dormant for the winter, ready to give rise to the mothers of the following spring.

Greenfly employ a rather bizarre strategy, but most animals reproduce sexually. In purely probabilistic terms, however, we would not expect this to be so. In Ridley's hypothetical scenario, four individuals are trapped in a cave. Two are sexual males; one is a normal female; and the last is a mutant female who can reproduce asexually, called a parthenogen. Given that humans usually reproduce monogamously, one of the males fails to find a mate and, consequently, his genes fail to perpetuate. In the first round of reproduction, each female produces two offspring: two female parthenogens for the asexual female, and a sexual boy and girl for the sexual female. Each female of this second generation then reproduces in turn, producing two offspring in the same pattern as before. Once the founding generation dies, the cave is populated by six parthenogens, accompanied by two sexual females and two sexual males. In other words, the proportion of parthenogens has risen from twenty-five per cent to sixty per cent in only three generations.

The rapid rise of parthenogens in Ridley's cave illustrates how asexual reproduction is far more efficient and reliable than sexual reproduction; moreover, asexual reproduction guarantees the delivery of all of an organism's genes into the next generation. Why, then, has sex, in many cases, proved to be evolutionarily advantageous? The existence of parasites provides the most compelling hypothesis. Ever since life bloomed, organisms have struggled against pathogens. Viruses, such as measles, invade cells, capture their machinery, replicate, and either kill the cell directly or target it for destruction by the host's immune system. Infection is energetically expensive and may result in the organism's death or sterility.

A computer model constructed by evolutionary biologist Bill Hamilton illustrates the advantages of sex. As in Ridley's cave, asexual 'species' held the upper hand, that is, until parasites were introduced. In the presence of parasites, sex won the game outright. Under pathogenic pressure, the simulated sexual 'species' evolved new ways to shut out and annihilate the parasites far more readily than the asexual organisms. Pathogens were no longer able to sweep from parents to offspring, to siblings and to cousins, without modification. This resistance of sexual species to pathogen colonisation is the most likely driving force for their otherwise surprising evolution.

Both in reality and in Hamilton's simulation, the success of sex depends on genetic mixing. When sex cells are produced, genes hop between the organism's own chromosomes, and fertilisation produces offspring that are syntheses of both parents' genetic material. This random trading of DNA clothes each individual's cells in a unique combination of proteins, which act as cellular locks. Like assigning different codes to your house alarm, computer password and credit card PIN, such a system minimises the risk that an intruder will wipe out all of your possessions in one fell swoop.

For genes housed in large species, sex is a crucial security procedure; but many small species do well without it. Unlike us, truly asexual species, such as the bdelloid rotifers (microscopic invertebrates), reproduce rapidly and often endure torturous phases in their lifecycle to escape extinction. When an organism copies its DNA to pass on to its offspring, mistakes are inevitable. Reproducing rapidly means that these mutations accumulate relatively quickly, so the locks are changed frequently enough to keep the parasites out. Additionally, in periods of environmental stress, bdelloid rotifers dry up and float on the wind, before rehydrating in a new location with more favourable conditions. The most convincing explanation for such behaviour is that, like sex, this inhospitable ritual evolved as a mechanism to escape disease.

Although many organisms have adopted sexual reproduction in order to minimise the impact of parasites, not all species have distinct sexes. Bacteria can exchange genetic material sexually, via a mechanism termed conjugation: DNA is rapidly transferred through a narrow pipe which forms between the two interacting cells. Although one bacterium seems to prepare itself for receiving, and the other adapts for giving, all cells can perform both roles, so no sexes as such exist.

By contrast, all organisms which reproduce via the fusion of gametes seem to require distinct sexes. The reason for this is probably the result of a lasting prehistoric interaction. Millions of years ago, bacteria entered the cells of larger organisms and forged a symbiotic relationship. The new, intracellular bacteria evolved into organelles—mitochondria and chloroplasts—providing their hosts with energy and benefitting, in return, from nutrients and a stable environment. Organelles, like individual cells, contain their own DNA. If an organelle detects the presence of other organelles which are genetically different from itself, a conflict may ensue—much as your body attacks a transplanted organ. This can have fatal consequences for the host cell. To circumvent this problem, males contribute the small, mobile, organelle-free gametes, whilst females are the source of large, relatively static sex cells, which supply organelles to the embryo. In this manner, the male sacrifices his organelles' dynasty in order to better ensure the perpetuation of his own nuclear genes into future generations.

Fertilisation of slime mould occurs within a complex hierarchy (Image: Dennis Barthel)

Distinct sex cells also exist in the plant kingdom; however, individuals tend to be hermaphrodites, containing both male and female parts. The immobility of plants, and their dependency on vectors to transport pollen, makes finding mates a risky business. Being able to mate with any member of the species is therefore a significant advantage. Similarly, for the slime mould, having a large pool of potential mates seems to be important. In this species no fewer than thirteen sexes exist, and fertilisation occurs within a complex hierarchy which enables individuals to mate with twelve thirteenths of the population. Most animals, on the other hand, are divided into two discrete sexes. Animals benefit from much greater mobility than either plants or slime moulds, making finding a partner much less of an issue. Furthermore, the system of reproduction in slime mould is so complex that it is prone to errors, which result in the death of offspring.

So it seems that the greenfly have it right. The summer period of rapid asexual reproduction enables an efficient and reliable proliferation of genes: one founding mother can produce thousands of descendants in a single summer. Meanwhile, the sexual phase of reproduction prior to the winter latency reshuffles the season's genes, kicking out the majority of the pathogens that have evolved over the summer. In this way, mass reproduction is achieved without compromising the breeding success of the following year. Although such a strategy is extremely effective for greenfly, with their rapid reproductive cycle, larger organisms have longer reproductive intervals. In the face of rapidly evolving parasites, these slow-maturing organisms require a diversity boost to keep disease at bay. This is the need for sex.

Anna Wilson is a 2nd year undergraduate in the Department of Veterinary Medicine
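Hamilton's result is straightforward to reproduce in miniature. The sketch below is not Hamilton's actual model but a toy 'Red Queen' simulation, written in Python, under loudly simplified assumptions: hosts carry a four-locus resistance genotype, parasites always adapt to whichever host genotype is currently most common, and sex comes free of the cost of producing males. All parameter values are invented for illustration.

import random

LOCI = 4    # resistance loci per host genotype
POP = 200   # constant population size
GENS = 100  # generations to simulate

def random_genotype():
    return tuple(random.randint(0, 1) for _ in range(LOCI))

# start half sexual, half asexual, with random genotypes
hosts = [{"sexual": i < POP // 2, "geno": random_genotype()} for i in range(POP)]
parasite = random_genotype()  # genotype the parasites are currently adapted to

for gen in range(GENS):
    # hosts whose genotype the parasite matches pay a heavy fitness cost
    weights = [0.2 if h["geno"] == parasite else 1.0 for h in hosts]
    sexual_pool = [h for h in hosts if h["sexual"]]
    sexual_weights = [0.2 if h["geno"] == parasite else 1.0 for h in sexual_pool]

    new_hosts = []
    for _ in range(POP):
        parent = random.choices(hosts, weights)[0]
        if parent["sexual"]:
            # sexual offspring draw each locus from one of two parents
            mate = random.choices(sexual_pool, sexual_weights)[0]
            geno = tuple(random.choice(pair) for pair in zip(parent["geno"], mate["geno"]))
        else:
            geno = parent["geno"]  # asexual offspring are exact clones
        new_hosts.append({"sexual": parent["sexual"], "geno": geno})
    hosts = new_hosts

    # parasites now retarget the most common host genotype
    counts = {}
    for h in hosts:
        counts[h["geno"]] = counts.get(h["geno"], 0) + 1
    parasite = max(counts, key=counts.get)

    if gen % 10 == 0:
        print(gen, sum(h["sexual"] for h in hosts) / POP)

In most runs the sexual fraction drifts upwards: clonal lineages are punished as soon as they become common, while recombination keeps regenerating rare genotypes. This is the 'changing of the locks' described above.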
Neglecting Vets
Peter Moore explores the importance of Veterinary Medicine for mankind

Stepping into the yurt, the odour of mutton
flesh and skin is overwhelming, and the smoky fire does little to dull your senses. In the corner lies Ganzorig, his wife Goyo tending to his undulating fever while their children sit nearby, watching their father writhe in pain. The fever is not fatal, but it can last for many months and is exacerbated by incessant arthritic and muscular pain. Ganzorig is suffering from brucellosis, a bacterial disease that has become a growing problem in Mongolia, but one which the world seems unmotivated to eradicate or even control.

A vet from the World Organisation for Animal Health inspects livestock in Mali (Image: N. Denormandie/OIE)

Brucellosis is a zoonotic disease—one that can be transmitted from animals to humans (and occasionally vice versa). Over 60 per cent of the pathogens that are infective to humans are zoonotic. The World Health Organisation (WHO) has classified brucellosis as a 'Neglected Zoonotic Disease' (NZD). What makes a disease 'neglected'? There are many factors, but a general rule of thumb is that their impact appears, to most, to be insignificant. But with 40 per cent of the world's poor dependent upon making a living from agriculture and at risk from zoonotic infections, can we afford to continue
the neglect and hope that these diseases will eventually disappear?

A 2006 WHO report suggests that because NZDs cross the boundary of human and animal medicine, we struggle to compartmentalise them, with neither side accepting responsibility. As a result, countries which already have poor medical infrastructure further marginalise the disease burden. Poor communication between the veterinary and medical communities means there is no real appreciation of the impact, and consequently there are no programmes for control. The result is a neglected disease that continues to spiral out of control.

Jim Scudamore, of the Integrated Control of Neglected Zoonoses (ICONZ) project, suggests that as well as having no real knowledge of disease incidence (its occurrence over time), we have no information on the costs to animal production or human health, nor any indication of the most successful methods of control. Jim is a veterinary surgeon specialising in Veterinary Public Health—the application of veterinary science to ensure the health and wellbeing of humans—and believes that vets have a broad range of expertise to deal with the animal side of the NZD equation, but cannot solve it on their own. In the ICONZ project he measures the incidence of disease before an intervention, carries out the intervention to control it, and then measures the outcome. From the results, ICONZ will prepare policy papers on costs and benefits and attempt to persuade policy makers to take action.

So are we missing a key member of the team? Is there an unseen player that could make a real and lasting difference? Perhaps veterinary surgeons are the missing piece in international development. By treating animals, can we truly treat people?

A shepherdess tends her flock near Taipusi in Inner Mongolia, where brucellosis is becoming a growing problem (Image: Stevie Mann)

Ganzorig is likely to have been infected from one of the small flock of sheep that he farms to supply his family with meat and wool, and which in hard times can be sold to keep the family above the poverty line. Brucellosis was almost eradicated from Mongolian animals in the 1980s but, following political turmoil with the Soviet Union in the 1990s, has reared its ugly face once more. If the WHO had been able to continue vaccinating animals under Soviet rule, perhaps Mongolian livestock and wildlife would be disease-free. The WHO sees the control of neglected diseases as a real and cost-effective opportunity for alleviating poverty, and includes it in the plan to achieve Millennium Development Goal 6—combating disease.

Why, then, don't aid-donating countries, such as the UK, focus on programmes linked to NZD control? A call to Dr Alex Thiermann from the World Organisation for Animal Health (which goes by its French acronym, OIE) sheds some light. "Many of these diseases have been eradicated in the developed world and there appears little incentive for us to help control them elsewhere, yet they continue to be a major issue. Vets can play a role in communicating the importance of NZDs to the world and also in helping design, implement and assess control programmes."

Now that the veterinary profession is over 250 years old, having begun with the world's first vet school in Lyon, we should look to vets to help build the infrastructure and programmes needed to alleviate rural poverty. The days of James Herriot-style vetting have gone and the world has changed; vets can now play a key role in helping humans as well as animals. When I tell people I am a veterinary student, their reaction is invariably that it must be hard to treat an animal if it cannot tell you where it hurts. This is when the most important principle of veterinary medicine comes into play:
communication. If we can liaise with our clients and get a good history of the events leading up to the illness, we can reach a rough diagnosis. "Vets aren't the pioneers of good communication!" continues Thiermann. "OIE is developing a project with the One Health concept, called 'Vets in Daily Life', to illustrate the multifaceted work of veterinarians."

One Health aims to bring together the disciplines of human and veterinary medicine with environmental science. It appreciates the interface between people and animals (both domestic and wild) and the need to look at all three areas to create one plan that will help both people and animals.

In common with any development package, money is a necessity. Aid agencies need to look afresh at where they target their resources and determine whether NZD control really can provide a cost-effective and lasting means of alleviating poverty. Initially a legal and political framework needs to be implemented, followed by infrastructure and, finally, programme delivery. But few, apart from a small section of the WHO and independent NGOs, have set NZDs as a priority. These diseases are not 'sexy'; their epidemiology is complex, involving people, wildlife and domestic animals all at one interface. Their devastating consequences are not easy to solve, and they are killers of animals, people and communities. We must first identify the communities suffering from these hidden diseases and provide effective treatment. Motivation for action must come from inside the medical and veterinary world. Only by working together in a consistent manner will we ensure that Ganzorig's children do not face the same fate as their father.

Peter Moore is a 5th year student in the Department of Veterinary Medicine
The Eccentric Engineer
Sarah Amis looks into the life of one of the world's most innovative, yet troubled, inventors

Nikola Tesla, born in 1856, was one of the most prolific inventors of his age (Image: pulsepowernow.com)

Most of us know the name Thomas Edison, the prolific American inventor who produced the first practical electric light bulb. Why, then, do relatively few know of Nikola Tesla, Edison's Serbian-American arch-nemesis, and the years-long, bitter rivalry between the two—one which may have ultimately cost them both the Nobel Prize?

Tesla was born in 1856, an ethnic Serb in what is now part of Croatia. While his mother invented gadgets for use around the house, including a mechanical eggbeater, his father was an Orthodox priest who was adamant his son was to follow in his footsteps and enter the priesthood. The young Tesla was already showing early signs of brilliance, including baffling his teachers by performing integral calculus in his head, and was equally adamant that he
would study engineering at the renowned Austrian Polytechnic School. He would later describe the feeling of being "constantly oppressed" by his father's ambitions. Clearly, someone would have to give in, and that turned out to be the older Tesla: when Nikola contracted cholera at age seventeen and his life hung in the balance, his father promised that if he survived, he would get his wish and attend the Austrian Polytechnic.

Despite this hard-fought struggle to be allowed to attend, Tesla did not receive a degree from the university, leaving in his third year. In 1878, the year he left, he broke off contact with friends and family and moved to Marburg in Slovenia, leaving friends believing he had drowned in the Mur River. At this point he suffered a nervous breakdown, and there would be further indications throughout his life that not all was well within that rather extraordinary mind—he is now believed to have suffered from obsessive-compulsive disorder (OCD).

For the next four years, he worked for various electrical companies in Budapest, Strasbourg and Paris. It was in Budapest, working for the Central Telephone Exchange, that he conceived the idea of the alternating current (AC) induction motor. He would later claim that this invention came to him fully formed as he walked in the park: "In an instant the truth was revealed. I drew with a stick in the sand the diagram shown six years later [when he presented his idea to the American Institute of Electrical Engineers]." However, no one in Europe could be persuaded to invest in such an outlandish idea, given that at the time the world ran on direct current (DC). Tesla decided he had to go to America to work with Thomas Edison, the great electrical engineer who had a near-monopoly on New York City's electrical power. Arriving in the USA with nothing but four cents in his pocket and a letter of
introduction from Edison's business associate Charles Batchelor, he was duly hired. However, Edison, wary of losing his DC-based electrical empire, tasked Tesla only with improving his DC power plant, and would not fund any work into AC. After a few months, Tesla announced his work on the power station was finished, and asked for the $50,000 payment Edison had promised him—only for the American to claim the offer had been a joke. Losing no time in walking out of Edison's company, Tesla dug ditches to pay his way until A.K. Brown, of the Western Union Company, agreed to invest in the AC motor. From his New York laboratory, surrounded by power cables and electric trams running on Edison's DC, Tesla put together the pieces he had seen in that moment of clarity in Budapest. From out of his mind came the first AC motors, now tangible, material and "exactly as I imagined them". This was the starting point for an electrical revolution. What came next would change the world.

In the winter of 1887 Tesla filed seven US patents relating to polyphase AC motors and power transmission. Put together, they were the blueprints for a complete system of generators, transformers, transmission lines, motors and lighting that would prove to be the most valuable US patents since Alexander Graham Bell's telephone. Polyphase power is the system of power transmission now used across the world; AC enables the use of the transformer effect, essential for the high-voltage power needed for economically viable long-distance transmission. The scope of these innovations was not lost on the industrialist George Westinghouse, who saw that Tesla had made the missing link in the challenge of transmitting power over long distances. Westinghouse bought the seven patents that would change the world for $60,000, plus a $2.50 royalty for Tesla per horsepower of electrical capacity the Westinghouse Corporation sold.

The next challenge would be how to sell AC to anyone in the face of Thomas Edison's high-powered propaganda war against this rival technology. The resulting smear campaign was the ugliest facet of the 'War of Currents'. Edison's desire to protect his technology and denounce AC as dangerous drove him to publicly electrocute dogs, horses and an elephant with AC. In addition, he funded the invention of the first electric chair—powered, naturally, with AC.

Power supply is a pragmatic business, however, and the Westinghouse Corporation's AC system was undeniably more efficient and practical than DC. In 1893 Chicago hosted the World's Columbian Exposition, the first ever all-electric fair. Westinghouse won the bid to illuminate it, and AC so outclassed the DC technology that Tesla and Westinghouse's expenses were half those of Edison's failed bid. At the fair, a hundred thousand incandescent lights lit the buildings, including the Great Hall of Electricity, where the polyphase system was displayed to 27 million visitors. From then on, over 80 per cent of the electrical devices ordered in the United States ran on Tesla's system.

Tesla's polyphase alternating current 500-horsepower generator at the 1893 World's Columbian Exposition in Chicago (Image: Manuel Martin)

The 'War of Currents' might have been over, but it would have lasting consequences. Both Edison and Tesla were nominated for the Nobel Prize in Physics, but neither won, and it has been suggested that their great animosity towards each other, and refusal to consider sharing the Prize, led the Nobel Committee to award it to neither of them.

The rest of Tesla's life might not match the high scientific drama of his rivalry with Edison, but he remained an active and productive mind. As he had in Budapest, he would develop his inventions mentally, seeing them fully formed and in perfect detail, then simply recreate them in reality. Some ideas came to fruition; at his death Tesla held hundreds of patents, the last one filed at the age of 72. Some were ahead of their time, and some were simply bizarre. His concept of using radio waves to detect the presence of ships, more than 20 years before the invention of radar, sits alongside his unfeasible claim to have invented a particle beam that could bring down aircraft from 250 miles: prime examples of both great prescience and extreme eccentricity.

When Tesla died in 1943, over 2,000 people attended his state funeral in New York. His impact is possibly best expressed in the words of the then Vice President of the Institute of Electrical Engineers: "Were we to seize and eliminate from our industrial world the result of Mr. Tesla's work, the wheels of industry would cease to turn, our electric cars and trains would stop, our towns would be dark and our mills would be idle and dead. His name marks an epoch in the advance of electrical science."

Sarah Amis is a 2nd year undergraduate studying biological Natural Sciences
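The economics that decided the 'War of Currents' fit into one line of algebra (the figures below are illustrative, not numbers from the Westinghouse system). A line of resistance R carrying power P at voltage V draws current I = P/V, so the power wasted as heat is

\[
P_{\text{loss}} \;=\; I^2 R \;=\; \frac{P^2 R}{V^2}.
\]

Stepping the voltage up tenfold, which an AC transformer does cheaply, therefore cuts losses a hundredfold: sending 1 megawatt down a 10-ohm line at 10 kilovolts wastes 100 kilowatts, a tenth of the power, while the same megawatt at 100 kilovolts wastes just 1 kilowatt. Direct current in Tesla's day had no practical equivalent of the transformer, which is why Edison's stations had to sit close to their customers.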
Warning: Contains Peanuts
Mrinalini Dey investigates our attempts to alleviate the anxiety of allergy sufferers

Over the past two decades, the number of cases of food allergies has risen sharply, with peanut allergy being one of the most common. For sufferers and their families, this can often mean a decreased quality of life, with constant anxiety over food and the threat of anaphylaxis (a severe, acute multi-system reaction). Despite increased awareness of the condition, it remains a significant public health issue, especially in schools, where children may be accidentally exposed to peanuts, present in a variety of foods. Unlike other common food allergies, such as allergy to cow's milk, most cases of peanut allergy persist into adulthood. Due to the severity and increasing prevalence of peanut allergy, there has been a need for a reliable disease-modifying therapy. In March 2011, it was announced that this had been achieved through the world's first peanut desensitisation programme, carried out on a group of children in Cambridge over the last three years.

Shopping for food can be an arduous task for peanut allergy sufferers (Image: Raggedyland)

In the 1960s, the antibody immunoglobulin E (IgE) was discovered—a significant breakthrough in the study of the mechanisms of allergy. Allergic reactions are caused by allergens cross-linking preformed IgE molecules, which are bound to receptors on mast cells. Mast cells line body surfaces and alert the immune system to local infection. They
induce inflammatory reactions through the secretion of chemical mediators and the synthesis of signalling molecules like prostaglandins and leukotrienes. In an allergic response, they provoke unpleasant reactions to mild antigens.

There is usually an immediate and a late-phase response. The initial inflammatory response happens within seconds. This is followed by a late reaction that develops over 8 to 12 hours, involving the recruitment of other effector cells such as lymphocytes. In the case of re-exposure to an allergen, the preformed mediators causing the immediate reaction (an increase in vascular permeability and smooth muscle contraction) are short-lived, and their powerful effects are confined to the vicinity of the activated mast cell. The more sustained late-phase reaction is due to the synthesis and release of signalling molecules from the mast cells. This is also focussed on the initial activation site, and it is the anatomy of this site that determines how quickly the inflammation can be resolved. Thus, the clinical syndrome arising from an allergy depends on the amount of allergen-specific IgE present, the route of allergen entry and the dose of the allergen.

Allergy symptoms can vary from a runny nose in hay fever (due to inhalation of pollen) through to the collapse of the circulatory system, which occurs in systemic anaphylaxis. This can give rise to a variety of fatal effects—widespread vascular permeability leads to a sharp decrease in blood pressure; airways constrict, causing breathing difficulties; the epiglottis swells, leading to suffocation. This fatal combination is anaphylactic shock, and can occur in peanut allergy sufferers on exposure to the allergen.

Despite all the precautions taken by sufferers to avoid peanuts, there are still a significant number of accidental reactions each year, some of which are severe. The number of children admitted to hospital
for general food-related anaphylaxis has increased by 700 per cent since 1990. Researchers have long been trying to find a safe and effective therapy for peanut allergy. Immunotherapy by injection has proved effective for stinging insect allergy, as has oral immunotherapy (OIT) for hen's egg and cow's milk allergy. It was this OIT method that was first investigated three years ago by doctors at Addenbrooke's Hospital as a treatment for peanut allergy. The OIT approach distinguished it from similar previous, but unsuccessful, desensitisation programmes, since these had used peanut injections and not the gentler oral doses.

Four children allergic to peanuts, aged 9 to 13 years, were enrolled in the study, all with a history of eczema, a positive peanut skin prick test and positive peanut serum-specific IgE. They initially took a five-milligram serving of peanut protein, then increased this dose over a period of six months, training their bodies to tolerate at least 800 milligrams of peanut protein per day—160 times the starting amount and equal to five whole peanuts. The subjects continued taking 800 milligrams of peanut protein each day for the following six weeks, after which three of them were able to tolerate the equivalent of 12 peanuts without reaction, and the fourth was able to tolerate 10. There was clearly an improvement in the tolerated dose in all four children, including one who had suffered from anaphylaxis on initially ingesting five milligrams of peanut flour. The OIT had enabled them to ingest at least 10 peanuts without adverse consequences, far more than they would be likely to encounter accidentally—accidental exposure being the main cause of anxiety for peanut allergy sufferers.
Following on from this, a further 18 children aged 4 to 18 years were enrolled in the study and underwent OIT. The dosage was first gradually increased to 800 milligrams of peanut protein per day, then the highest tolerated dose (a target of 800 milligrams per day) was taken for 30 weeks. After treatment, 19 of the 22 children were able to eat at least five peanuts per day, two were eating two to three per day, and the last had dropped out at the start of the study. Additionally, the skin prick test response was much reduced at 6 and 30 weeks. Peanut-specific IgE levels showed a transient rise partway through OIT, followed by a reduction at 30 weeks compared with pre-OIT values. Overall, the amount of peanut that could be tolerated by the subjects increased 1,000-fold—a significant amount, especially for those who had previously reacted to just one milligram of peanut protein.

Researchers are currently working to make this revolutionary treatment more widely available. This would be invaluable to children and adults with severe peanut allergy and a low dose threshold. The children studied represented various severities of allergy, some with a history of anaphylaxis. This peanut allergy investigation is the first to be successful, making it possible for patients to lead a more normal life. Considering the powerful effects of an allergic reaction, it is easy to see why this is such an important breakthrough: for the most severely affected patients, their previously incurable allergy could soon become a thing of the past.

Mrinalini Dey is a 2nd year undergraduate in the Department of Medicine
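The arithmetic implied by the figures above is worth making explicit (these numbers are derived from the article's own values, not taken from the published protocol):

\[
\frac{800\ \text{mg}}{5\ \text{mg}} = 160\times \text{ dose escalation}, \qquad
\frac{800\ \text{mg}}{5\ \text{peanuts}} \approx 160\ \text{mg of protein per peanut},
\]

so the 12 peanuts tolerated by some children after treatment correspond to roughly 12 × 160 mg ≈ 1.9 grams of peanut protein.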
Aspects of Ageing
Andrew Szopa-Comley explores possible explanations for why humans age at the molecular level
We all have a good idea of how ageing affects our bodies. But how much do we know about the underlying reasons for these changes? As we grow old, our bodies become increasingly unable to function normally. The typical features of advancing years are often accompanied by a diminished ability to recover from illness and injury. Many characteristic age-related changes in our bodies are linked to deterioration in tissue function, especially in what is known as the regenerative potential of tissues. The symptoms of this decline can range from serious problems such as atherosclerosis, osteoporosis and a weakening of the immune response to more superficial signs of ageing like wrinkles. The prevalence of many diseases, such as cancer, also increases dramatically with age.

To understand the science of ageing we have to get to grips with three features of the cells that comprise our tissues: their tendency to accrue damage, the consequences of the failure to repair this damage, and the inherent limit on the number of times they can divide.

Damage accumulates over the lifetime of a cell. One scientific theory of ageing links the build-up of cellular damage to the mitochondria—compartments within the cell where molecules derived from food are converted into a form of cellular energy. Some of the unfortunate by-products of this process are molecules known as reactive oxygen species (ROS), which carry unpaired electrons and so are highly chemically reactive. This theory is based on the idea that ROS are responsible for inflicting damage on the DNA and proteins in the cell. The cell uses several enzymes to mop up ROS, but these defence mechanisms are not perfect and are believed to deteriorate with age. Evidence in support of the theory comes from genetically modified mice: mice which lack the genes for these protective enzymes have a significantly reduced life expectancy. However, findings in the nematode worm C. elegans
have shown that loss of one type of protective enzyme actually extends (or at least does not reduce) lifespan. Although appearing contradictory at first, these results are believed to indicate that loss of the enzyme may alter mitochondrial function, outweighing the increased susceptibility of the cell to ROS damage. The study highlights the fact that we still have gaps in our knowledge of a very complex field. It seems ROS are likely to play a role in ageing, but they are by no means the whole story.

Cellular damage is also caused by exposure to UV light and mutation-causing chemicals that can damage DNA in the cell nucleus. This damage occurs through breaks in the double strands of the DNA helix and through changes to the individual bases making up the genetic code. DNA encodes the proteins that are required for the cell to function properly, so damage here can have disastrous consequences. The cell is able to monitor DNA damage through the p53 protein, which can initiate several responses depending on the severity of the damage. In the event of a catastrophe, where damage leaves the DNA beyond repair, p53 can execute the cascade of biochemical events eventually leading to the death of the cell. This is the cellular equivalent of pushing the self-destruct button. Extensively damaged DNA can lead to abnormal proteins being produced, which in some circumstances could allow the cell to divide uncontrollably, causing cancer. The role of p53 is therefore of such paramount importance that it has earned itself the nickname 'guardian of the genome'. p53 is often found to be mutated, and unable to work properly, in a high proportion of human cancers. In mice genetically engineered to have hyperactive p53, accelerated ageing is observed but cancer incidence drops. There is seemingly a fine balance between the anti-cancer effects of molecules such as p53 and the increased ageing that they may cause.
Another answer to the mystery of ageing may lie within the enigmatic entities known as stem cells. Most cells have an inherent limit on the number of times they can divide. Stem cells are different: they are able to replace themselves through a process of self-renewal. This can occur symmetrically (where two identical daughter stem cells are produced) or asymmetrically. Asymmetric self-renewal occurs when one stem cell is produced alongside another cell that has begun the journey along a pathway of differentiation. In other words, the cell starts to get more and more specialised, eventually becoming totally specialised in order to fulfil a particular role, from the long, fat-insulated cells of the nervous system to the white blood cells of the immune system. Stem cells are therefore vital for the replacement of cells that are unable to divide themselves.

The capacity of our bodies to replenish tissues during our lifespan depends on a reservoir of tissue-specific stem cells. Researchers have proposed that the physiological decline in many organs with age could result from stem cell ageing. Ageing of the haematopoietic stem cells that give rise to all blood cell types is associated with a less effective immune system, as the white blood cells that guard us from pathogens are reduced in number. Similarly, in the brain, the decrease in the production of new neurons is linked to a reduction in the number of neural stem cells.

But why should stem cells age? Apart from the gradual accumulation of DNA damage, attention has been focused on the possible role of specialised structures known as telomeres. These stretches of DNA, which can be up to hundreds of thousands of molecular units long, are situated at the end of each chromosome; a single human's genetic code is packed into 23 pairs of chromosomes. The significance of telomeres lies with the events that occur every time a cell divides. At each division the chromosomes must be duplicated to ensure that each of the daughter cells receives a complete set. Each time duplication occurs, a small part of the DNA at the end of each chromosome is lost, so the telomeres gradually shrink. After a certain number of cell divisions, the telomeres reach a critical length and the cell enters a non-dividing state. Upon any attempt to divide beyond this limit, the cell will undergo apoptosis, the process by which cells self-destruct when something has gone wrong.

The link between telomeres and ageing originated from the discovery that increasing the levels of an enzyme capable of extending telomere length (telomerase reverse transcriptase, or TERT) in mice brought about an increase in their life expectancy. Removing both copies of the gene that encodes the TERT enzyme in mice coincided with shortened lifespan, increased frailty and reduced organ function. Intriguingly, recent findings have indicated that some of the disparate theories of ageing are intimately linked. Mice lacking the TERT enzyme were shown to have impaired mitochondrial function and an increased incidence of age-related disorders, and it was discovered that these ill effects were mediated by p53. Many of the anti-cancer strategies utilised by cells depend on p53 and telomere shortening-induced cell death. There is growing evidence to suggest that the presence of these in-built anti-cancer mechanisms may be partly responsible for the ageing of stem cells.

We are still a long way from understanding the fine details of ageing, let alone its fundamental causes. Research has uncovered only some of the connections between the many answers to the question of why we age; it is likely that many more will be discovered. One thing is for sure: with ever increasing life expectancies in the developed world, an understanding of this seemingly inevitable process is becoming ever more important.
An image of p53, a protein so important it has earned the nickname “guardian of the genome”
DNA repair mechanisms may become dysfunctional over time, leading to abnormal protein expression
Andrew Szopa-Comley is a 3rd year undergraduate in the Department of Zoology
FOCUS
Intelligence
BlueSci looks at the science of human intelligence: how do we test it, what controls it, and how do we even define it?
What is intelligence? We all think we
know, yet this question continues to challenge psychologists, neuroscientists and sociologists in equal measure. To provide an answer, we might suggest that intelligence is related to the quantity that IQ tests measure. However, as we shall see, this simple suggestion throws up paradoxes and all sorts of other problems. Attempts to quantify intelligence began long before the birth of experimental psychology and the establishment of modern methods for probing the brain. Throughout the 19th century many scientists thought that craniometry held the answer. They believed that the shape and size of the brain could be used to measure intelligence, and so set about collecting and measuring hundreds of skulls. Paul Broca was a key proponent of this view and invented a number of measuring instruments and methods for estimating intelligence. A great deal of Broca’s work, and that of his contemporaries, was devoted to confirming racial stereotypes. When they were proved wrong they often blamed their scientific theories, rather than retracting their racist views. The precursor of our modern IQ tests was pioneered by the French psychologist Alfred Binet, who only intended it to be used in education. Commissioned by the French government, the purpose of the Binet test was to identify students who might struggle in school so that special attention could be paid to their development. With some help from his colleague Theodore Simon, Binet made several early revisions to his test. The pair selected items that seemed to predict success at school without relying on concepts or facts that might actually be taught to the children. Then Binet had a spark of inspiration: he came up with the idea of standardising his scale. By comparing the score of individuals with the average abilities of children in particular age
groups, the Binet-Simon test could assign each child a mental age. Although Binet stressed the limitations of his scale for mass testing, his idea was soon taken up by those with more expansive testing ambitions. Lewis Terman, from Stanford University, published a further, refined version in 1916 and named it the Stanford-Binet Intelligence Scale, versions of which are still used today. This is now standardised across the whole population and its output was named the Intelligence Quotient, following William Stern’s suggestion. After a somewhat protracted delivery, IQ was born. Two of the most commonly used IQ tests are the revised Wechsler Adult Intelligence Scale and Raven’s Matrices. Despite having very little in common in their procedure, there is a strong correlation between performances on these two tests. The fact that the variance in one test can be used to explain a large degree of the variance in the other suggests that they are in part measuring a common quantity. Yet, after taking each test, an individual is given two different IQ scores. How, then, can any individual IQ test be used to derive a measure of a person’s intelligence? Charles Spearman thought that this difficulty could be overcome by accessing the underlying property that he believed was driving the correlations. Spearman postulated the existence of a single general intelligence factor, ‘g’, and developed a statistical technique, called factor analysis, to compare the correlations and uncover it. Thus, within a group of IQ tests, the one that correlates best with all the others is taken to provide the best measure of ‘g’. Raven’s Matrices are usually considered the best correlate with ‘g’ and are therefore said to carry a “high ‘g’ loading”. Yet ‘g’ has been known to wander, with its precise numerical definition varying depending on which IQ tests are included in the group. In other words, intelligence is a quality defined by our conception of it.
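Spearman's procedure can be illustrated with a toy calculation. The sketch below invents scores for three hypothetical tests, computes the correlations between them, and reads off the loadings on the largest common factor; a crude stand-in for a full factor analysis, with entirely made-up data:

```python
# A bare-bones illustration of Spearman's idea, with invented data:
# scores of five people on three hypothetical IQ-style tests.
import numpy as np

scores = np.array([
    [95, 100, 110, 120, 130],   # test A
    [90, 104, 108, 118, 128],   # test B
    [99, 101, 112, 117, 126],   # test C
], dtype=float)

corr = np.corrcoef(scores)              # pairwise correlations between tests
eigvals, eigvecs = np.linalg.eigh(corr) # eigenvalues in ascending order
g_loadings = eigvecs[:, -1]             # loadings on the largest factor
print(np.round(corr, 2))                # (sign of the loadings is arbitrary)
print("'g' loadings:", np.round(g_loadings, 2))
```

The test with the largest loading is the one that best tracks whatever the three tests share, which is exactly the role the Raven's Matrices are said to play among real IQ tests.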
Example problem from the Raven’s Matrices IQ test. What should go in the box on the bottom right?
The so-called Flynn effect throws up an even more serious challenge for ‘g’. Average IQ scores have crept gradually upwards over the course of history. However, backward projection implies an average turn-of-the-century IQ of around 60, making the majority of our ancestors ‘mentally retarded’ by today’s standards. This is clearly nonsensical and seriously hinders any attempt to link IQ score to genetics—the genetic makeup of individuals has surely not changed substantially over the course of a few generations. Additional backlash has come from those who prefer to focus on the differences rather than the correlations between different tests for IQ. Visual mapping of numerous tests according to the strength of their correlations reveals an interesting pattern. Rather than being dispersed evenly, the tests form clusters. Louis Thurstone used this phenomenon to suggest that the thing we call intelligence may be a composite of multiple different intelligences.
Diagram illustrating how a set of different IQ tests (shown as purple ovals) might relate to a general intelligence factor ‘g’. The test with the greatest overlap provides the best measure of ‘g’
Howard Gardner has taken this a step further by suggesting that the concept of intelligence should be extended to include a wide range of capabilities, including musical, spatial and intrapersonal ability. The murky waters of intelligence testing are clouded further by controversies relating to differences in average performance between genders and social groups. It is known that a large number of items on intelligence tests are culturally loaded, such that certain members of society have an unfair advantage. However, many psychologists are still convinced that real differences between populations exist. Philip Kitcher has suggested that because it is such a controversial topic, research into gender or racial differences in IQ should not be undertaken. Yet intelligence is such an important and interesting subject that few have heeded this suggestion. Even if our understanding is still far from perfect, we have come a long way from the days of relying on measurements of head size and brain weight. The enigma of intelligence is surely worth probing further. We are now entering the quagmire that is the genetics of intelligence, an area of study with an almost 150-year history that has been filled with conflict. The founding father of the heritability of intelligence was Sir Francis Galton, an alumnus of Trinity College, Cambridge, who made a formidable contribution to a wide variety of areas including genetics, statistics, meteorology and anthropology. Greatly influenced by his half-cousin, Charles Darwin, who looked for variation between related species, Galton began studying variation within a single species, and humans were his species of choice. By studying the obituaries of the prominent men of Europe in The Times, Galton came to the hypothesis that “human mental abilities and personality traits, no less than the plant and animal traits described by Darwin, were essentially inherited”. Galton coined two of the most infamous phrases in the heritability debate: ‘nature versus nurture’ and ‘eugenics’. The word eugenics stems from the Greek for ‘good birth’, and Galton believed that in order to improve the human race those of high rank should be encouraged to marry one another. The early part of the 20th century saw the most devastating effect of delving into the heritability of intelligence, as the eugenics movement grew and resulted in mass genocide. To this day, the field remains highly controversial, and the so-called ‘genius gene’ still remains elusive. Galton, on the other hand, spent a large portion of his life working on the ‘nature versus nurture’
London cab drivers were found to have enlarged their hippocampi by memorising the streets of London
argument, and, with the backing of his behavioural studies of twins, he came down strongly on the side of nature. Twin studies have become the standard method of estimating the heritability of IQ and consist of comparisons between twins that were separated at birth (adoption studies), or between genetically identical and non-identical twins raised together. The results are very clear, with identical twins being significantly more similar in terms of intelligence than their non-identical counterparts. This similarity persists regardless of whether or not the twins are raised together and has been reinforced again and again over many decades. Estimates of the heritability of IQ are between 0.5 and 0.8, implying that as much as 80 per cent of the variation in intelligence can be traced to our genetic make-up. However, heritability is a complex issue, plagued by a number of common misconceptions. The observed IQ of an offspring is determined by the combined effects of genotype (the genes held by the offspring) and environmental factors. Twin studies state that for IQ, the genotype makes up the larger partner in this balance. However, our genotype is not just a combination of the genes we inherit from our parents, but also includes how those genes interact—like a cake is not just the combination of flour, eggs, butter and sugar but a whole new entity. This is true for all complex traits, and it makes determining the causative genes very difficult. There have been a number of genome-wide association studies in humans to try to determine the genes responsible for intelligence, but they have proved quite unfruitful. These studies rely on huge populations to find reproducible associations that are not due to chance. Furthermore, perhaps due to the controversial nature of the topic, such studies have failed to receive the necessary funding compared to disease-related studies. In fact, the strongest genetic contenders have also been found in other cognition-related studies, such as APOE, a gene which is a risk factor for Alzheimer’s disease, and COMT, which is implicated in schizophrenia. This backs up suggestions that these diseases are related to intelligence.
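One classical way of turning such twin comparisons into a number is Falconer's formula, which doubles the gap between identical-twin and non-identical-twin correlations. A minimal sketch, with correlation values assumed purely for illustration within the range reported for IQ:

```python
# Falconer's estimator: heritability from twin correlations.
# The correlations below are assumptions for illustration, chosen
# within the range reported for IQ in the twin literature.
r_mz = 0.85   # IQ correlation between identical (monozygotic) twins
r_dz = 0.55   # IQ correlation between non-identical (dizygotic) twins

h2 = 2 * (r_mz - r_dz)   # heritability estimate
c2 = r_mz - h2           # shared-environment component
e2 = 1 - r_mz            # unique environment (and measurement error)
print(f"heritability = {h2:.2f}, shared env = {c2:.2f}, unique env = {e2:.2f}")
# -> heritability = 0.60, within the 0.5-0.8 range quoted above
```

Real analyses are far more sophisticated, but the logic is the same: identical twins share all their genes, non-identical twins about half, so the difference between the two correlations indexes the genetic contribution.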
An artist’s impression showing the location of the hippocampus within the brain
Michael Meaney of McGill University, Montréal, argues for the other side of the heritability debate, saying that “there are no genetic factors that can be studied independently of the environment”. Meaney suggests that there is no gene which is not influenced by its environment, and this is part of a new wave of thinking about the heritability of IQ. A group in Oxford demonstrated the influence of the environment exquisitely in a recently published study. Using MRI, they could see changes in specific parts of the brain which correlated with changes in scores on IQ tests in a group of teenagers over a four-year period. They saw growth in areas of the brain responsible for verbal communication or hand dexterity in teenagers whose test scores increased over the years. Even more strikingly, in those whose scores either stayed the same or went down, they saw no changes in these parts of the brain. This suggests that our genes are more of a starting line for our potential intellect, and that our intelligence can be drastically altered by our environment. Height provides an easy illustrative tool. Over the past century, we have seen a huge increase in average height. This is due to improvements in nutrition, not because everyone’s height genes suddenly kicked in at the same time. The same is true for intelligence: if one works hard and practises at something, this environment will lead to improvements in intelligence. London cab drivers were part of a study which showed that, by having to memorise the street names of London to get their licence, they actually increased the size of the hippocampus, a part of the brain needed for memory. Where did we get this ability to improve our intelligence from? How did our particular species, Homo sapiens, become intelligent enough to develop technology that allows us to dominate the globe? Why, of all species, should it have been us that ended up so clever? Many of the cognitive abilities that we think of as ours, such as intelligence, are actually not unique to us at all but are shared with many other species. Crows and ravens are capable of solving complex problems; jays and squirrels can remember the locations of thousands of food caches for months on end; even the humble octopus uses shells as tools. Yet the capabilities of these animals seem to be surpassed by those of chimpanzees and other great apes. They show insight learning, meaning that they can solve a novel problem on the first attempt without any trial and error; they use a wide variety of tools, including
Crows, squirrels and chimpanzees all display some intelligent characteristics. So how is human intelligence different?
spears for hunting small mammals; and they engage in deception of others. One of Jane Goodall’s chimps, Beethoven, used tactical deception to mate with females despite the presence of the alpha male, Wilkie. Beethoven would provoke Wilkie through insubordination within the group and then, while Wilkie was occupied with reasserting his authority through dominance displays, sneak to the back of the group and mate with the females there. Many primatologists have claimed that this sort of deception lies at the heart of understanding human cognition, because to be able to lie to someone you have to have a theory of mind. You have to be able to place yourself inside the mind of others and to understand that they are likely to react in the same way as you would in that situation. So can a chimp really put itself in another chimp’s shoes? We know very little about what it is that makes our brain so special. It certainly isn’t the largest in the animal kingdom: the sperm whale has a brain six times the size of ours. The highly intelligent corvid birds—crows, rooks and jackdaws—have tiny brains compared to camels or walruses, two species not known for their cerebral feats. So if our brains are large for our body size but otherwise unremarkable, and many of our intelligent traits are shared with other members of the animal kingdom, then what could it have been that catapulted us into the position of being able to use our intelligence to dominate the world around us? Spoken language is one possibility. While we can teach apes to understand English to a certain extent, they can’t physically speak—the shape of their larynx and mouth doesn’t allow it—and animals that can speak, such as parrots, don’t necessarily have any actual understanding of what they’re saying. Being able to use spoken language is a wonderful adaptation for living in social groups; it enables any individual in the group to communicate a complex concept or a new idea to the others effortlessly, and then the others can contribute their own ideas or improvements just as easily. Arguably, language might even have given us the ability to ‘plug in’ to other
people’s brains, allowing the development of a ‘hive mind’, where the brainpower of many people could be pooled together. While the actual evolutionary story of intelligence might be lost in the mists of time, we may still wonder why it was us who became so brainy. Some have argued that we had to, to be able to cope with living in social groups much larger than those of chimpanzees. We needed intelligence to be able to maintain alliances, form coalitions, and engage in deception. Or could it have been for making love rather than war? Intelligence might have been an indicator of good genes and a healthy upbringing to a potential mate; after all, many cases of learning difficulties come about due to malnutrition or disease early in childhood. Human intelligence has evolved surprisingly quickly; it has been built up over millions of years from the already impressive cognitive abilities of our ape ancestors, and accelerated massively through our use of tools and language. Now, at the dawn of the 21st century, it seems likely that we will soon see a leap of the sort that we have never seen before: for better or for worse, it will not be long until we can use electronic technology to ‘improve’ our brains as we wish. Many are already working towards creating intelligence. The idea of imbuing a machine with human capabilities has occupied artificial intelligence (AI) researchers for more than 60 years. The machines in question are generally computers, since we already know these can be programmed to simulate other machines. The best-known test we have for assessing computer intelligence is the Turing test, proposed in 1950 by Alan Turing. He suggested that if a machine could behave as intelligently as a human then it would earn the right to be considered as intelligent as a human. Whilst no computer has yet passed the Turing test, many AI researchers do not see it as an insurmountable task. Still, we might be uncomfortable about the idea that a computer could achieve human intelligence. Thus we return to the problem of how we should define intelligence. The philosopher John Searle proposed the ‘Chinese
room’ argument to challenge the standards implicit in the Turing test. Imagine a man locked in a room into which strings of Chinese symbols are sent. He can manipulate them according to a book of rules and produce another set of Chinese characters as the output. Such a system would pass a Turing test for understanding Chinese, but the man inside the room wouldn’t need to understand a word of Chinese. Considering that the only evidence we have for believing that anyone or anything is intelligent is our interactions with them, we might ask whether we really know if anyone is intelligent. The major challenge to the ‘Chinese room’ is that while the man inside does not understand Chinese, the system as a whole, including the man, the room, the lists of symbols and the instruction book, does understand Chinese. After all, it is not the man by himself who answers the questions; it is the whole system. And if the system does not understand Chinese, despite being able to converse fluently in it, then who does? This goes back to the original question behind the Turing test: the only way we can judge other people’s intelligence is through our interactions with them. How can we judge a computer’s intelligence, if not in the same way? The ‘Chinese room’ problem focuses on only one aspect of intelligence: the ability to use and understand language. Creating programs which can understand natural language patterns is one of the major focuses in AI research today, with such programs being used to interpret commands in voice control applications and in search engines to allow more intelligent searching. For example, intelligent internet searches look to find the answer to a question holistically, rather than just extracting keywords from it. However, there are many more aspects to intelligence than the use of language. We might suggest that things like the ability to apply logic, to learn from experience, to reason abstractly or to be creative are examples of intelligent behaviour. While the idea of trying to program a computer to be creative may sound implausible, one approach could be to use genetic algorithms. Starting with random snippets of code, each generation undergoes a random mating process to shuffle and combine the fragments, followed by selection of the best-performing ‘offspring’ for a particular task. The program evolves to give better and better solutions. This method can generate unexpected solutions to problems, and do so as well as an experienced programmer.
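This evolve-and-select loop is easy to sketch. The toy example below breeds random strings towards a target word rather than breeding program fragments; the target, population size and mutation rate are all arbitrary choices for illustration:

```python
# A toy genetic algorithm: mate, mutate, select, repeat.
import random

TARGET = "intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):                      # higher = closer to the target
    return sum(a == b for a, b in zip(s, TARGET))

def mate(p1, p2):                    # crossover: combine two parents
    cut = random.randrange(len(TARGET))
    child = p1[:cut] + p2[cut:]
    if random.random() < 0.3:        # occasional mutation keeps exploring
        i = random.randrange(len(TARGET))
        child = child[:i] + random.choice(ALPHABET) + child[i + 1:]
    return child

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
generation = 0
while fitness(max(population, key=fitness)) < len(TARGET):
    parents = sorted(population, key=fitness, reverse=True)[:20]  # selection
    population = [mate(random.choice(parents), random.choice(parents))
                  for _ in range(100)]
    generation += 1
print(f"matched '{TARGET}' after {generation} generations")
```

Swapping strings for snippets of code, and letter-matching for a task-specific fitness test, gives the genetic programming approach described above.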
The Roomba vacuum cleaner: too mainstream to be intelligent?
Helen Gaffney is a 3rd year undergraduate in the Department of History and Philosophy of Science
Jessica Robinson is a 3rd year PhD student in the Department of Oncology
Ted Pynegar is a 3rd year undergraduate in the Department of Zoology
Liz Ing-Simmons is a 4th year undergraduate at the Cambridge Systems Biology Centre
AI applications are all around us: Google Translate uses AI technology to translate complex documents; the Roomba autonomous vacuum cleaner can learn its way around a room; and anti-lock braking systems prevent skidding by interactively controlling the braking force on individual wheels. Most people, however, would not consider these systems ‘intelligent’. This is known as the AI effect—as soon as something becomes mainstream, it is not considered to be ‘true’ AI, despite having some intelligent properties. As a result, AI applications have become ubiquitous without us really noticing. Intelligence comes in many forms, and trying to describe, examine and create it is proving to be more difficult than could have been imagined. Animals continue to surprise us with their mental abilities, eroding our ideas of what makes us special. Measuring our own intelligence has come a long way from measuring head size, but we still cannot decide what constitutes intelligence. Nature versus nurture rears its ugly head once more, splitting opinions and creating controversy. Our efforts to create intelligence outside the realms of nature are still far from the apocalyptic Hollywood science fiction movies and are currently limited to a slightly more intelligent vacuum cleaner. However, with our understanding of our own minds constantly improving, perhaps it won’t be long before Isaac Asimov’s robots become a reality.
Isaac Asimov wrote I, Robot, a collection of science fiction stories in which intelligent robots are commonplace
Capturing Change Tom Bishop discusses carbon dioxide capture as one solution to climate change “For my generation, coming of age at the height of the Cold War, fear of nuclear winter seemed the leading existential threat on the horizon. But the danger posed by war to all humanity—and to our planet—is at least matched by climate change.” — Ban Ki-moon, UN Secretary-General Climate change is arguably mankind’s greatest
ever challenge. As governments, international organisations and industries struggle to come up with the least inconvenient solution to such a convoluted problem, the scale of the challenge continues to grow; we are still increasing our fossil fuel use by 2.5 per cent each year. The scientific consensus may not be clear on predictions for the future, but two things are indisputable: our climate is changing and we are at least in part responsible. More worrying is that we do not really know who, what, when or where will be affected, or how badly. The possibility of carbon dioxide capture and storage (CCS) was not recognised as early as other mitigation options, but it is now starting to receive some well-deserved attention. This umbrella term covers a vast array of physical processes, techniques and equipment. Essentially it involves either separating carbon dioxide (CO2) from various large ‘point sources’, for example power plants, or simply removing it directly from the air. The CO2 can then be transported to a storage location, where the aim is long-term isolation from the atmosphere. The first time CO2 was pumped underground was in the early 1970s, somewhat ironically by the oil industry, in a process called enhanced oil recovery. This involves injecting a gas that includes CO2—not air, as it would catch fire—into an oil reservoir in order to force more ‘black gold’ up to the surface. This ‘CO2 flooding’ is still commonly used today, with over 120 registered sites worldwide.
However, CO2 capture was first considered by the Massachusetts Institute of Technology, which started a programme in 1989. This was quickly followed by commercial development, which was indirectly boosted by the Norwegian government’s new tax on CO2 emissions in 1991. But it was not until November 2002 that CCS really entered the global stage, when the Intergovernmental Panel on Climate Change (IPCC) decided to hold a workshop to assess the literature on the subject. Following this, at the 20th session of the IPCC in Paris in 2003, an outline and timetable were approved for CCS’s own Special Report. As one of only seven such reports, and the only one directly related to cutting emissions, the IPCC’s endorsement and preparation of it show the field’s high regard in scientific circles. However, it is still not getting the public attention or funding required to develop to its full potential. The current scientific consensus favours underground storage of CO2 in depleted oil and gas reservoirs or saline aquifers. Natural gas is already stored temporarily in such sites, and CO2 has been naturally stored there over geological time—millions of years, not just the thousands needed to make CCS feasible. The IPCC also found that globally many large point sources of CO2 are within 300 kilometres of potential geological storage sites, making transportation by pipelines relatively feasible and cheap. Without CCS it will cost up to 70 per cent more to hold global temperatures at 2°C above pre-industrial levels.
Night-time lights: 30 to 60 per cent of CO2 emissions from electricity generation could be suitable for CCS by 2050
Nonetheless, some concerns remain. How much CO2 can be stored, and for how long? The current lack of research on these issues, rather than acting as a barrier, should motivate further work. Observations from engineering models and geological analogues suggest that over 1000 years 99 per cent of CO2 can be retained in these storage sites. Below about 750 metres, CO2 enters a dense, supercritical state: at 800 metres it occupies just a tenth of the volume that it did at 250 metres. Despite this, the CO2 is still less dense than its surroundings, so an impermeable ‘cap-rock’ overlying where the CO2 is injected is also essential to prevent the gas simply rising back to the surface. Similar cap-rocks are found above oil reservoirs, so this is a potential avenue for collaboration with the oil industry. For a global project of this magnitude, partnerships like this are key. Not only must governments come to agreements on how best to utilise and develop CCS, but the fossil fuel industry must be brought in too. Their technical expertise, business interests and financial backing make them vital. As well as their experience in pumping CO2 into reservoirs and underground imaging, they are also the most knowledgeable when it comes to leak prevention and risk management. The first operational CCS project was even started by Norway’s biggest oil company, Statoil, in 1996, in reaction to the Norwegian government’s implementation of a CO2 tax. Located in the North Sea, the Sleipner gas field produces natural gas with unusually high levels of CO2—around nine per cent, significantly higher than the 2.5 per cent maximum acceptable commercially. In order to avoid paying the $50 tax per tonne of CO2 released, and to explore this potentially lucrative technology, Statoil pumped all of the excess CO2 back deep underground. It has continued to do so to the present day, providing the longest case study for monitoring a storage reservoir. In the following year, two forces in industry combined and worked out a mutually beneficial scenario—CO2 produced from fossil fuels in North
Dakota, USA, was to be piped just over the border to the Weyburn oil field, Canada, where it could be used for enhanced oil recovery. Though geological formations currently provide the best storage options, other ideas have been suggested. The ocean is a possibility: either by injecting CO2 and allowing it to dissolve, or by sending it via pipes to great depths where it is denser than water and would form a ‘lake’. However, such alteration of the chemical environment could have disastrous consequences for marine life. Both experimentally and through modelling, injection of CO2 has been shown to significantly increase the acidity of our oceans, which would kill many marine organisms. Other proposals include injecting CO2 into reactive rock formations where some of it could be taken up, or passing it over methane-bearing sediments—this has the added advantage of displacing methane, which can then be used as a gas power source. The primary limitation for all these scenarios has been, and will continue to be, cost. In the eyes of power companies, projected expenses of $25 to $150 per tonne of CO2 captured are not attractive. This cost comes largely from the energy required for capture, with transport and storage making up the rest. Many consumers may initially think that companies should bite the bullet, but inevitably such costs will be passed on to the consumer—full CO2 sequestration in the US would raise electricity bills by 50 to 100 per cent. As technology advances, the cost of capture will inevitably fall. Combined with relatively short transport distances and economic incentives like enhanced oil recovery and methane extraction, this could lead to commercial CO2 storage becoming a reality with little to no government incentive. In the meantime, though, such incentives are crucial in order to make CCS more commercially viable and hence bring the technology forward. CCS’s potential to mitigate the effects of climate change, and to provide a stepping-stone to entirely renewable energy generation, means it cannot remain on the periphery any longer.
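The capture-cost and bill-increase figures quoted above can be checked against each other with some back-of-envelope arithmetic. All three inputs in the sketch below are assumptions for illustration: a mid-range capture cost, a typical emission rate for coal-fired generation, and a rough retail electricity price:

```python
# Back-of-envelope check on CCS costs, with assumed inputs.
capture_cost_per_tonne = 50.0   # USD per tonne CO2 (mid-range of $25-150)
emissions_kg_per_kwh = 0.9      # assumed for coal-fired generation
retail_price_per_kwh = 0.10     # USD per kWh, assumed retail price

added = capture_cost_per_tonne * emissions_kg_per_kwh / 1000  # USD per kWh
print(f"extra cost: ${added:.3f}/kWh, "
      f"about {100 * added / retail_price_per_kwh:.0f}% of the retail price")
# -> roughly $0.045/kWh, i.e. ~45%; costs towards the top of the
#    $25-150 range push this into the 50-100 per cent band quoted above
```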
Pipelines are a viable option for the transport of CO2 to storage sites
Globally, power stations emit nearly 10 billion tonnes of carbon dioxide per year. This is roughly the mass of a one-kilometre-high mountain
Tom Bishop is a 4th year undergraduate in the Department of Earth Sciences
The Race to the Edge Beth Venus discusses the future of manned space missions November 2011 saw the quest to send humans to
Mars take a step forward with the launch of Russia’s Phobos-Grunt probe. The intention was not only to return soil samples from the surface of the planet, but to test whether living organisms could survive the journey in a biomodule on board. Together with the success of Mars500, in which six men were isolated in a mock spacecraft for seventeen months, and Barack Obama’s announced ambition to send humans into deep space, touchdown on Mars’ rust-coloured surface could be celebrated by mankind as soon as 2030. The inspiration of a new generation of scientists could be captured, and a greater understanding of the environment and evolution of Mars attained. The many problems associated with space travel and survival will need solutions before this can become a reality, but the research is already under way. 2014 will see the testing of NASA’s Orion spaceship, designed to carry humans beyond low-Earth orbit. It is this craft that could get humans beyond the Moon and out to asteroids, Phobos (Mars’ largest moon), and Mars itself—all potential destinations for space crews. Its first test—where Orion will be flung into an elliptical orbit taking it to an altitude higher than any spacecraft built for humans has reached in decades—will be unmanned. Yet it will provide crucial knowledge on how to safely return humans home from a voyage. How Orion performs on re-entry into the Earth’s atmosphere will guide us in designing a spacecraft to survive re-entry at greater speeds from further afield. All being well, the first manned attempt will occur within the next two decades. However, such a test
A model of Russia’s Phobos-Grunt probe
relies on the outcome of debates currently occurring in the United States Congress about financing NASA. It is possible that any financial limitations to our exploratory power can be resolved by uniting globally. The European Space Agency (ESA) and NASA may be joining with Roscosmos, the Russian Federal Space Agency, to launch their ExoMars satellite to learn more about the atmosphere of Mars. This international effort to increase our knowledge of Mars using rovers and satellites gives hope that a similar effort will succeed in seeing the first people to the planet. This summer was a case in point, as representatives from the United States, South Korea, Europe, Japan and Canada gathered as part of the International Space Exploration Coordination Group. In their recently published Global Exploration Roadmap, the group laid out two technological routes that would advance plans to set foot on, and eventually set up home on, Mars or elsewhere in the solar system. Either another Moon landing or grappling with an asteroid would sufficiently hone technologies for missions further afield. Their vision is one of exploring the richness beyond our horizon, but also of ensuring mankind’s survival. Intercepting and redirecting a misguided asteroid would not only require advanced spacefaring technologies but is one way that we could avoid a repeat of the mass extinction events that have previously devastated the Earth. It would be too late to gain this know-how when a crisis is imminent, reaffirming Obama’s words that manned space exploration is not a dispensable luxury. Robotic precursors are already being used to solve the complexities of orbiting and landing on such a low-gravity body as an asteroid. In 2005, Japan’s Hayabusa sample-return spacecraft orchestrated a soft landing on the asteroid 25143 Itokawa, the first of its kind. Soon to be refined by Hayabusa 2, progress in these types of landings improves the chances of humans accompanying robots in the near future. Similarly, landing on Phobos has been proposed as a precursor to landing on Mars. A manned lander headed for Mars needs to be capable of atmospheric entry and return to orbit; Phobos, however, is a low-gravity body with no atmosphere, making landing less costly and already within our technological grasp. One pivotal advantage of this scenario is that stopping off on Phobos, so tantalisingly close to Mars, would make the prospect of humans finally reaching our neighbouring planet significantly
Mars (right) and its largest moon, Phobos (left), some of the first targets for manned interplanetary travel
more tangible. Not only that, but astrobiological research could be conducted on Mars, using rovers conveniently remote-operated from Phobos, without contaminating its surface with life from Earth. The question of life surviving the long journey to Mars was the subject of the Living Interplanetary Flight Experiment (LIFE) module aboard Phobos-Grunt. Hardy organisms such as water bears and yeast, selected as missionaries to our neighbour, would have experienced life outside Earth’s magnetosphere, which protects our planet from the harmful solar wind. This would test the possibility of transpermia, the idea that life on Earth may have been propagated from the nearby universe via micro-organisms lodged snugly in asteroids. If micro-organisms could be shown to withstand the duration of such a challenging mission, the hypothesis that life on Earth could have had an extraterrestrial origin would rise in stature. Moreover, if they can do it, so can we—learning how to protect humans from solar and cosmic radiation in space is one of the aims of a mission to an asteroid prior to Mars. In this spirit, the China National Space Administration will complete its Kuafu satellites in 2012, facilitating space weather forecasting. Being able to forecast changes in conditions in the space dividing Earth and Mars, and thus protect crews accordingly, will make a manned mission more viable. Making these prospects yet more likely, the effects of living in space for extended periods of time are being analysed at the International Space Station. Whilst Mars500 tested the psychological effects of long-term isolation, the impact of weightlessness and radiation exposure on astronauts inhabiting the space station will provide a basis on which to plan missions capable of returning a healthy crew. Another vital aspect of research at the ISS is into how spacecraft systems can be maintained in orbit, so that not only human but spacecraft health can be sustained for long-duration missions.
The Moon may be an important stepping stone to deep space travel, and Russia aims to found humanity’s first outpost there by the 2030s. Prime Minister Vladimir Putin announced the plan upon this year’s 50th anniversary of Yuri Gagarin becoming the first man in space. Such a move would coincide with the United States’ proposed Mars landing, if both ambitions prove to be on realistic timescales. Establishing a base on the Moon would not only enable helium-3 mining as a potential energy source for continued life on Earth, but would also enhance our ability to live beyond our home planet. Interstellar travel and colonisation are aspects of our imagination that may take a few centuries to turn into reality, but for now, the reach and focus of our robotic orbiters is extending to greater distances across the solar system. As a joint venture between the United States and Europe, the Jupiter Europa Orbiter, intended for launch in 2020, will seek to uncover whether Europa, one of Jupiter’s largest moons, possesses a habitable sub-surface ocean. Should the results prove positive, this will boost confidence in the belief that environments hospitable to life exist elsewhere in the Universe. However, developing interstellar space travel will rely on first executing manned interplanetary trips. For the approaching surface of Mars to finally fill the vision of human eyes, a range of manned and unmanned missions must succeed in discovering the essentials for a triumphant Mars landing. With a global effort we could expand our understanding of what it will take to achieve manned missions and determine how plausible it is to make the harsh environment of space accommodating to life. Travelling into the future, we may find ourselves turning to the horizon of the solar system as we once turned to the horizon of Earth. Unfortunately, Phobos-Grunt’s engines failed to fire after separating from the launch rocket, and at the time of production the spacecraft is stranded in Earth orbit. Recovery is being attempted.
Beth Venus is a 1st year undergraduate studying Natural Sciences
Computers, Codes and Cyanide Jordan Ramsey explores the persecuted genius of computing pioneer Alan Turing Described as the father of computer science, a
Alan Turing memorial at Bletchley Park, where he helped crack the Enigma code
code breaker in World War II and a pioneer in the world of artificial intelligence, Alan Turing was a remarkable man. He was also a proud homosexual and a runner of Olympic calibre. His untimely death at the age of 41 leaves us wondering just what more he could have achieved. This year marks the centenary of Alan Turing’s birth; celebrations are planned at King’s College, Cambridge, and throughout the world. Turing was born in Maida Vale, London, in 1912. His father was a member of the Indian Civil Service and, as a result, he and his brother John spent a good deal of their youth in foster homes while their parents returned to India. Like many other geniuses both before and after him, as a young boy Turing felt misunderstood and isolated. His loneliness only finally evaporated when he left home for Sherborne School and met Christopher Morcom. Alan idolised his new-found kindred spirit, finding reasons to frequent Morcom’s favourite libraries, exchanging letters and collaborating on scientific projects with him. Morcom’s death during his last year at Sherborne had a great impact on Turing, fuelling an ambition to fulfil his friend’s legacy. There is even speculation that Morcom’s death influenced Turing’s work on artificial intelligence as he analysed the relationship between the material and the spiritual. Morcom’s death solidified in Turing a determination to improve his grades and earn a scholarship to Cambridge. After two failed attempts to get into Trinity College, Alan finally received a scholarship to his second choice, King’s College, in 1931. While reading for a degree in mathematics, Turing pursued rowing and later running, while also coming to grips with his sexuality. During this time, he completed impressive work on the central limit theorem in statistics and secured a fellowship at King’s in 1935. It was then that he turned his attention to a problem posed by the mathematician David Hilbert, the Entscheidungsproblem, producing the Turing machine, which forms the foundation of our modern theory of computation. The Turing machine is a theoretical device able to solve any computable mathematical problem given the appropriate algorithm, or set of instructions. His extension of this concept to a ‘Universal Turing Machine’ arguably paved the way for the modern stored-program computer, in which program and input data share the same memory. Prior to this, computers were either constructed with a single hardwired program or reprogrammed with an external punched tape that had to be fed into the machine.
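The idea can be made concrete with a toy simulation. The sketch below implements the bare mechanics of a Turing machine (a rule table, a tape and a read-write head) for a trivial task, bit-flipping; the same framework, with a richer rule table, covers any computable procedure:

```python
# A minimal Turing machine: a finite table of rules reads and writes
# symbols on an unbounded tape. This example machine flips every bit
# and halts at the first blank cell.

def run_turing_machine(rules, tape, state="start"):
    tape = dict(enumerate(tape))     # sparse 'infinite' tape
    head = 0
    while state != "halt":
        symbol = tape.get(head, " ")             # blank cell by default
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != " ")

# rule table: (state, symbol read) -> (symbol to write, move, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run_turing_machine(flip_bits, "10110"))    # -> 01001
```

A universal machine is one whose rule table can read another machine's rule table off the tape and simulate it, which is precisely the sense in which program and data share the same memory.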
After obtaining his PhD at Princeton University in 1938, Turing was recruited to work as a cryptanalyst for the Government Code and Cypher School at Bletchley Park. His eccentricities were noted by colleagues: in the summer, Turing would cycle to work wearing a gas mask to ward off hay fever, and he tied his teacup to the radiator to prevent his co-workers from stealing it. Despite his bizarre behaviour, Turing emerged as a brilliant cryptanalyst. He devised plans to build a machine called the ‘bombe’, after the original Polish ‘bomba’ machines, to decipher messages from the German encryption machine, Enigma. The bombe was much faster than its predecessor and eventually led to the construction of the Colossus, another successful decoding machine. Towards the end of the war it is said that Churchill would read Hitler’s messages at lunch, while Hitler himself would read them later that day at dinner. Amidst all this, Turing still found time for romance and drama, befriending a fellow cryptanalyst named Joan Clarke, who soon became his fiancée. After they were engaged, however, Turing confessed to having ‘homosexual tendencies’; Clarke was reportedly unsurprised. Though Clarke was perfectly content to continue their engagement, the scrupulous Turing was eventually moved to call it off. It was after these tumultuous years that Turing was awarded the Order of the British Empire for his work at Bletchley Park, deemed by Churchill to be the single biggest contribution to the Allies’ victory. After the war, Turing’s work for the Government Communications Headquarters (GCHQ) continued, but the secrecy surrounding the war effort prevented him from disclosing this or any of his significant achievements at Bletchley Park. At the National Physical Laboratory in London, where he took a post, he quickly became discouraged and frustrated by his apparent lack of progress. Turing left before the realisation of his plans for an electronic stored-program computer, the Automatic Computing Engine (ACE), which was later scaled down to a Pilot ACE. Turing obtained a Readership at the University of Manchester in 1948. Here he began his work on artificial intelligence, devising the Turing test as a means to evaluate a machine’s intelligence.
A modification of a popular parlour game, the Turing test deems a machine intelligent if it can converse with a human without giving away its identity. Today the Turing test is implemented in the annual Loebner Prize competition. Turing’s prediction that by the year 2000 a machine would pass the test 30 per cent of the time within five minutes of conversation was an ambitious goal, but the most successful program in the 2008 Loebner Prize competition was only one vote shy of this mark. Turing clearly found his work stressful, and as such he pursued running throughout his career. During the war, he often reached London by foot when summoned for meetings, running the 40 miles from Bletchley Park. Turing placed fifth in the 1948 Olympic marathon trials with a time of 2 hours and 46 minutes. This was only 11 minutes slower than the Olympic champion that year, a testament to his abilities and to the amount of stress he was under! Unfortunately, an injury prevented him from continuing to run competitively. Turing’s successful career was also rudely interrupted by a charge of gross indecency. Arnold Murray, a 19-year-old man he met outside a cinema in Manchester, became Turing’s lover in 1952. Soon after they began their affair, Murray and a friend robbed Turing’s home. His young lover felt secure that Turing would not report the theft, since doing so would inevitably require him to admit his homosexuality to the police: homosexual acts in the United Kingdom, in public and private, were illegal under the Criminal Law Amendment Act 1885. Despite this, Turing did indeed report the burglary, and in the process of the investigation was charged with “gross indecency with a man”. His security privileges as a cryptanalyst were revoked and a year-long regime of forced oestrogen therapy ensued, drastically altering his formerly athletic body. Despite this setback, Turing coped. Hormone therapy was stopped, and a second Readership was created for him at the University of Manchester in 1953. Turing’s friends and family were shocked to hear of his death on the 8th June 1954. His cleaner discovered his body, an apple with several bites taken from it lying beside him. The apple was never tested, but it is thought to have been dipped in cyanide. Most believe his suicide stemmed from his persecution as a homosexual. Some, including his mother, hold that his death was an accident, since he had been experimenting with cyanide and was often careless with chemicals. Conspiracy theorists claim he was murdered to protect the secrets with which he had been entrusted as a cryptanalyst. Whatever the reason, Turing’s premature death robbed the world of one of its greatest minds. It wasn’t until 2009 that Prime Minister Gordon Brown made an official apology for the way Turing had been treated after the war.
Rebuild of the ‘bombe’ designed by Turing at Bletchley Park
Jordan Ramsey is a PhD student in the Department of Chemical Engineering and Biotechnology
Science’s Royal Beginnings Nicola Stead takes a look back at the origins of the Royal Society and its founding members
Christopher Wren (left) and Robert Hooke (right), two of the founding members of the Society
The current global economic turmoil, and its resulting austerity measures, are increasingly putting pressure on scientists to improve the social impact and application of their research. Whilst this can turn into a bureaucratic exercise for modern scientists, it was this very same necessity and desire to apply science that was at the heart of the formation of the Royal Society—the world’s longest continuously running scientific society. One late November evening in 1660, against the backdrop of a country beginning to recover from the great political divides caused by the civil wars, twelve eminent men gathered in London. Putting aside the differences in their pasts, these gentlemen met at Gresham College to hear a lecture given by the young astronomer Christopher Wren. Many of those present were already members of the so-called ‘Invisible College’—a group of notable men including Wren, Robert Hooke, Robert Boyle and John Wilkins, who met in the gardens of Wadham College, Oxford. They used their meetings to perform experiments and discuss natural philosophy, or as we now call it, science, and they were very keen to expand their group. The gentlemen who met that November night were also ardent followers of the Baconian method of science, which had recently emerged from Cambridge alumnus Francis Bacon’s Novum Organum, published in 1620. Bacon encouraged abandoning the Aristotelian method of science and suggested placing emphasis on cooperative research, using empirical methods to gain knowledge about the natural world. Driven by the key Baconian principle that ‘knowledge is power’ and a genuine desire to deliver applicable science, these gentlemen set out to formalise their ‘Invisible College’.
In signing their names in a ledger, they became the first fellows of the ‘Colledge for the Promoting of Physico-Mathematicall Experimentall Learning’ (sic). Effectively, they accelerated the movement of science away from the somewhat amateurish hobby of the aristocracy, performed in country manor house laboratories, to a more centralised, open and collaborative affair. The Society’s motto, ‘Nullius in verba’, loosely translates as ‘Take nobody’s word for it’, and it reveals the Society’s desire to be at the frontline of science. Each week at their meetings at least one experiment would be performed, ranging from dissecting a dolphin to transfusing blood from a sheep to a human and looking at slides under the newly designed microscopes. They were also particularly intrigued by foreign lands. In those early days, Tenerife was a particularly popular discussion point, as its central peak was thought to be the tallest known to man. As a result, the Society looked to study the fluidity of air by designing experiments to be conducted on the island. They would establish, for example, whether sand in an hourglass flowed faster at the peak of the mountain, or whether altitude affected the ability of birds to fly. In the light of modern times, their experiments seem very basic. However, at the time relatively little was known about our world, and everything was open to investigation. Unfortunately, their questions were left unanswered, as the expedition was never able to take place. Nonetheless, the Society was never at a loss as to what to discuss. They were sent communications detailing novel discoveries from all over Europe. From Christopher Merrett they received observations outlining the practice of double fermentation, which is still used today in making champagne, whilst Antonie van Leeuwenhoek sent descriptions of micro-organisms. During this time, founding member Henry Oldenburg established and financed the Society’s journal, The Philosophical Transactions of the Royal Society of London, to publish results and correspondence. First published in March 1665, it is the oldest scientific journal and is still in press today. The Society gained its royal status and name in 1662 with a Royal Charter granted by the newly restored King Charles II. The King, whose annual income was only £1.2 million, was very attracted by the Society’s promise of monetary remuneration from the application of science. In return, the charter would grant the Society many benefits, including the right to free press without
Burlington House, the home of the Royal Society from 1857 until 1968
censure. Under the King, many of the Society’s projects indeed proved of practical worth to the country. John Evelyn wrote about the correct management of forests to maintain a steady supply of timber for the building of naval ships. Similarly, in a project for the navy, Hooke penned his famous law describing the properties of springs whilst trying to make a watch that would measure longitude. Fellows of the Royal Society were also key players in the rebuilding of the capital after the Great Fire of London in 1666. Wren, for example, designed St Paul’s Cathedral as it still stands today. Much of the Society’s work was truly inspired, but it was often not as applicable as the King would have liked, and the Society was often subject to ridicule in satirical plays and novels, such as Jonathan Swift’s famous Gulliver’s Travels, which mocked the Society’s fascination with foreign lands. Throughout its history, the Society has made errors in judgement. Hooke, for instance, objected to a letter from Isaac Newton describing Newton’s observations of light passing through a prism. Further disservice was done to Newton when the Society passed up publishing his groundbreaking Principia Mathematica in favour of Francis Willughby’s De Historia Piscium, which turned out to be a flop and a great economic burden to the Society. Luckily another fellow, Edmond Halley, of Halley’s Comet fame, funded the publication of Newton’s work, and Newton would eventually put aside his resentment and become president of the Society. One truly remarkable and defining trait of the Society, evident from the beginning, was its extraordinary ability to overcome obstacles caused by huge differences in background. Hooke was the son of a priest and as such was an impoverished academic, whilst Boyle was the wealthy son of the Earl of Cork; Wilkins was a stout supporter of Cromwell, whilst Sir Robert
Moray was a royal courtier. Nationality, too, was no impediment to gaining fellowship—Oldenburg was a German national, and later Benjamin Franklin, an American, would also become a member. Additionally, traditional academic achievement was not a requirement for recognition by the Society; van Leeuwenhoek was a Dutch tradesman who spoke no Latin or English. Work was also accepted from the clergy, such as the Reverend Thomas Bayes, who produced Bayes’ Theorem, which deals with probability and is used in supercomputers today. Today the Society has about 1500 fellows, including 100 foreign fellows. Whilst most fellows now have doctorates, the Society still recognises ‘non-scientists’—in 2010 a group of primary school children published an article on bumblebees in one of the Society’s journals. It would thus seem that the aims set out by those twelve men over 350 years ago are still very much alive in our modern-day society.
Nicola Stead is a PhD student at the Babraham Institute
References
Features
The Need for Sex - Ridley, M., The Red Queen: Sex and the Evolution of Human Nature, London: Penguin (1994)
Neglecting Vets - Zinsstag, J. et al., A model of animal-human brucellosis transmission in Mongolia. Preventive Veterinary Medicine 2005, 69(1), 77-95
The Eccentric Engineer - www.tesla-museum.org/meni_en.htm
Warning: Contains Peanuts - Clark, A. T., Islam, S., King, Y., Deighton, J., Anagnostou, K., Ewan, P. W., Successful oral tolerance induction in severe peanut allergy. Allergy 2009: 64: 1218-1220
Aspects of Ageing - Sahin, E. and DePinho, R.A., Linking functional decline in telomeres, mitochondria and stem cells during ageing. Nature 464, 520-528 (2010)
Regulars
Capturing Change - www.ipcc.ch/publications_and_data/publications_and_data_reports.shtml#2
The Race to the Edge - www.scientificamerican.com/article.cfm?id=obama-space-plan
Science’s Royal Beginnings - www.royalsociety.org
Computers, Codes and Cyanide - Turing, Sara, Alan M. Turing, Cambridge: W. Heffer & Sons Ltd. (1959)
Writing the Future - Wells, H.G., When the Sleeper Wakes, London: Collins (1920)
Writing the Future Matthew Dunstan investigates the role of science fiction in shaping science fact The American philosopher and psychologist John Dewey once wrote, “Every great advance in science has issued from a new audacity of imagination”. Creativity within science and technology is not limited to scientists — we are used to seeing fantastic and seemingly impossible inventions in literature, films, art and theatre. Are these inventions confined to the realm of science fiction or, in hindsight, are they surprisingly accurate predictions? Consider something as simple as the automatic door. It was first built by Lew Hewitt and Dee Horton in 1954 and is now commonplace in offices, shopping centres and even some public toilets. However, the idea was imagined first by H.G. Wells in 1899. In his novel When the Sleeper Wakes he writes, “...A long strip of this apparently solid wall rolled up with a snap…”. Admittedly, the leap from door to automatic door isn’t that impressive, and Wells gives no description of how this device actually works, but his forward thinking is still quite incredible considering he wrote this more than 50 years beforehand. As a better example, take this description of a device to image objects using electromagnetic waves: “A pulsating polarized ether wave, if directed on a metal object can be reflected in the same manner as a light ray is reflected from a bright surface… waves would be sent over a large area. Sooner or later these waves would strike a space flyer…and these rays would be reflected back”. This accurate explanation of the concept behind RADAR was taken from Hugo Gernsback’s novel Ralph 124C 41+. Published in 1911, Gernsback illustrated the principle of RADAR years before the scientist Nikola Tesla described the concept. While the prediction is surprisingly accurate, it still contains errors based on incorrect scientific
The late Steve Jobs was hailed for his creativity and ingenuity in developing many new electronic devices for Apple. Nevertheless, even he would have to concede that he didn’t always get there first. What about the white earbud headphones that debuted with iPods? In the 1950s Ray Bradbury wrote in Fahrenheit 451, “And in her ears the little seashells, the thimble radios tamped tight, and an electronic ocean of sound … coming in.” Even the invention of something as recent as the iPad, with its 21st century design and sleek finish, was foretold in literature. In 2001: A Space Odyssey, written some 40 years before Apple’s announcement, Arthur C. Clarke writes, “When he tired of official reports and memoranda and minutes, he would plug in his foolscap-size newspad into the ship’s information circuit.... One by one he would conjure up the world’s major electronic papers”.

While all of these examples might lead you to believe that scientists should be reading science fiction novels to inform their latest research, we should remember that for every startling prediction that turns out to be close to reality, there are a dozen ideas that remain a distant possibility. Then again, considering what has been thought of already, we may well have our personal robot companions and jetpacks someday. One thing is clear: as scientists, it is important we never forget the power of imagination.

Matthew Dunstan is a 1st year PhD student in the Department of Chemistry
iPods and radar are just two of the many inventions pre-empted by literature
The Science Diplomat
Ian Le Guillou interviews David Clary
David Clary
David Clary is a Professor of Theoretical Chemistry and President of Magdalen College, Oxford. In 2009 he was appointed the first Chief Scientific Adviser to the Foreign and Commonwealth Office (FCO). In an interview with Ian Le Guillou he discusses what this role entails and how science is involved in foreign policy decision-making.

The FCO was one of the last major government departments to appoint a Chief Scientific Adviser (CSA). Why did it eventually decide it needed one?
There are several departments in the FCO that deal increasingly with matters linked to science, so having a Chief Scientific Adviser was a useful step to take. There’s climate change and the science behind that, which we have to keep in communication about all the time. There are new renewable energies coming forward and the debate about whether to have nuclear or not, to have wind or not, the importance of shale gas and so on. Counter-proliferation and counter-terrorism are obviously of great interest; I can’t say much about that for confidentiality reasons, but it really is a major concern of the FCO. Also, there are territories that are still administered by the FCO, and there are major scientific aspects to several of these.

What is your role within the FCO?
Some government departments have a very large budget and staff for science, but not the Foreign Office, where the emphasis is more on influence with other countries. A very important part of the work that I do is to build partnerships to encourage collaboration between UK science and overseas science. I also work to promote UK public science and the FCO’s role in that, and to strengthen the science and engineering capacity in the FCO. In addition, I engage with the cross-government group of science advisers and provide advice to the Foreign Secretary, ministers and officials on science, technology and innovation.
Is there much collaboration between the CSAs in government?
There is the Scientific Advisory Group for Emergencies, on which many of the CSAs sit. This seems to have to meet every Easter. About three years ago we had swine flu and we had to meet then. Then two years ago we had the volcanic ash, and just last year we had the Fukushima disaster. That was, I think, a very good example of getting the CSAs and many other important scientists together very quickly to give advice to the government.

What drew you to apply for this position?
As a scientist I have worked in many countries and am a great believer in international collaboration. This lets you do better science and also improves relations between countries—what we call ‘science diplomacy’. So the job in the FCO allows me to be a science diplomat!

How did a career in theoretical chemistry prepare you for this role?
Well, it has given me very broad scientific interests—all the way from astrophysics to atmospheric science to biomolecules. So that is quite a good background for the very broad reach of scientific topics relevant to the FCO, which covers space, the deep sea and even Antarctica.

What has been the highlight of your experience with the FCO?
I got the previous Foreign Secretary, David Miliband, to give a speech at the Royal Society where he compared the revolution in science in the 20th century to the major changes in foreign affairs over the same period—from certainty to complexity.

How will science shape our foreign policy over the next decade?
Science and its link to innovation are a crucial part of growth and prosperity, and most businesses depend critically on trade overseas. Thus science and innovation will have an increasing relevance for foreign policy as we aim to restore financial security to this country.

Ian Le Guillou is a 3rd year PhD student in the Department of Biochemistry
Weird and Wonderful
A selection of the wackiest research in the world of science

The brain-burning smoke alarm
illustrations: www.alexhahnillustrator.com
Scientists from Japan have discovered how to wake the hearing-impaired in the event of a fire, and in doing so have won themselves the 2011 Ig Nobel Prize. The ingenious “Odor Generation Alarm and Method for Informing Unusual Situation” is a standard fire alarm with a difference: instead of relying on sound as an alert when a hazard is detected, it sprays an unpleasant scent. In the design of their alarm, the inventors made use of wasabi, a relative of the horseradish. Commonly served with sushi, the condiment is given its pungency by the chemical allyl isothiocyanate. The scientists discovered that this odorous chemical can stimulate pain receptors in the nasal passages. In fact, this stimulation occurs to such an extent that inhalation of the optimal concentration of airborne wasabi can wake a sleeping person in under two minutes. Because smell perception changes during sleep, scent alone is not enough to rouse a person, but the brain-burning sensation produced by wasabi is, since pain receptors continue to function normally during sleep. The researchers suggest that the alarm could be further developed into an alarm clock or even a doorbell. nl
It’s a wrap!
Ever wondered what happens after death? For Alan Billis, a taxi driver from Torquay, the answer was on-air mummification in a Channel 4 documentary entitled “Mummifying Alan: Egypt’s Last Secret”. Alan was mummified in the style of 18th dynasty Egyptian pharaohs using techniques developed by the archaeological chemist Stephen Buckley. Buckley’s team began by making an incision in Billis’s left side and removing all internal organs except the heart, which the ancient Egyptians believed to be the seat of intelligence and wisdom. They then packed his body cavity with linen bags and covered him with sesame oil, beeswax and resin. This mixture protected his skin while he was immersed for five weeks in a bath of concentrated natron salt. After drying his body, the team wrapped Billis in linens containing family mementos—important tokens for his journey into the afterlife.
CT scans showed that Billis’s body was well preserved 93 days after the start of the mummification process. His body will remain ‘entombed’ for further scientific observation. The team’s success makes Billis the first person to be mummified in this way in 3500 years. Described as “shocking” and “not an easy watch” by reviewers, the documentary sparked considerable controversy. jr
Could you live on caffeine?
University of Iowa scientists have identified bacteria that can live on caffeine. One, known as Pseudomonas putida CBB5, was even found lolling in a flowerbed on the university campus. Caffeine is found naturally in more than 60 different plants. Its molecular structure features three clusters of carbon and hydrogen atoms, known as methyl groups, which enable caffeine to resist degradation by most other bacteria. Human liver enzymes, which have the task of breaking down caffeine and other drugs, can only get part of the way. This bacterium uses four newly identified digestive enzymes to break caffeine down into carbon dioxide and ammonia. Through this process the bacterium harvests energy, achieving things with caffeine that humans cannot. Ryan Summers, the doctoral researcher who led the study, said that no previous research had documented caffeine consumption in any other microbe species. He and his collaborators also noted that this finding could someday have implications outside of the highly caffeinated Petri dish. The bacterial digestive enzymes could be used to develop new medications to treat heart arrhythmias or asthma, or to boost blood flow. They could also be used in large-scale processes to help break down excess caffeine, which is often generated as a by-product of decaffeinated coffee and tea processing. mf
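As an editorial aside (the intermediates below follow the stepwise N-demethylation route reported in the wider literature on caffeine-degrading bacteria, and are an addition rather than a detail from the original piece), the enzymes are thought to strip caffeine’s three methyl groups one at a time before the remaining ring is broken down:

$$\text{caffeine} \;\rightarrow\; \text{theobromine} \;\rightarrow\; \text{7-methylxanthine} \;\rightarrow\; \text{xanthine} \;\rightarrow\; \cdots \;\rightarrow\; \text{CO}_2 + \text{NH}_3$$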
See where you’re going. Be where it’s at. Elevating viewpoints. Expanding horizons. Exceeding expectations. Graduate leadership careers and selected internships in Retail and Business Banking, Human Resources, Marketing & Products, Marketing Analytics, Credit Risk Analytics, Credit Risk Delivery & Information Management, Technology – Product & Process Development, Finance and Tax. Prepare for a future at the top of the financial services world. Visit seemore-bemore.com to learn more. @barclaysgrads
Barclays Graduates
Write for BlueSci: we need writers of news, feature articles and reviews for our website. For more information, visit www.bluesci.co.uk
Interested in attending a science talk this term? Go to www.bluesci.co.uk and click on EVENTS for information
Feature articles for the magazine can be on any scientific topic and should be aimed at a wide audience. The deadline for the next issue is 3rd February 2012. Email complete articles or ideas to submissions@bluesci.co.uk

For their generous contributions, BlueSci would like to thank the Centre for Science and Policy and the School of Clinical Medicine. If your institution would like to support BlueSci, please contact enquiries@bluesci.co.uk
BlueSci Radio continues this term on Cam FM 97.2