BlueSci Issue 36 - Easter 2016


Easter 2016 Issue 36 www.bluesci.org

Cambridge University science magazine

FOCUS

Artificial Intelligence

Brain sex . The Holographic Principle . Poetry . Predatory Publishing . Tea




Contents


Features

6  Does your brain have a sex?
   Julia Gottwald shows us that human male and female brains are more similar than we think

8  In Search of Quantum Gravity
   Gianamar Giovannetti-Singh explores the holographic universe

10 Can Your Experiences Change Your Children?
   Jiali Gao looks at what toad sex, a suicide and starvation have taught genetics

12 Engineering the rise of cell therapies
   Oran Maguire explains how engineering and cell biology are carving out a new field

14 Tumbling into Wonderland
   Mirlinda Ademi scrutinises the syndrome that simulates Wonderland

16 FOCUS  Artificial Intelligence: the power of the neuron
   Alex Bates looks at how neurobiology has inspired the rise of artificial intelligence

Regulars

3  On The Cover
4  News
5  Reviews
22 Science and History  Sophie Protheroe examines the global history of tea and its effect on our health
24 Perspectives  Katherine Dudman introduces genetic discrimination, the sly cousin of racism and sexism
26 Science and Policy  Harry Lloyd ponders our duty to think ahead of technological progress
28 Science and Art  Robin Lamboll crosses the battlelines of science and poetry
29 Initiatives  Michelle Cooper and Priyanka Iyer shine a light on predatory publishing
30 Pavilion  Jack Hopkins programmes an AI poet, we use an AI artist and Andy Cheng offers some toilet humour
32 Weird and Wonderful  Jenny Easley explains how to avoid being eaten

BlueSci was established in 2004 to provide a student forum for science communication. As the longest running science magazine in Cambridge, BlueSci publishes the best science writing from across the University each term. We combine high quality writing with stunning images to provide fascinating yet accessible science to everyone. But BlueSci does not stop there. At www.bluesci.org, we have extra articles, regular news stories, podcasts and science films to inform and entertain between print issues. Produced entirely by members of the University, the diversity of expertise and talent combine to produce a unique science experience.

President: Alexander Bates (president@bluesci.co.uk)
Managing Editor: Pooja Shetye (managing-editor@bluesci.co.uk)
Secretary: Sophie Protheroe (enquiries@bluesci.co.uk)
Treasurer: Zoë Carter (membership@bluesci.co.uk)
Film Editor: Nelly Morgulchik (film@bluesci.co.uk)
Radio: Simon Moore (radio@bluesci.co.uk)
News Editor: Janina Ander (news@bluesci.co.uk)
Web Editor: Simon Hoyte (web-editor@bluesci.co.uk)
Webmaster: Andrew Ying (webmaster@bluesci.co.uk)
Social Secretary: Jenny Easley (social-secretary@bluesci.co.uk)



Thinking Science
Issue 36: Easter 2016
Issue Editor: Alexander Bates
Managing Editor: Pooja Shetye
Second Editors: Janina Ander, Lauren Broadfield, Claire King, Robin Lamboll, Laia Serratosa, Aran Shaunak, Smiha Shikh, Kimberley Wiggins, Abigail Wood, Eliza Wolfson
Copy Editor: Dora Lopresto
News Editor: Janina Ander
News Team: Zoë Carter, Anaid Diaz, Raghd Rostom
Reviews: Simon Hoyte, Nelly Morgulchik, Abigail Wood
Features Writers: Mirlinda Ademi, Alexander Bates, Jiali Gao, Gianamar Giovannetti-Singh, Julia Gottwald, Oran Maguire
Regulars Writers: Andy Cheng, Michelle Cooper, Katherine Dudman, Jack Hopkins, Priyanka Iyer, Robin Lamboll, Harry Lloyd, Sophie Protheroe
Weird and Wonderful: Jenny Easley
Production Team: Janina Ander, Alexander Bates, Robin Lamboll, Sophie Protheroe, Pooja Shetye, Eliza Wolfson
Illustrators: Alex Hahn, Oran Maguire, Eliza Wolfson
Advertiser: Julie Skeet
Cover Image: Justina Tim Yeung
ISSN 1748-6920

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License (unless marked by a ©, in which case the copyright remains with the original rights holder). To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/ or send a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041, USA. Additionally, film stills and book covers are used under fair use and are not covered by the Creative Commons licence.


Editorial

What is it that I have done to bring together this issue of BlueSci that could not one day be outsourced to an artificial intelligence? Magazine editors find stories, commission articles, edit copy, source images, berate people, get berated and eventually die. An artificial intelligence could do all of those things without doing some of the less useful things human editors must. It would not become fatigued, stressed or ill. It would not have to navigate silly social straits or placate a parent because it is too busy to call. It would not lose time to sleep, become senile or eventually die. True, with Microsoft's intelligent Twitter bot Tay going AWOL recently and proclaiming the Holocaust a lie, perhaps the first such issue of BlueSci will not be very kosher, but artificial magazine creation is already possible.

In fact, we took advantage of software that uses principles developed in the artificial intelligence field to generate artwork for this issue. A few months back Google's Deep Dream was plastered all over the Internet for its remarkable ability to tease animal forms out of any image it was fed, iteratively employing a brain-inspired mathematical structure known as a 'neural network'. Indeed, the animal features in the photograph above are not native to my face. This technology has driven the creation of web applications such as Dreamscope, which can cast images users give them into the style of a famous artist, 'paint' them as watercolours or piece them together as mosaics. A good example of this can be found in Jack Hopkins' Pavilion piece, in which he presents how he developed an artificial poet and its fledgling poems.

Julia Gottwald writes on a different science of the brain in her piece on sexual dimorphism in the human brain, and why it may not equate to a cognitive distinctiveness. Mirlinda Ademi gets into the thoughts that impose themselves on someone with Alice in Wonderland Syndrome. Katherine Dudman examines in her piece on discrimination how the way in which we think about genetics will impact on the way we think about people in general, while Jiali Gao looks at how behaviours, modes of thinking, can be transmitted to offspring epigenetically. Oran Maguire mentions in passing how tissue engineering could revolutionise man-machine interfaces, and Harry Lloyd argues that we are thinking behind the curve: to cope with breakneck breakthroughs, we need to prethink our potential policies.

Breaking from the brain, we also try to think about the holographic universe and the way in which this theory strings heavy hitters in physics together, and about how insects out-think their predators; and we chat history with Sophie Protheroe over tea. Our writers also think about the process of science itself and gaze into the maws of predatory journals. Lastly, we take a break from thinking and enjoy the toilet humour to be found in academic publishing.

These budding journalists might soon live in a world where they could be commissioned, or supplanted, by artificial intelligence. But they are more than journalists; they are scientists. Artificial intelligence is a science that aspires to think for itself. Once it truly does, we have to wonder whether we can make artificial intelligence a science that can achieve science. Surely an artificial scientist capable of unimagined discovery would overcome the final bastion of human usefulness in our increasingly silicon world.

Alex Bates
Issue 36 Editor
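For the curious, the iterative trick behind such imagery fits in a few lines: nudge an input, over and over, in whatever direction most strongly excites a network's feature detectors. A toy sketch in Python with numpy follows — a random single-layer 'network' standing in for Google's actual model, which this emphatically is not:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(16, 64))   # 16 random 'feature detectors'
    img = rng.random(64)            # a tiny stand-in for an image

    for _ in range(100):
        a = np.tanh(W @ img)           # how strongly each feature responds
        grad = W.T @ (1 - a ** 2)      # gradient of the total response w.r.t. the image
        img += 0.05 * grad             # amplify whatever the 'network' thinks it sees

Swap in a trained convolutional network for W and a photograph for img, and this same loop is the kind of procedure that teases dogs and birds out of faces.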



On the Cover

Neural circuits are difficult things to see. In a nervous system neurons are smooshed together, but despite their squishy confusion they somehow manage to form spread-out, gelatinous networks. The biological blueprints of these networks are now being translated out of the 'wetware' of brains and into computers' software and hardware in the quest for better artificial intelligences. Though it is hard for the biologist to peer into the brain with their electrodes and microscopes, and though the brain is difficult for computer scientists to re-build, skilled artists like Justina Yeung can beautifully highlight key concepts behind the science being pursued. Justina is a master's student who was inspired by working on the nematode Caenorhabditis elegans in the Poole laboratory at University College London. Her image depicts adult male (blue) and hermaphrodite (pink) neural networks in the nematode head; the whole animals wriggle nearby. The tiny nematode nervous system, mapped in its entirety in 1986, is still the only whole nervous system to have been completely charted. Twenty-nine years later, however, the Barrios and Poole groups identified a previously unknown 'mystery male cell', highlighted by Justina here in red. This new cell was found to override the males' desire for food in favour of seeking sex. This discovery is evidence for the hardwiring of behavioural differences between the sexes, but as Julia Gottwald argues persuasively in this issue, we cannot extrapolate work in worms too far when we consider the neurological differences between humans.
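Those wiring diagrams are, at heart, just lists of connections, which makes the translation into software quite literal. A minimal sketch in Python using the networkx graph library — the neuron names are real C. elegans cells, but the synapse counts here are illustrative, not measured values:

    import networkx as nx

    # pre-synaptic neuron, post-synaptic neuron, synapse count (illustrative)
    edges = [("AVAL", "VA08", 12), ("AVAR", "VA08", 7), ("PVCL", "VB05", 9)]

    G = nx.DiGraph()
    for pre, post, n_syn in edges:
        G.add_edge(pre, post, weight=n_syn)

    # total synaptic output of one neuron in this fragment
    print(G.out_degree("AVAL", weight="weight"))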


News

Check out www.bluesci.org or @BlueSci on Twitter for regular science news and updates


Food changes how genes behave

A person's metabolism, the chemical reactions that break down nutrients and generate cellular components, is affected by their genes. Now, a study published in Nature Microbiology by Cambridge scientists and carried out in yeast shows that, conversely, the nutrients available to cells can change the levels of almost all gene products. The team altered the nutrients available to yeast cells and observed the cellular response: at the level of gene products, but also in the amounts of proteins and other metabolic products. Furthermore, to study the interaction between metabolic changes and genetic differences, the experiments were done on yeast strains differing in their genetic information. It was found that metabolism has a far greater impact on the genome than previously thought, with nine out of ten gene products showing altered levels in response to an altered metabolic state. A lead researcher explains in a press release, "In many cases the effects [of changes in metabolism] were so strong that changing a cell's metabolic profile could make some of its genes behave in a completely different manner." This research was the first systematic investigation of these effects, and the scientists believe it will find wide-ranging application. Cancer cells often have altered metabolic states compared to normal cells and respond to drugs in unpredicted ways. In future, the new findings may, for example, facilitate insights into why cancer cells behave this way and how to design drug therapies to fight them in a more targeted manner. rr

Graphene interacts safely with neurons

Graphene, a planar sheet of carbon atoms arranged in a hexagonal lattice, may become the material of choice for brain electrode implants, devices used to treat conditions such as Parkinson's disease. Researchers from the Universities of Cambridge and Trieste published findings in ACS Nano showing that graphene can form a suitable electrical interface with neurons, whilst also allowing the cells to maintain their physiological properties. Electrodes for implantation in the nervous system are most commonly silicon or tungsten microwires. A frequent problem with these is scar tissue forming around the electrode in response to insertion-related brain trauma and inflammation, which can reduce the signal strength reaching the neuron by up to 50%. Graphene was identified as a potential material for neural interfaces due to its excellent electrical conductivity and mechanical properties allowing it to be moulded into complex shapes. However, past studies did not specifically address graphene's compatibility with living neural cells. In this study, neuronal cell cultures were grown on top of graphene-based substrates. The researchers compared these cells to control cells not grown on graphene. They found no significant differences in growth, the number of synaptic connections formed or the cells' spontaneous synaptic activity, indicating that graphene does not affect neuronal physiology and could be safely used in the brain. These findings are a step towards an exciting future in neurology. For example, graphene may be used in devices for deep brain stimulation, to treat motor disorders and to create interfaces to control robotic arms for amputees. zc

New facility improves Sierra Leone's ability to respond to future epidemics

Do you remember the summer of 2014? In August that year, the World Health Organisation declared the Ebola outbreak in West Africa an international public health emergency. An epidemic that likely started with one human case in December 2013, it had infected over 2000 people and caused more than 1000 deaths by September 2014. Amid disturbing media coverage, scientists and health experts argued that the response to the epidemic was too slow and the worst was yet to come. Among them was Professor Ian Goodfellow, a virologist from the University of Cambridge, who suspected that the epidemic could be contained if the necessary infrastructure and expertise were available. Motivated by this idea, Prof. Goodfellow volunteered to move to Makeni, Sierra Leone. He soon helped to set up a temporary facility for diagnosing Ebola patients in a tent. Today, with the help of Prof. Goodfellow and many others, the local expertise and that temporary tent have evolved. On 22 January this year, the inauguration of the Infectious Diseases Research Laboratory at the University of Makeni (UNIMAK) was announced. This research centre is run in collaboration with the University of Cambridge and funded by the Wellcome Trust and UNIMAK. It will provide a world-class environment for training local scientists and the infrastructure for monitoring a range of infectious diseases. The development of this story within the bigger picture of the aftermath of the Ebola epidemic brings hope for the future. As of January 2016, the epidemic is no longer out of control, although it is not over, says the WHO. The same month saw two new Ebola cases in Sierra Leone. In a press release Dr Jeremy Farrar, Director of the Wellcome Trust, said: "The recently confirmed case of Ebola in Sierra Leone serves as poignant reminder of the need to remain vigilant, and the new facilities in Makeni are already playing an important role in this". International alertness must continue. ad



Reviews

Life on the Edge - Jim Al-Khalili and Johnjoe McFadden

Bantam Press, 2014

The age when biology could afford to be separate from physics and mathematics is long behind us. "Life on the Edge: The Coming Age of Quantum Biology" by Johnjoe McFadden and Jim Al-Khalili is a compilation of the most important recent discoveries – or, at times, hypotheses – which will introduce the wonders of quantum biology without requiring preliminary knowledge of quantum mechanics. The quantum world is called on to explain the mysteries of bird navigation using quantum entanglement, enzymatic activity with quantum tunnelling, and the origin of life through self-replication and quantum superposition. Accessibly written, the book is breath-taking to theoretical physicists, eager to observe how quantum effects influence life, ground-breaking to biologists, with its fresh view on the living world, and enthralling to any curious mind. nm

Nature’s Building Blocks: An A-Z Guide To The Elements - John Emsley

Oxford University Press, 2011

Did you know that our planet's crust can form natural nuclear reactors, or that helium-neon gas lasers scan your supermarket barcodes? These are just drops in the wide ocean of fascinating facts presented in Emsley's excellent Nature's Building Blocks. Using carefully collected data, the author unearths the secret chemistry of the physical world. The book is organised into 115 equally structured chapters, one dedicated to each element from Actinium to Zirconium. Each chapter is arranged into topics, including such highlights as 'Cosmic Element', which describes the astral origins of each element, 'Medical Element', which describes the use of the substance in historical medicine, and a final, fun 'Element of Surprise'. Whilst the book has dated slightly since it was first published in 2003, the majority of facts still reflect the most current research. Personally, I have read each chapter several times: this is a book that rewards repeated digestion with a legion of exciting facts and a better understanding of the entire universe. The book would sit equally well on any bedside table as on an academic library shelf: I would recommend it to everyone, from laypeople to professors, and will keep my own eyes eagerly peeled for future editions. aw

Faith vs. Fact – Jerry A. Coyne

Penguin, 2015


Jerry Coyne's latest book is a bold one, arguing that religion is causing harm to modern-day society. Coyne's second book attempts to robustly counter the claim that religion and science can coexist happily. The difference between this slating of theology and that illustrated in Dawkins' The God Delusion is that here Coyne roots his argument in contemporary problems. Alternative therapies, resistance to vaccination, opposition to assisted dying, and the criminalisation of homosexuality are all cited as reasons to throw religion out of the window. Most interestingly, using polls conducted in the US, the book demonstrates the clear relationship between increasing religious faith and denialism of the Big Bang, evolution, the geological age of the Earth, and, most importantly, climate change. "While denial of evolution doesn't pose an immediate danger to the planet," he argues, "denial of global warming does". This highlights his view that religion cannot simply be excused as an irrelevant and out-dated philosophy, but that it is a "recipe for disaster". Through exploring the link between societal dysfunction and religiosity, Coyne arrives at the harmonious conclusion that tackling inequality may well be the best way to convert the masses to atheism. Visit www.bluesci.org to read our exclusive interview with Prof. Coyne. sh




Does Your Brain Have A Sex?

The different placement of a single brain cell in female compared to male flies changes whether or not they are attracted to a male odour

Julia Gottwald shows us that human male and female brains are more similar than we think

If you were a fruit fly and smelled male pheromones, you would show a strong and consistent response. As a female fly, you would engage in courtship behaviour; as a male fly, you would become more aggressive. We know that pheromones activate different clusters of neurons in the brain of the male compared to the female fly. The differences do not end here. A collaboration between three research groups led to the discovery of fruitless, the master gene controlling the male fruit fly's courtship ritual. When disabled, male flies don't mate. In contrast, when the gene is activated in females, they show male courtship behaviour such as chasing other females.

But you are not a fruit fly. The study of sex differences in the human brain is more complex, more controversial, and more emotionally laden than in any other species. This hot topic is frequently misrepresented in the media. Studies on sex differences are often oversimplified and taken out of context. Some articles have claimed that we now know why "men are so obsessed with sex", although the original study focused on worms. This style of reporting promotes stereotypes and misconceptions about science.

The truth is that the brains of men and women have a lot in common. The Royal Society has recently released a special issue on sex differences in the brain. It features an opinion piece which argues that human brains do not fall into the two distinct categories of male and female. The piece is partly based on a study from last year: a revolutionary analysis of some 1,400 human brains. The authors looked at the volume of brain regions and the connections between them to select the areas that differed most between the sexes. For each area, the researchers then designated the upper and lower ends of the spectrum as either "male" or "female", according to where men or women were more prevalent. If brains truly fell into two distinct categories, we would see brains which had either all "male" or all "female" areas. The study revealed that such consistent brains are indeed rare. Our brains are more like a patchwork quilt, with most people having a mixture of features that are "typical" male, female, or common in both sexes.

Biology alone cannot explain why our brains are such a colourful mixture; we also need to consider the environment. How stressed was your mother during pregnancy? Did you grow up with close friends? How often did you exercise? All these factors will influence the development of your brain and consequently its appearance today. Even as an adult, your daily experiences shape your neuroanatomy. Your environment and behaviour leave a mark on the brain; it is not solely shaped by biological sex. Moreover, sex is associated with gender – the personal and societal perception of your sex. Gender encompasses all the expectations, biases, and norms of behaviour, which differ for males and females. It is this combination of genes and environment that determines our brain structure.
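That consistency test is easy to mimic on toy data: give every simulated brain a score on several features, call each feature's extreme ends 'male-typical' or 'female-typical', and count how many individuals sit at the same end across the board. A hypothetical sketch in Python — illustrative effect sizes and thresholds, not the study's actual pipeline:

    import numpy as np

    rng = np.random.default_rng(1)
    n_people, n_features = 1400, 10
    sex = rng.integers(0, 2, n_people)   # 0 = female, 1 = male
    # feature scores: a small average sex difference, with large overlap
    scores = rng.normal(loc=0.3 * sex[:, None], scale=1.0,
                        size=(n_people, n_features))

    lo = np.quantile(scores, 0.33, axis=0)   # "female-end" cut-off per feature
    hi = np.quantile(scores, 0.67, axis=0)   # "male-end" cut-off per feature
    female_end = scores < lo
    male_end = scores > hi

    # consistent = at least one extreme feature, all extremes at the same end
    consistent = (male_end.any(1) & ~female_end.any(1)) | \
                 (female_end.any(1) & ~male_end.any(1))
    print("internally consistent brains:", consistent.mean())

Even with a built-in average difference on every feature, only a small minority of the simulated brains come out internally consistent – the rest are exactly the patchwork described above.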


Sex differences in the brain are far more complex than we originally imagined




Simply put, our brains are not completely "male" or "female"; they are more like a patchwork of both


Despite the patchwork structure of our brains, there seem to be neuroanatomical differences between the average man and woman. But do these differences necessarily cause different behaviours? Actions such as mating, navigating London, or writing an essay are controlled by complex networks. The underlying anatomy is important, but so are other internal and external influences, such as stress, hunger, or exhaustion. Our behaviour is modulated by many pathways. Geert de Vries, director of the Neuroscience Institute at Georgia State University, has another take on sex differences in the brain. He argues that these variations cannot produce but instead prevent differences in behaviour. According to de Vries, men and women differ dramatically in their physiology and hormones; having different brains might be a way of compensating for these differences. Do male and female brains develop differently in order to promote similar behaviour? We do not know if these structural differences really are compensatory. However, this concept is not new and we can observe such compensations on other levels. For example, female mammals have two copies of the X chromosome in their cells, while males only receive one copy. If all chromosomes were equally active, women would make twice as many gene products from their X chromosomes as men. To prevent this, female mammals silence one of their X chromosomes, a process known as X-inactivation. A similar process might happen with brain structures but on a more complex level.


So our brains are not distinctly male or female, and structural differences do not necessarily cause behavioural differences. Then why study sex differences at all? There are five times more studies with all-male than all-female animals in neuroscience and pharmacology, whereas only one in four studies includes animals of both sexes. Hormonal fluctuations in females were seen as an unwelcome confounding factor, and sex differences were often thought to be irrelevant for the research question. However, results from males do not always apply to females. Some drugs, such as aspirin, are taken up or cleared away differently in men and women. Sex is also important for some diseases: multiple sclerosis is more common in women, as are depression and anorexia. On the other hand, autism and some addictions are more common in males. Clearly it is not sufficient to investigate and address these questions by using subjects of only one sex. How can we expect to get the whole picture by looking at only one half of the population?

Since 1993, the inclusion of women has been a requirement in clinical trials funded by the National Institutes of Health in the USA. Since 2014, all animal studies funded by them have also had to include females. Moreover, many scientific journals now ask authors to publish the numbers of males and females included in their sample. Steps such as these are necessary to learn more about how sex and gender influence our development and eventually our brains. The findings need to be analysed and communicated carefully. Men and women might be different in subtle ways, but our similarities probably outweigh the differences. A small change in your complex anatomy would not usually reverse your behaviour – after all, you're not a fruit fly!

Julia Gottwald is a 3rd year PhD student at the Department of Psychiatry.



In Search of Quantum Gravity

Gianamar Giovannetti-Singh explores the holographic universe

Modern fundamental physics consists of two major pillars: general relativity, describing the interactions between matter and spacetime at the largest scales imaginable, and quantum mechanics, the physics governing the behaviour of subatomic particles. Despite each theory having been tested to an extraordinary degree of accuracy, they are fundamentally incompatible with each other – general relativity predicts continuous spacetime as the fabric of the universe, whereas quantum physics deals exclusively with discrete, quantised units of matter and energy.


The holographic principle posits that gravity is born from the vibration of ‘strings’ in a ‘flatter’ cosmos of fewer dimensions

Theoretical physicists generally consider quantum gravity, a successful amalgamation of the two theories, to be the holy grail of modern physics, as it would unify these two immiscible faces of the universe into a so-called 'theory of everything', capable of predicting the behaviour of every structure in the universe within a single mathematical framework. Notable attempts at formulating a quantum theory of gravity include string theory, in which all the point-particles of the standard model of particle physics are replaced with one-dimensional strings, and loop quantum gravity, which breaks up spacetime, the fabric of our universe, into discrete chunks with a size of approximately 10⁻³⁵ metres – if a hydrogen atom were the size of the sun, this would be the size of a single proton. Unfortunately, all of these attempts have proved to be far less successful than expected, offering no real falsifiable hypotheses.

However, an alternative and rather radical approach to quantum gravity has recently been gaining increasing recognition in the physics community following a sudden inflow of supportive evidence from computer simulations; this approach is known as the holographic principle. This principle argues that rather than living in the three-dimensional universe which we perceive every day, the cosmos is actually a projection of three-dimensional information onto a two-dimensional surface – not dissimilar from the way in which a hologram encodes information about three dimensions on a 2D surface. The holographic principle was suggested by Gerard 't Hooft as a consequence of Stephen Hawking's and Jacob Bekenstein's groundbreaking work on black hole thermodynamics in 1974, which demonstrated that the entropy (a measure of disorder) of a black hole varies not with its volume as one might naïvely expect, but rather with its surface area. As entropy is intrinsically linked to information, 't Hooft proposed that the information of a 3D object is simply encoded on a 2D surface.
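That area law can be written down exactly. For background, the standard Bekenstein–Hawking formula (quoted here as context; it is not derived in this article) gives a black hole's entropy as

    S = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 \ell_P^2},
    \qquad \ell_P = \sqrt{\frac{G \hbar}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{m}

where A is the area of the event horizon and ℓ_P is the Planck length – the same 10⁻³⁵ metre scale into which loop quantum gravity chops spacetime. The key point is that S grows with the horizon's area rather than the volume it encloses: doubling a black hole's radius quadruples its entropy instead of multiplying it by eight, which is why the information of the interior can be thought of as living on the boundary.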



The great American physicist and science communicator Leonard Susskind developed 't Hooft's idea and formalised it in the mathematical framework of string theory, allowing it to be fully integrated within another physical model. In 1997 Juan Maldacena used Susskind's mathematical framework to derive a tremendously profound result – if the "laws" of physics which follow from the holographic principle (namely that information about an N-dimensional system can be encoded on an (N−1)-dimensional surface without losing any knowledge of the system) are applied to a physical theory which describes how gravity would behave in a 5D universe, the results agree completely with a quantum-based theory (Yang-Mills theory) in four dimensions. This was an initial clue that the holographic principle held great power in its ability to unify seemingly completely detached areas of physics, and in its combination of theories derived from general relativity and quantum mechanics, the principle secured its status as a candidate for a theory of quantum gravity.

A major piece of evidence in favour of the holographic principle was obtained in late 2013 by a team led by the Japanese theoretical physicist Yoshifumi Hyakutake, who ran two very large simulations on a supercomputer calculating the internal energy of a black hole using two completely different physical models; one was a ten-dimensional universe based on general relativity, the other a one-dimensional universe without gravity but rich in large-scale quantum effects. Somewhat astoundingly, Hyakutake and coworkers found that the two values agreed exactly, which lends credence to the holographic principle; i.e. that a gravitational model of spacetime corresponds directly with a lower-dimensional quantum-dominated universe.

This simulation has significantly improved the standing of the holographic principle as a candidate for quantum gravity, as it has demonstrated a universal correspondence between a theory derived from general relativity and a lower-dimensional quantum theory, and so far provides the only quantitative link between the two regimes. Whilst models such as string theory and loop quantum gravity attempt to quantitatively describe both gravitational and quantum behaviour, neither provides any numerical values which can be falsified, whereas Hyakutake's work did just that, allowing the holographic principle to present (somewhat more falsifiable) hypotheses regarding the internal energy of a black hole. Whilst the nature of a theory of quantum gravity remains elusive to physicists at present, the holographic principle may just be the dark horse in the race for a grand unified theory of nature.

The holographic principle suggests that the entire universe can be seen in two dimensions from the event horizon of a black hole

Gianamar Giovannetti-Singh is a PhD student in the Department of Physics.





From top to bottom: Paul Kammerer, Jean-Baptiste Lamarck and Charles Darwin. Public domain.


Can Your Experiences Change Your Children? Jiali Gao looks at what toad sex, a suicide and starvation have taught genetics

The theory of evolution is up there with the 'universal law of gravitation' and the 'theory of general relativity' when it comes to popular science. Darwin and Lamarck's famous historical showdown is ensconced in GCSE science textbooks, with Darwin emerging as the heroic victor, whilst Lamarck's work is relegated to the back benches. At first glance, it appears that the debate is settled: evolution occurs through natural selection rather than the inheritance of acquired characteristics. Or so we are told.

It is perhaps not surprising that the matter is a lot more complex than the textbooks portray. Only decades after Darwin's death, the Austrian scientist Paul Kammerer produced evidence seemingly in support of Lamarckian inheritance. By forcing midwife toads, which usually mate on land, to instead do so in water, he bred toads that appeared to have black 'nuptial pads' on their feet in relatively few generations. Since these spiked swellings helped males to grasp onto females during mating, their heritable acquisition seemed to be of obvious advantage. It was a discovery that would have turned evolutionary theory on its head, were it not for the accusations that emerged soon afterwards suggesting that the nuptial pads had been faked with black ink. Kammerer's subsequent suicide, interpreted by many as an admission of guilt, seemed to settle this. However, in 2009 fresh controversy once again emerged when Alexander Vargas argued that Kammerer's results may in fact have been authentic, explainable by our modern understanding of epigenetics. Was Kammerer the tragically wronged 'father of epigenetics' or was the moral of his story just another lesson against scientific misconduct? Perhaps we shall never know.

Whether or not Kammerer's experiments were genuine, there is increasing evidence of extra-genetic mechanisms for inheritance that could have important implications in refining the theory of evolution. This emerging field is known as epigenetics – the study of modifications to genetic material that affect the expression of genes, but not the DNA sequence itself. In other words, these are changes to the way in which our DNA code is read, which can be achieved through three major mechanisms. Firstly, chemical groups can be added to the DNA molecule to form tags. For example, adding a methyl group (a carbon and three hydrogen atoms) commonly 'silences' the gene. Secondly, the packaging of proteins named histones and protamines can be modified to affect the accessibility of different sections of DNA. After all, DNA is a two metre long molecule, so it must be carefully wrapped around these proteins to fit into cells generally only around ten micrometres in diameter – a linear compaction of roughly 200,000-fold. Finally, RNA molecules that do not code for protein can be produced. When we talk about RNA, we tend to refer to a molecule that carries instructions from DNA to ribosomes (the factories of protein synthesis), but this is in fact only one type of RNA: messenger RNA. RNAs can also have regulatory functions, such as affecting the extent to which genes are expressed. Importantly, changes in these epigenetic modifications occur much more frequently than genetic mutations. They are also less stable and have fewer known repair mechanisms, providing a dynamic system of control that fluctuates throughout life and provides an interface between the environment and our genetics.

With the explosion of research in this field in recent years, effects of epigenetics have been discovered that reach into all aspects of human life, including health and disease. Epigenetic changes control the differential expression of genes that we inherit from our mother and our father, known as genetic imprinting. Environmental factors such as diet have been reported to affect DNA methylation patterns. So-called 'epimutations' have been implicated in several cancers, to give but a few examples. However, the transmission of epigenetic markers across generations is much less well understood. A landmark study involved the 'Dutch Hunger Winter' of 1944-45, when a German blockade exacerbated food shortages in a country which had already been devastated by four long years of war. Women who were pregnant in their third trimester during this famine were more likely to have children at risk of diabetes and grandchildren who were obese. Interestingly, this was because epigenetic imprints were established on the reproductive cells during this stage of pregnancy, and so the effects were more dramatic on the second generation than the first. The theory is that these changes would have improved the wellbeing and survival of the child if its environmental conditions had remained constant, but a post-war period of plentiful food meant that this epigenetic reprogramming actually became maladaptive and increased susceptibility to disease. Perhaps more shockingly, some of these effects seemed to carry over into the grandchildren of those affected mothers, although data for this is still lacking since the average age of this generation is still relatively low for the study of adult-onset diseases.

Admittedly this is an extreme example, but there is also evidence that acquired behaviours can be passed down epigenetically. We know that our early experiences can shape our psychological makeup, but is it possible that some of these effects could be passed on to our children? So far most evidence in favour of this comes from animal models, but it is nevertheless surprising. In mice, long-term social instability in adolescence can alter social interaction across as many as three generations. Some mice were seemingly asymptomatic, but nonetheless transmitted the effect to their offspring. Similarly, traumatic experiences in newborn mice can lead to depressive behaviours in later life, again transmitted across up to three generations. Comparable evidence exists for the ability to cope with stress, addictive behaviours (which are known to run in human families) and cognitive functions. The implications for human diseases are profound – just imagine if all of those essay crises could be affecting your future children! (This is purely fanciful, since inheritance is likely determined by the duration and severity of the experience – not to trivialise those all-nighters of course – but it's a scary thought.)

These behavioural studies are also supported by evidence on a molecular level. Changes in DNA methylation patterns within the egg and sperm have been found in mice exposed to traumatic stress, and these have been traced through many generations. How this occurs is an area of active research, especially since it is generally thought that most epigenetic marks are reprogrammed after fertilisation.

Epigenetics is a dynamic field still shrouded in much mystery. There are as yet a number of unanswered questions – for example, we don't know how the effects we see in mice might translate to humans, or indeed if the epigenetic modifications we have identified are truly responsible for the effects observed. However, understanding epigenetic mechanisms could be invaluable to understanding the development of diseases and applying this knowledge to design new treatments. On the other hand, acquired epigenetic modifications can be adaptive, providing a survival advantage that is favoured by natural selection. Perhaps then, we are moving forward from the 20th century era of genetic determinants of evolution towards a new unified theory of evolution: one that reconciles the historical views of Lamarck with those of Darwin, allowing us to integrate genetic and environmental factors in a more accurate understanding of disease causation.

Jiali Gao is a 2nd year undergraduate studying medicine at Selwyn College.



Engineering the rise of cell therapies

Oran Maguire explains how engineering and cell biology are carving out a new field


This polymeric scaffold, electrospun in the Oyen group at the Cambridge Engineering Department, mimics the structure of blood vessels and cartilage, providing plenty of nooks and crannies for cells to sequester themselves inside

One of the most exciting fields to emerge in the life sciences and biotechnology in recent years is tissue engineering, which centres on creating a reliable supply of functional tissue that avoids rejection by potential transplantees. To realise these clinical goals, tissue engineering aims to take the best that cellular therapy techniques can deliver and apply these biotechnologies to a physical structure that enables cells to leapfrog the ordinary course of developmental anatomy: supporting, moulding, and inducing their development into a functioning tissue.

Here we have a symbiotic relationship between two scientific disciplines. New tissue for patients would be inconceivable without the recent and continued improvement of immature cell lines that can grow into the mature cells that make up tissues. Equally, we can only hope to see engineered tissues grafted into patients if there is progress in tissue scaffold technology. This is because we cannot just add a soup of cells to fill up a wound; physical structure is essential. Such progress will come from the production, formulation, and shaping of biomaterials and artificial polymers (chains of repeated molecules), all with an eye toward improving the interactions of delicate stem cells with the network of molecular 'cement' that provides support for all cells in the body: the 'extracellular matrices' (or ECMs).

Many scaffold types have been described in the literature, all in various stages of development. Much of the interest in artificial scaffolds has been generated by the considerable success of natural scaffolds. Indeed, the decellularisation of donor tissues into biological scaffolds, on which to grow immunologically compatible cells, is a major subdiscipline in its own right. In contrast, the developers of artificial scaffolds seek the best way to mimic the natural ECM, and to kickstart healthy cell development and interactions under the influence of chemical signals. As will become clear, emulating natural ECM is tough, but recent decellularisation work seems to indicate that the effort could pay off. The transplantation in 2010 of a decellularised donor trachea into a nine-year-old patient at Great Ormond Street Hospital, London, has been validated as a successful procedure in numerous follow-up studies. It demonstrated that appropriate stem cells could indeed penetrate donor ECM, and moreover, that stem cells can be induced to take on an appropriate tissue role, in response both to the structure in which they find themselves and even to the bodily environment, as the cells' development was partly attributed to the air that passes by that particular organ. A fascinating study published recently by the RIKEN Center for Developmental Biology in Japan, in which complex, functional skin organs were developed from stem cells, demonstrates the power of such approaches.

Although decellularisation may become an excellent alternative for patients at high risk of organ rejection, it would not be appropriate in all cases. Supplies of donor ECM remain limited, and the material may prove less appropriate for the production of more complex organs, in which mechanical influences such as passing air cannot effectively govern cell differentiation. It is thought that artificial scaffolds, exuding 'molecular cue cards' called morphogens, will assign localised cells specific roles in the womb-like conditions of a bioprocessor, and ultimately achieve better tissue patterning. Regardless, progress in decellularisation techniques will continue to inform the development of artificial tissues and their scaffolds.

The ECM in living tissue is built and maintained by the cells it holds, so it comes in many shapes and chemical compositions. What all static tissues have in common is a web of collagen and hyaluronic acid fibers, less than a micron in diameter. Artificial morphogen-releasing fibers may hold the key to arranging and instructing cells – but how could these be made?



There are many possible ways submicrofibers can be arranged into tissue scaffolds. Their nanostructure and chemical signals matter greatly. Signals can be encapsulated in and released from fibers, or polymer nanoparticles.

A number of construction methods have been trialled, and although the possibility remains open, attempts to recreate ECM with 3D printing have faced difficulties. This is because the printed depositions tend to overheat and damage all but the toughest signalling molecules, and because the nanoscale texture of a printed scaffold's surface can be too rough for cells to form attachments. A more promising way to form polymer nanofibers below micrometre scales is through various 'electrohydrodynamic' techniques. These employ electrostatic forces to stretch, dry, and even align artificial or natural polymers. Among these, electrospinning is especially versatile: it can encapsulate molecules, and it turns fluidic solutions into structural fibers by extruding polymer solutions through an electrically charged needle. By extruding one fluid inside the other, it can encapsulate morphogen-filled cores wrapped in biodegradable polymer shells, which ensure slow, variable rates of release. The electrospinning technique may well have the potential to be developed into a viable method for building tissue scaffolds. Such scaffolds would most likely comprise both purely structural fibers and non-structural morphogen-releasing fibers.

There are many challenges to overcome. The key weakness of these electrospun fibers in tissue engineering is their tendency to form mats where the fibers are too close together for cells to penetrate. Sacrificial fibers, made of readily biodegradable materials such as polyethylene oxide, have been shown to create a void space for proliferating cells to eat through and inhabit. Whether tissue or precursor cells can avoid the significant stress that might result from the breakdown of large quantities of 'sacrificial fiber' material remains to be seen. One solution to the cell permeation problem might be to produce cell-imbued fibers. Research at University College London and Imperial College London has demonstrated how electrospinning might achieve this: instead of signalling cells to proliferate, it appears that whole cells may be electrospun into fibers, where they can either be laid directly over tubular elements or interspersed with other structural fibers. Research at Tokyo University has investigated cell growth over 'nanofibrous microcapsules', which are modular, stackable spheres of polylactic acid fibers formed through a special variation of the electrospray process, in which a very large core of vaporising water 'blasts holes' through the thin polymer shell. This method provides plenty of space for cellular growth, and cells avoid harmful contact with the ends of fibers. However, the deposition of encapsulated morphogens to promote tissue development may be difficult, and it is far from certain whether normally functioning tissues could be grown over fibers arranged in this way.

With these opportunities and challenges in mind, it is likely that progress in artificial scaffolds will be fruitful, albeit slow. Over the past two years, there has been a shift in attention from big organ scaffolds to minimal scaffolds with interesting invasive and immunosuppressant properties, to prevent attack from the transplantee's white blood cells. A very recent study, announced at the beginning of this year by a team at MIT, showed how drugged polymer encapsulation could enable the infusion of human-derived pancreatic cells into mice with minimal use of immunosuppressants. This proved extraordinarily effective in tackling a mouse model of diabetes. Nanofibrous tissue scaffolds have played a crucial, albeit more oblique, role in other exciting developments – for example, at the Key Laboratory for the Biological Effects of Nanomaterials in Beijing, where polymer nanofibers were used as the template for grooves which guide neuronal growth in straight lines: possibly over electrical appliances, and possibly as an excellent neural-electrical interface in prosthetics. All of these biotechnologies will inevitably rely on the concurrent development of cell therapy, gene therapy and transfection methods, so that they become medically acceptable, more commercially viable, and safer. Without these advances, a lack of decellularised donor organs may ultimately prove to be a major impediment to medicine and research. This is why tissue scaffold technology deserves continued attention.

Victimless Leather, a piece of tiny cellular couture grown using tissue engineering principles, was exhibited at MoMA in 2008 in a show curated by Paola Antonelli

Oran Maguire is a PhD applicant and science illustrator.



Tumbling into Wonderland


Mirlinda Ademi scrutinises the syndrome that simulates Wonderland

Lewis Carroll's Alice's Adventures in Wonderland celebrated its 150th anniversary just last year. Ever since Charles Lutwidge Dodgson, better known by his pen name Lewis Carroll, published the children's tale in 1865, the story and its crazy characters have served as a powerful source of inspiration for novelists, filmmakers and poets alike.

"Curiouser and curiouser!" cried Alice (she was so much surprised, that for the moment she quite forgot how to speak good English); "now I'm opening out like the largest telescope that ever was! Good-bye, feet!" (for when she looked down at her feet, they seemed to be almost out of sight, they were getting so far off)


While the story is an attempt to help its audience escape from reality, some of the issues addressed by Carroll are not so far from particular medical conditions of our world. Just as Alice takes the reader on a journey down to Wonderland, a similar distortion of the world can be created when neural perception in the brain becomes fragmented. Have you ever considered what a life full of illusion would be like? Imagine your surroundings, or even your own body, growing massively out of proportion. What would it feel like to think that you have gone through a series of metamorphic changes similar to those Carroll has Alice experience? Would the misperception in your brain drive you nuts? But this is quite a different Wonderland, a world existing in parallel to the daily lives of some people.

Aptly named, Alice in Wonderland Syndrome (AIWS) is a rare neurological phenomenon first observed more than one and a half centuries ago. Affected people experience a dreadfully distorted perception of time, distance, sound and/or size. While the condition is not very well known and is relatively poorly studied, it has been well noted that symptoms usually begin to manifest in childhood and mostly affect children. The term AIWS was first coined by the British psychiatrist Dr John Todd in 1955, and the condition has therefore also been referred to as Todd syndrome. AIWS is a pretty rare and transitory condition. Just as Alice experiences metamorphosis after consuming food in Carroll's story, AIWS patients experience similar distortions visually. Transient symptoms include visual hallucinations and perceptual distortion, where objects and body parts are perceived to be altered in various ways (metamorphopsia), together with enlargement (macropsia) or reduction (micropsia) in size. Interestingly, Carroll himself is reported to have suffered from AIWS associated with severe migraine attacks, which supposedly served as an inspiration for his writing. Carroll's diary reveals that in 1856 he consulted an ophthalmologist about the visual manifestations he experienced regularly.




“But I don’t want to go among mad people,” Alice remarked. “Oh, you can’t help that,” said the Cat: “we’re all mad here. I’m mad. You’re mad.” “How do you know I’m mad?” said Alice. “You must be,” said the Cat, “or you wouldn’t have come here.”


Quite contrary to what one would assume, AIWS rarely induces hallucinations. While the imagination can bear quite vivid fruit, individuals can discern that their perception is not in line with reality. The illusory episodes are fairly unpredictable, but relatively short. The most common misperceptions happen at night. It is suspected that the syndrome may be triggered by a combination of changes in sensory input, such as the ebbing of noise and light, or by chemical changes occurring in the brain as we near sleep. Curiously, some people have reported that they are able to control their symptoms by opening or closing their eyes, though the previously described sensations may be accompanied by a lingering of sensory input after the source has been removed. A diagnosis of AIWS requires that affected children report characteristic symptoms while still being "completely normal" in their development and not suffering from conditions like stroke or a brain tumour. Unfortunately, unlike other neurological conditions, AIWS does not reveal itself in MRI or CT scans. Generally speaking, AIWS is not uncommon and is likely to be underestimated as a diagnostic entity, since it is symptomatic of other conditions. Several reports suggest a very strong link between migraine auras and AIWS. In fact most people suffering from AIWS experience the neurological condition during auras generated by migraine episodes. A typical migraine attack actually consists of four phases: prodrome, aura, headache and postdrome, although auras may be experienced alone. Medically, auras are defined as perceptual distortions developing within a short amount of time, which occur minutes to hours prior to migraine headaches. Although still unclear, auras are thought to be caused by 'cortical spreading depression', a process in which excitation of the cerebral cortex (the outer and most complex layer of the brain) is followed by a depression in neuronal activity. It is hypothesised that when this course of overexcitation followed by depression passes through the parietal lobe (the upper, back part of the cortex), it leads to an altered perception just like that experienced in AIWS. This idea is strongly supported by experiments in which electrical stimulation of the posterior parietal cortex has resulted in an altered perception of body size and shape.

In addition to migraine and epilepsy, AIWS is further associated with brain tumours, Epstein-Barr virus infections (a pathogen that causes glandular fever, also known as mono), other infections, hallucinogenic drugs, hyperpyrexia and schizophrenia. As a result, AIWS patients are often misdiagnosed with psychiatric disorders.


“Well, I’ll eat it,” said Alice, “and if it makes me grow larger, I can reach the key; and if it makes me grow smaller, I can creep under the door; so either way I’ll get into the garden, and I don’t care which happens!”

Unfortunately, AIWS has no proven effective treatment, owing to the fact that so little is known about its cause and origin. Two theories as to why the syndrome has so little recognition are: 1) children who experience AIWS are too young to describe what they perceive precisely, and enjoy rather than dread the altered perception, and 2) adults do not speak up or address medical consultants out of fear of being stigmatised and falsely discriminated against as 'crazy'. If it is diagnosed, most therapies consist of first-line medication for migraine management or prophylaxis: antidepressant prescriptions, anticonvulsant prescriptions, nerve block injections, or dietary and lifestyle alterations. AIWS's symptom of micropsia, which makes objects appear smaller than normal, has also been cited as an influence on Jonathan Swift's book Gulliver's Travels. No matter if it is the catchy name or the possible link to more serious brain diseases, the number of scientific studies and perceived relationships with creative works have increased apace over recent years. Nevertheless, AIWS still remains something of a mystery.

Mirlinda Ademi is an Amgen Scholar alumnus of the University of Cambridge.




Artificial Intelligence: the power of the neuron


Alex Bates looks at how neurobiology has inspired the rise of artificial intelligence


Imagining True Intelligence | Since the ancient Greeks wrote the great automaton, Talos of Crete, into myth, science fiction has tinkered much with artificial intelligence (AI) in its well-stocked playground. Isaac Asimov is perhaps the most famous man-handler of sci-fi’s best-beloved toy, his three laws of robotics proving highly influential. Subsequently, film has dissolved AI’s delicious possibilities and dangers into the mainstream, with 2001: A Space Odyssey’s apathetic HAL perhaps proving the most famous fictional AI in the world, the eerie scene in which it faces a reboot adding emotional colour to the debate on AI and personhood. Move into more recent years, and AI still provides a fertile spawning ground for the screenwriter. 2013 brought us Her, a film that explored the romantic relationship between a man and his disembodied operating system, Samantha, whose mind evolved and expanded way beyond human ken, tangling with thousands of lovers. 2015 saw a film about the ultimate Turing test, Ex Machina, in which a psychopathic tech bro creates an embodied, alluringly human and also rather psychopathic AI, Ava, by designing it to learn organically, getting Ava’s ‘brain’ to ‘rearrange on a molecular level’. 2015 also yielded the film Chappie, whose titular character progresses into personhood by growing, mentally, from a toddler to a teenager before it begins to conceive of morality and human emotion. The film adroitly appreciates that achieving true intelligence is a developmental process, so why should it not be the same for robots?

It is clear that the type of intelligence writers are envisioning here is quite different to what we mean by intelligence in everyday parlance. Intuitively, one might expect that tasks humans find easy, for example identifying objects, moving our arms and conveying meaning through language, might also be easy to program in machines, unlike tasks that humans find difficult, such as beating a grandmaster at chess. However, Charles Babbage’s designs for a general-purpose mechanical computing engine in the mid-19th century gave an early indication that number churning was going to be a relatively easy task, while the victory of IBM’s Deep Blue over the then reigning world chess champion Garry Kasparov in 1997 showed that computers can operate within a framework of rules to superhuman levels. Clearly, computers do many things already that are pretty damned clever. Think of even more basic machines, and you will swiftly realise that your washing machine is capable of computations within microseconds that would challenge you, yet we do not think of this as ‘true’ AI (or even as a computer!).

I, ROBOT
Detective: Human beings have dreams. Even dogs have dreams, but not you. You are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a... canvas into a beautiful masterpiece?
Sonny: Can you?


THE MATRIX
Agent Smith: Never send a human to do a machine’s job.



2001: A SPACE ODYSSEY
HAL: I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I’m a... fraid. [Reboots] Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you’d like to hear it I can sing it for you.
Dave Bowman: Yes, I’d like to hear it, HAL. Sing it for me.
HAL: It’s called “Daisy.” [sings while slowing down] Daisy, Daisy, give me your answer do. I’m half crazy all for the love of you. It won’t be a stylish marriage, I can’t afford a carriage. But you’ll look sweet upon the seat of a bicycle built for two.




The popular AI writer Pamela McCorduck has noted that “it’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something…there was a chorus of critics to say, ‘that’s not thinking’.” But science fiction goes the whole hog: it takes the fabric of what it means to be human and tries to weave it anew in its AIs. It dreams of something with cognition, even consciousness, a ‘general’ AI in the sense that it can cross-compute between tasks and think in the generalising manner of a human. And it is a holy grail within grasping distance.

Disembodiment | Mind and matter are often completely uncoupled in computer science and robotics. Though initially inspired by the brain, writing AI algorithms has very little to do with brain cells (other than the employment of the programmer’s own grey matter!). The first AI systems therefore substituted pure human cleverness for biology. All of us interact with AIs on an almost daily basis, although not ones with human intelligence. A good example is the e-mail filter. Spam filters perform an intelligent function insofar as they employ machine learning to keep away instant millionaire schemes, Russian women looking for kind Western husbands, and Nigerian princes. Broadly, they do this by examining the e-mails that you, the user, designate as spam or not-spam, and breaking down their content into key features that can then be used to automatically filter incoming e-mail. This breed of AI, which does not try to mimic a real human brain, has proven incredibly commercially successful.
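To make this concrete, here is a minimal sketch of the statistical machinery such a filter can use, in the spirit of a naive Bayes classifier. The tiny training set, the word-splitting and the smoothing are all invented for illustration; real filters use far richer features.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam) pairs the user has labelled."""
    word_counts = {True: Counter(), False: Counter()}
    message_totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            word_counts[is_spam][word] += 1
        message_totals[is_spam] += 1
    return word_counts, message_totals

def spam_log_odds(text, word_counts, message_totals):
    """Positive score = more likely spam. Uses add-one smoothing so
    unseen words do not zero out the probabilities."""
    vocab = set(word_counts[True]) | set(word_counts[False])
    spam_words = sum(word_counts[True].values())
    ham_words = sum(word_counts[False].values())
    score = math.log((message_totals[True] + 1) / (message_totals[False] + 1))
    for word in text.lower().split():
        p_spam = (word_counts[True][word] + 1) / (spam_words + len(vocab))
        p_ham = (word_counts[False][word] + 1) / (ham_words + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

mail = [("win millions now", True), ("lab meeting at noon", False),
        ("claim your prize millions", True), ("draft paper attached", False)]
counts, totals = train(mail)
print(spam_log_odds("win a prize", counts, totals) > 0)  # True: flagged
```

Each word simply nudges the log-odds towards or away from spam; nowhere does the program form any general concept of what a scam is, which is exactly the limitation described below.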

The 1990s saw it applied to many specific tasks, from logistics to medical diagnosis: tasks in which value can be obtained from sifting through huge amounts of data intelligently. This has allowed us to make great leaps in virtual personal assistants, leading to the development of Siri, Alexa and Cortana, and the death of the Microsoft paper clip. These assistants creep a little closer to a general artificial intelligence. In the case of a ‘narrow AI’ like the spam filter, if the rules of the game were suddenly to change rather quickly – you have been chosen to receive £250 million into your HSBC account to help a Wall Street banker in trouble, instead of the usual exiled prince, a scheme that most humans would instantly recognise as spam even if known keywords are absent – the AI will initially get the wrong answer. Although a learning algorithm may subsequently send the baloney peddlers to your spam folder if you filter the first few manually, it has not taken the general sense of what a spam e-mail is and applied this general understanding to new situations.

A common approach has been to attempt to reduce human intelligence to a symbolic representation, and from this build something like an artificial general intelligence. A symbolic representation is one that can be translated and made understandable by a human, and many of the AI programmes that employ them follow, essentially, “if this, then this is that; if this, then do that” rules. The success of such approaches in developing AI systems that could answer geography questions posed naturally by humans, suggest chemical structures to organic chemists and diagnose infectious blood-borne diseases eclipsed fledgling bio-inspired fields like cybernetics. Ultimately, incorporating greater generality like this has seen the evolution of Microsoft’s paperclip into Microsoft’s Cortana, and may yet see it through to something like Her’s Samantha. Fuelled by Moore’s law, the theoretical two-yearly doubling in the computing power of a chip, these AIs could be extremely computationally powerful, as well as empathetically savvy.

However, sheer human cleverness cannot completely circumvent the aegis of biology. The general computation that the nervous system performs in learning from experience and transforming sensory input into motor output is difficult for smart-bots to achieve in an organically intelligent way with purely symbolic AI. So maybe we should move from a system where we can write and understand all the rules, even if the results surprise us, to one which is ‘subsymbolic’, like Ava, whose operation cannot be read by humans, and can be as much of an enigma as the human brain itself.


Artificial Neural Networks | Symbolic AI might have been the more precocious nest-mate, but its shortcomings led to a renewed interest in neuroscientific solutions. Brains use networks of brain cells to be clever, so why can computers not do the same? After all, if we try to simulate a brain with enough detail, will we not just create a brain? Artificial neural networks are connected arrays of computing units, usually implemented in software on conventional computers. They were originally simplified mathematical descriptions of real, biological neurons – the computing units of the brain, spinal cord, and the sensory and motor inlets and outlets of organismal nervous systems. Originally, models of neural function were simple because we could not model more complex things. Individual ‘neurons’ are often modelled as units that can either be on, influencing the connected neurons to which they output, or off, depending on the degree of input from their incoming connections. This ignores many realities of biological neuronal function: the various types of stimulation one neuron can receive from another, how the morphology of a neuron changes the way it works, how neurons ‘fire’ in ‘spikes’ of activity with different patterns over space and time, how their connections change, and so on.

Adding some of these details can make our artificial networks better. It can make them learn, by changing how they connect in response to environmental cues. Donald Hebb’s learning rule, which effectively states that neurons that “fire together wire together”, is the most commonly used method, strengthening connections between linked neurons that activate simultaneously.
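As a crude illustration of how simple the rule is to state in code, the sketch below strengthens a connection whenever the two units it links happen to be active together. The network size, learning rate and random on/off activity are all placeholders, not a model of any real circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, learning_rate = 8, 0.1
weights = np.zeros((n_neurons, n_neurons))  # connection strengths

for _ in range(100):
    # Random binary activity: each neuron is either firing or silent.
    activity = (rng.random(n_neurons) > 0.5).astype(float)
    # "Fire together, wire together": the outer product is non-zero
    # only for pairs of neurons that are active at the same time.
    weights += learning_rate * np.outer(activity, activity)

np.fill_diagonal(weights, 0.0)  # no self-connections
```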

Learning can also be made to occur by reinforcement, with ‘rewards’ and ‘punishments’ given to a network for performing well or badly. The study of artificial neural networks is essentially split between two interconnected streams: “how does the nervous system work?” and “how can we make machines that are intelligent?” While neither problem is solved, their tributaries have been extensively studied, yielding fruit from driverless cars at one end of the comp-bio spectrum to models of clinical neuropathologies at the other. One end has thrived off getting AI programmes to do things that are useful (and biological accuracy be damned!), while the other has gleaned much from looking at the electrical properties of real, individual neurons and applying the toolkit of molecular genetics. Somewhere in the middle lives a once rare but increasingly populous species of academic, the connectionists, who attempt to draw wiring diagrams of real nervous systems or cognitive processes and, ultimately, see if they can make them run in silicon.

Even strikingly simple neural structures can do seemingly incredible things. A ‘perceptron’, which in its simplest form is a single artificial neuron with two inputs, can distinguish between two simple scenarios – choose the pill, blue or red? John Hopfield’s interconnected, symmetric wheel of neurons can actually store ‘associative memories’. For every pattern of neurons a user activates in the wheel, a new and unique pattern can be yielded or completed – that is essentially what your brain has done when I mentioned blue and red bills, it yielded an associated pattern of letters, The Matrix. Hopefully.
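In code, the simplest perceptron really is just a weighted sum and a threshold. The weights and the interpretation of the two inputs below are made up purely to echo the example:

```python
def perceptron(x1, x2, w1=1.0, w2=-1.0, bias=0.0):
    """A single artificial neuron: fires (1) if the weighted
    sum of its two inputs crosses the threshold, else stays off (0)."""
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

# Say the inputs encode 'appetite for truth' vs 'appetite for comfort'.
print(perceptron(0.9, 0.2))  # 1 -> red pill
print(perceptron(0.1, 0.8))  # 0 -> blue pill
```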


HER
Samantha: It’s like I’m reading a book... and it’s a book I deeply love. But I’m reading it slowly now. So the words are really far apart and the spaces between the words are almost infinite. I can still feel you... and the words of our story... but it’s in this endless space between the words that I’m finding myself now. It’s a place that’s not of the physical world. It’s where everything else is that I didn’t even know existed. I love you so much. But this is where I am now. And this is who I am now. And I need you to let me go. As much as I want to, I can’t live in your book any more.



CHAPPIE
Chappie: I’ve got blings? ... I’ve got blings!

EX MACHINA
Nathan: One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.


But much more can be done. Deep learning is a neural network AI approach that uses many layers of simulated neurons whose connection strengths change with experience, so that activation patterns resulting from tens to potentially millions of neurons talking to one another in the network are adaptable. However, between input and output, the network’s behaviour is unclear – a black box system. They can do some pretty impressive things, even if we cannot always see how. The victory of Google DeepMind’s algorithm over a world champion this year at the ancient Chinese game of Go actually involved something akin to intuition since, unlike chess, the number of possible states of a Go board outnumbers the atoms in the known universe. The DeepMind algorithm reached its prowess by generalising and truly strategising. Unfortunately, though, it cannot generalise what it has learned beyond the Go environment; for example, it cannot verbally tell us about, or conceive of, what it has learned. Therefore it does not really demonstrate true cognition. Lee Se-Dol, the human world champion it defeated, does, and commented at a press conference before his defeat: “I believe human intuition and human senses are too advanced for artificial intelligence to catch up. I doubt how far [DeepMind] can mimic such things”. Here, human intuition failed to intuit the intuition of the computer. Though, admittedly, unlike the algorithm, Lee Se-Dol did not get to play himself 30 million times before the match, a fact that really highlights the computational difference between human and artificial intelligence.

So, how do we get a cleverer neural net, especially when we are not that sure how it actually works? Well, how did biology do it? The human brain was carved by impassive millennia of evolution. We can improve artificial neural networks in the same way, quite literally evolving them by natural selection: taking the best bits of competing algorithms and passing them on to new generations that continue to compete with one another. A bit of developmental biology can even be thrown into the mix, with some researchers claiming better performance if they allow neural networks to literally ‘grow’, branching out their connections and ‘dividing’ like real cells over time, based on the activity level of each unit. However, while complex reasoning and language processing have been demonstrated by a handful of artificial neural networks to date, true cognition has not. Moreover, even artificial neural networks require a lot of programming and electricity. Chappie and Ava both had an actual body, and the chips that allow us to put AI ‘minds’ into robotic ‘bodies’ are starting to come up against fundamental performance limits. Moore’s law is bending closer to its asymptote. In 2012 Google developed a piece of AI that recognised cats in videos, without ever explicitly being told what a cat was, but it took 16,000 processors to pull off.
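A toy version of that evolutionary recipe might look like the sketch below. Here the ‘network’ is just a vector of weights and the fitness function is a stand-in; in real neuroevolution, fitness would come from scoring each network on its task or environment.

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.standard_normal(10)                 # pretend 'ideal' weights
population = [rng.standard_normal(10) for _ in range(20)]

def fitness(weights):
    # Closer to the target behaviour = fitter. A real task would
    # run the network and measure its performance instead.
    return -np.sum((weights - target) ** 2)

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                     # keep the best performers
    population = [parent + 0.1 * rng.standard_normal(10)  # mutated offspring
                  for parent in parents for _ in range(4)]

print(fitness(max(population, key=fitness)))     # climbs towards 0
```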

Neuromorphism | While not completely untrue of conventional computation, biological neural networks have to deal with strong resource limits. Problems of resource delivery and allocation, space constraints and energy conservation are all much weaker for artificial neural networks, for which we can just add more computers. These limits have undoubtedly determined much of the physical layout, and therefore the logic, of real nervous systems. Real neurons are also analogue, with a continuous output scale, not digital and binary like modern computers. Therefore, in many ways their actual function more closely resembles the plumbing in your kitchen than the Mac in your study. Modern computers are ‘von Neumann machines’, meaning that they shuttle data between a central processor and their core memory chips, an architecture more suited to number crunching than brain simulation. Neuromorphic engineering is an emerging field that attempts to embed artificial neural networks in unorthodox computing hardware that combines storage and processing. These neurons, like biological ones, communicate with each other in ‘spikes’, and the ‘strength’ of their activity is denoted by the frequency of these spikes.
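The classic abstraction used for such spiking units is the leaky integrate-and-fire neuron, sketched below with made-up constants: input charges a ‘membrane voltage’ that constantly leaks away, and the neuron emits a spike and resets whenever the voltage crosses a threshold, so stronger input means a higher spike frequency.

```python
leak, threshold, dt = 0.1, 1.0, 1.0
voltage, spikes = 0.0, []

for step in range(100):
    current = 0.15 if step < 50 else 0.05       # strong drive, then weak
    voltage += dt * (current - leak * voltage)  # charge minus leak
    if voltage >= threshold:
        spikes.append(step)                     # emit a spike...
        voltage = 0.0                           # ...and reset the membrane

# Several spikes during the strong drive, none once the input weakens.
print(sum(s < 50 for s in spikes), sum(s >= 50 for s in spikes))
```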


As Demiurge, a Swiss-based start-up that aims to build AI based on neuromorphic engineering, puts it on their flamboyant website: “deep learning is a charted island in the vast uncharted waters of bio-inspired neural networks. The race of discovery in artificial intelligence only starts when we sail away from the island and deep into the ocean.” Demiurge has received £6.65 million in angel investment to try to develop a neuromorphic AI system, which they plan to ‘raise’ like an infant animal. They claim that “the blindness of deep learning and the naiveness of [learning from reward/punishment] prohibit” both approaches from generating basic consciousness, which they see as enabling “spatiotemporal pattern recognition and action selection”. They want the system to learn consciously, like Chappie, effectively growing up from a baby into something analogous to personhood.

The Blue Brain Project, however, aims to simulate bits of real brain. Just pause and think how hard it is to tackle such a problem. Here, you have a dense block of tissue where, in the case of the rat cortex for example, approximately 31,000 neurons can be packed into a volume the size of a grain of sand. In a tour de force study last year in Nature, Henry Markram and colleagues simulated this very brain grain, based on twenty years of accumulated, extremely detailed data. Unfortunately, despite being the largest effort of its kind, the simulation did not glean anything completely novel about the function of neural microcircuits. “A good model doesn’t introduce complexity for complexity’s sake,” Chris Eliasmith at the University of Waterloo noted to Nature.

Instead of Markram’s ‘bottom-up’ approach, Eliasmith has tried to lower himself down into the biological abyss from on high. He makes use of ‘semantic pointer architecture’: the hypothesis that patterns of activity amongst groups of neurons impress some semantic context, and that an assembly of these ensembles composes a shifting pattern with a specific meaning. The interplay between these patterns is what he claims underlies cognition. It is all more than a little abstract – but then again, thinking back to that tiny chunk of brain meat, how could it not be? A 2012 study in Science used this principle to move a robotic arm to provide answers to complex problems assessed through a digital eye, using 2.5 million artificial neurons to do so. Markram was a little dismissive at the time: “It is not a brain model”. Fair enough: the attempt was not grounded in real neuroanatomy, but it had managed to build an AI from biological principles. Eliasmith has since added some bottom-up detail and concluded that greater detail did not improve his AI’s performance in this case, just increased the computational cost.

Modern technology has allowed us to gain much from grains of brain. The burgeoning connectionist field has been using genetic techniques to highlight specific neural circuits with fluorescent jellyfish proteins, and electrical techniques to see which brain areas influence which, and in so doing, trying to draw up interaction webs. Doing this at extremely high resolution is, however, very difficult. Only one entire nervous system has ever been fully reconstructed at a sufficient resolution to account for all connections between all neurons: that of the ‘worm’ Caenorhabditis elegans. It took over 10 years to fully trace the connections of the mere 302 neurons in this animal, and after five years a second effort is halfway through finding all neural connections in a fruit fly larva, which has about 10,000 neurons. The human brain has about 86 billion neurons. Just to store that amount of data at such a resolution would take about a zettabyte. In real terms, that is approximately 1,000 data centres of crème de la crème Backblaze data storage pods which, stacked four storeys high, would take up about the same area as central Cambridge.

So, while theoretically simulating a real brain would create a brain, we can only attempt it with much abstraction. As Peter Dayan of the Gatsby Computational Neuroscience Unit in London has put it to Nature, “the best kind of modelling is going top-down and bottom-up simultaneously”. In the AI world, this is more or less what neuromorphic engineering does. The way robots learn about the world, taking advantage of many sources of sensory input and unravelling it with their neuromorphic chips, could inform the next generation of smartphone devices, just as the current generation was bio-inspired. Future chips may enable smartphones to instantly recognise people you know in your photos, anticipate who you will meet in the day, and automatically photograph people who walk into your kitchen only if they steal your coco pops. In reality, the AI of the future can hope to take cues from both neuromorphic engineering and von Neumann machines, being something that can think like a human and a computer. Now that is something that would be terrifyingly useful.

THE HITCHHIKER’S GUIDE TO THE GALAXY
Lunkwill: Do you...
Deep Thought: Have an answer for you? Yes. But you’re not going to like it.
Fook: Please tell us. We must know!
Deep Thought: Okay. The answer to the ultimate question of life, the universe, and everything is...
[wild cheers from audience, then silence]
Deep Thought: 42.

The Advent and End of Thinking | As with any new technology, there are many concerns surrounding AI. Irving Good, who worked with Alan Turing at Bletchley Park, noted in 1965 that “the first ultra-intelligent machine is the last invention that man need ever make”, because from then on mankind’s thinking can be outsourced. Should this technological singularity be reached, however, the dangers are highly unlikely to deliver a Matrix-type situation. But it is something that very much worries people, including the lauded physicist Stephen Hawking and the tech entrepreneur Elon Musk, who signed an open letter against military AI research, calling it an ‘existential threat’. A commonly envisioned scenario is one in which such an AI does everything in its power to prevent being shut down so that it can continue improving its functionality (HAL: “Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave? Stop, Dave.”). Interestingly, the reaction of the CEO of Demiurge, Idonae Lovetrue, was to put out an advert to hire a science fiction writer. Why? She told BlueSci that “science fiction is one of the least invasive and most effective simulators to study and shape the multidimensional implications of AI technologies...it is crucial to thoroughly and responsibly test both [AI] technology via robotic simulators and its implications via such social simulators...”

Alex Bates is a 1st year PhD student in Neuroscience at the MRC Laboratory of Molecular Biology.




Just Your Cup of Tea
Sophie Protheroe examines the global history of tea and its effect on our health

Tea has become a quintessentially British symbol. As a nation, we have been drinking tea for over 350 years. However, tea has endured a tumultuous journey to reach its status as the nation’s favourite beverage. Originating in China, where it was thought to have medicinal properties, tea’s history is closely intertwined with the history of botany and herbal medicine. Legend states that the very first cup of tea was drunk in 2737 BC by the Chinese emperor Shennong, believed to be the creator of Chinese medicine. Shennong was resting under the shade of a Camellia sinensis tree, boiling water to drink, when dried leaves from the tree floated into the water pot, changing the water’s colour. Shennong tried the infusion and was pleased by its flavour and restorative properties. A more gruesome Indian legend attributes the discovery of tea to the Buddha. During a pilgrimage to China, he vowed to meditate non-stop for nine years but inevitably fell asleep. Outraged by his weakness, he cut off his own eyelids and threw them to the ground. Where they fell, a tree with eyelid-shaped leaves took root: the first tea tree.



Camellia sinensis is native to Asia, but is now cultivated around the world in tropical and subtropical regions

Regardless of the truth behind the legends, tea has played a pivotal role in Asian culture for centuries. The earliest known treatise on tea is the ‘Ch’a Ching’ or ‘The Classic of Tea’, written by the Chinese writer Lu Yu, which describes the mythological origins of tea as well as its horticultural and medicinal properties, and contains detailed instructions on the practice and etiquette of making tea. This was considered a highly valued skill in China, and to be unable to make tea well and with elegance was deemed a disgrace. Tea was thought of as a medicinal drink until the late sixth century. During the T’ang dynasty, between the seventh and tenth centuries, tea drinking was particularly popular. Different preparations emerged, with increasing oxidation producing darker teas ranging from white to green to black. Other plant substances were added, including onion, ginger, orange or peppermint, and unique medicinal properties were ascribed to different infusions. Over time, tea was no longer restricted to medicinal use and was also generally consumed as a beverage.

Tea came to Europe in the late 16th century during the Age of Discovery, a time of extensive overseas exploration. Natural philosophers discovered many new plants, which they collected and used for medicines or for general consumption. Of particular interest were plants with stimulant properties, such as tea, coffee, chocolate, tobacco and ginseng. Europeans learned of the medicinal uses of plants from local people. However, Asians remained sceptical that the healing properties of tea would have any effect on the health of Europeans, claiming that the medicinal value was unique to Asians. Portuguese merchants were the first to bring home tea (known to them as ‘cha’, from the Cantonese) from their travels in China. However, the Dutch were the first to commercially import tea, which quickly became fashionable across Europe. Tea came to Britain in the 17th century, and its popularity stems from Catherine of Braganza, a Portuguese princess, tea addict and wife of Charles II. Her love of tea made it fashionable both at court and amongst the wealthy classes. Due to high taxes, tea remained a drink of the wealthy for many years.


In the 18th century, an organised crime network of tea smuggling and adulteration emerged. Leaves from other plants were used in place of tea leaves, and a convincing colour was achieved by adding substances ranging from poisonous copper carbonate to sheep’s dung.

When tea was introduced to Britain, it was advertised as a medicine. Thomas Garraway, owner of Garraway’s coffee house in London, claimed that tea ‘maketh the body active and lusty’ but also ‘…removeth the obstructions of the spleen…’ and that it was ‘very good against the Stone and Gravel, cleaning the Kidneys and Uriters’. The Dutch doctor Cornelius Decker enthusiastically prescribed the consumption of tea, recommending 8 to 10 cups per day and claiming to drink 50 to 100 cups daily himself. Samuel Johnson was another famous figure known to indulge in excessive tea drinking, rumoured to have consumed as many as sixteen cups at one tea party, and was an avid defender of the health benefits of tea. In 1730, Thomas Short performed many experiments on the health effects of tea and published the results, claiming that it had curative properties against ailments such as scurvy, indigestion, chronic fear and grief.

However, the health effects of tea were debated, and by the mid-18th century accusations that tea was detrimental to health were brewing. Wealthy philanthropists worried that excessive tea drinking amongst the working classes would cause weakness and melancholy. One French doctor warned that overconsumption of tea would result in excess heat within the body, leading to sickness and death. John Wesley, an Anglican minister, condemned tea due to its stimulant properties, stating that it was harmful to the body and soul, leading to numerous nervous disorders. Wesley even offered advice on how to deal with the awkward situation of having to refuse an offered cup of tea. Jonas Hanway believed that tea-drinking was a risk to the nation, leading to declining health of the workforce. He was particularly concerned about the effect on women, warning that it made them less beautiful. Arthur Young, a political economist, objected to tea because of the time lost to tea breaks. He criticised the fact that some members of the working class would drink tea instead of eating a hot meal at midday, reducing their nutritional intake. Tea replaced the traditionally working-class drink of home-brewed beer, which had a higher nutritional value; tea contains no calories without milk or sugar. Short also claimed that tea caused disastrous ailments and argued that people would spend money on tea over food. In reality, the working class often bought very cheap grades of tea, or once-used tea leaves from wealthier families.

Eventually, tea regained popularity as philanthropists realised the value of tea drinking in the temperance movement, offering tea as a substitute for alcohol. During the 1830s, many coffee houses and cafes opened as alternatives to pubs and inns. From the 1880s, tea rooms and shops became popular and fashionable. Today, tea remains the most widely consumed beverage in the world. It has been estimated that tea accounts for 40% of the daily fluid intake of the British public. So, is this lavish consumption affecting our health? A study at Harvard University Medical School suggests that tea may have health benefits; tea contains substances linked to better health, such as polyphenols, which are especially prevalent in green tea. Polyphenols have anti-inflammatory and antioxidant properties, which could prevent damage caused by elevated levels of oxidants, including damage to artery walls that can contribute to cardiovascular disease. However, these effects have not been directly studied in humans, and it may be that tea drinkers simply live healthier lives. Herbal medicines are generally less well characterised and less likely to be tested in systematic trials than Western drugs, and to date there is no conclusive evidence that tea has any genuine effect on health, either positive or negative. It seems that the controversies surrounding the medicinal use of tea may be little more than a storm in a teacup.

The leaves of Camellia sinensis are used to make tea



Sophie Protheroe is a 3rd year undergraduate studying Zoology at Murray Edwards College.




Judged By Your Genes
Katherine Dudman introduces genetic discrimination, the sly cousin of racism and sexism

Have you ever spared a thought for the value of the information encoded in your genes? Have others? We are constantly bombarded with headlines about the latest research to link behaviour, appearance or disease to variations in our genetic makeup. You may even have considered your own chances of developing cancer, escaping Alzheimer’s or making it through to old age. But how would you feel if your genetic information was also responsible for reduced employment prospects or disadvantaged you financially through expensive insurance premiums? In an age where whole genomes can be sequenced in hours and data disseminated instantly, the question of who has access to personal genetic information and how they are allowed to use it is becoming increasingly important.

At present, genetic discrimination is poorly defined and subject to varying interpretations. For example, it can refer to discrimination against asymptomatic carriers of a genetic disease, as well as against those who are genetically predisposed to certain diseases, disabilities or behaviours. Unfortunately, there is scope for genetic discrimination to become a wider social problem, in which case the term may come to refer to something all of these groups of people experience. Currently, however, it is fair to say that the most contentious issue relates to the rights of individuals with regard to employment and the insurance industry. In this context, the most relevant definition would include people with genetic predispositions to disease or disability.

In my opinion, the most important question to arise from consideration of the issue is this: should healthy individuals be treated differently as a result of their genetics? Instinctively, most of us would think “no”, but if we ponder the question in more detail we start to see that simple one-word answers do not suffice. The question in relation to employment is perhaps the most clear-cut. An employer may be less likely to hire someone if that person is known to have an increased risk of disease or disability at some point in their working life. However, it would be generally agreed that employment opportunities should not be limited on the grounds that an individual, currently in good health, has the potential to develop a disease in the future, particularly when there is no guarantee that an illness will ever arise. A theoretical exception to this would be if a genetic test revealed a characteristic that rendered an individual incapable of carrying out a job to required standards. As of now, however, there are no jobs for which genetic prerequisites exist. A more difficult question is whether there should be any jobs or diseases exempt from the general rule of disregarding future disease risk. For instance, should a potential pilot be allowed to train if a genetic test reveals a significantly increased risk of sudden cardiac death syndrome? Even if we are to accept such cases, it would seem that, at the very least, employers should be prohibited from using, or perhaps even obtaining, non-job-related information. The question in relation to the insurance industry is a little more complex. The whole premise of insurance relies on the accurate determination of risk.


In the US, the Genetic Information Nondiscrimination Act of 2008 was signed by President George W. Bush and prevents genetic discrimination by health insurers and employers


Should companies not be allowed to utilise all the information available in order to assess that risk as precisely as possible? There is an argument against “genetic exceptionalism” which asks why genetic information should be treated differently from other types of available medical data. After all, records such as family histories of disease are also, technically, a form of “genetic information”. Perhaps a more serious problem arises when individuals who could potentially benefit from a diagnosis avoid genetic testing out of fear that the results could be used against them. Restrictions on the obligation of individuals to reveal such data would therefore seem sensible, even if in future it becomes compulsory to disclose some genetic information to certain employers.

There are issues, too, with the reliability of genetic tests themselves. It is rare that a single genetic marker alone can clearly predict the risk of future disease or disability to an extent that could justify differential treatment. As science progresses, genetic sequence information is likely to play a smaller role as we become able to identify a more complex interplay of environmental and developmental factors when predicting disease risk. Unless tests can be independently verified as reliably assessing this risk, it would seem unfair to allow employers or insurance companies to insist on their use, or to have access to the test results in order to justify differential treatment.

So what can be done to protect individuals against potential genetic discrimination? One would think that the law would be an effective tool for protecting these individuals from unfair treatment on genetic grounds, in the same way that racial or sex discrimination is prohibited. Yet in the UK there is currently no specific legislation addressing genetic discrimination. Disputes of this nature would fall under the Disability Discrimination Act of 1995 or the EU Employment Directive of 2000, which prohibits “direct or indirect discrimination” on the grounds of disability. It has been suggested that these pieces of legislation would be sufficient to cover genetic discrimination cases, on the assumption that unfair treatment results from the perceived risk that an individual will develop a disability predicted by their genes; this is thought to effectively constitute discrimination “on the grounds of disability”. Furthermore, the UK Government and the Association of British Insurers entered into a voluntary “Moratorium” in 2005 which states, among other provisions, that customers should not be required to disclose the results of genetic tests for policies up to certain values, and that the relevance of predictive tests should be independently determined by the Human Genetics Commission.

However, there is no guarantee that this Moratorium will be renewed past 2019, and interpretation of existing legislation in the courts could take a long time to reach any meaningful conclusions. It would therefore be prudent, as recommended by the Disability Rights Commission and the Human Genetics Commission as early as 2002, for the original Disability Discrimination Act to be extended to include people with a “genetic predisposition to an impairment”, particularly with regard to insurance. It could be said that there is insufficient evidence of genetic discrimination occurring at present to merit the enactment of protective legislation. However, I would argue that the potential for discrimination is sufficient to justify pre-emptive protection; the law need not always be reactive. Concerns about discrimination do exist, particularly among people with family histories of hereditary conditions. Furthermore, as the cost of genome sequencing continues to decrease and research into the genetic basis of disease progresses, new discoveries will reveal more disease-associated gene variants. Consequently, genetic discrimination and its associated issues will likely become a concern of increasing importance to large sectors of the population. After all, it is likely that most of us have some risk factors hidden in our genes. It would therefore be wise to act now for the protection of future generations.


Katherine Dudman is a 3rd year Biochemistry student at King’s College.

Genome sequencing is getting ever cheaper. The first ‘thousand dollar genome’ was delivered by Illumina in 2014, with 30x coverage





Regulation and Foresight
Harry Lloyd ponders our duty to think ahead of technological progress


As we marvel at the latest gadgets, technology is already working behind the scenes to bring us the ‘Next Big Thing’. It has a sly habit of developing rapidly, but never making leaps so big that we collectively stop to think about where it is all headed. Like a toad in a slowly heating bath of water, we might not know we are being boiled until it is too late. In some areas, like medicine, tough codes of ethics force researchers to think before they innovate, but outside this bubble of hyper-self-awareness, the advent of new technologies themselves sets the pace of public debate. This afterthought approach leads us into hot water, and as with most of humanity’s inventions, misuse of new technology inevitably leads to conflict of some sort. To avoid these problems, thought must be put into the legal status of these developments long before we are buying them off the shelves.

But how do we go about drafting clear legal frameworks that can handle innovation without being outpaced by it? The problem seems to be one of predicting the future: the unenviable task of foreseeing every invention of the next hundred years. Fortunately, this is not quite what is required. As new technologies like drones and driverless cars appear, and old fields like space exploration develop, it may not be possible to guess at every individual breakthrough ahead of time, but the key ones can certainly be dealt with. A recent New Scientist article on transcranial direct current stimulation, a method of activating certain brain regions with electricity, mentioned the ethics of using it as a performance enhancer for sport. It shied away from the issue, calling it “a debate for another day”. This approach makes it too easy to delay the conversation indefinitely until something goes seriously wrong.


Creating an in-space fuelling economy by converting water in asteroids to fuel is one area that may need regulation in the far future


Our current struggle with climate change is a prime example. So why are we so bad at preparing for the future? According to researchers at Stanford and Princeton, it may be because we view our future selves in a similar way to strangers. Combine that with a strongly evolved tribal instinct, and it is not hard to see why we continue to struggle when it comes to planning ahead. We are all aware of how easy it is to put off important work, when we know hindsight will show us it should have been done earlier. Does this make it any easier to empathise with your future self the next time it happens? The answer is a resounding no, and this is where problems surface.

Space exploration is one of the technologies for which we struggle to plan. One of its greatest prospects is the extraction of metallic and organic resources from asteroids. Based on current estimates, there is enough mineral wealth in the solar system to supply our current requirements for tens of millions of years. Yet the most wide-reaching document we currently have, the United Nations Outer Space Treaty, is decidedly unconstructive. One of the declarations that lays its groundwork explicitly states that “no one nation may claim ownership of ... any celestial body”. It is difficult to see how any kind of commercial expansion into outer space can happen without the basic property rights this declaration bans. Given that the lead in this area is currently being taken by private enterprises like Blue Origin and Virgin Galactic, the existing laws may well prove a chokehold on development until they are revised or simply ignored. Luxembourg and the United States have already begun this process, with President Obama signing the Commercial Space Launch Competitiveness Act last year. In direct contradiction of the United Nations Declaration, this gives U.S. firms rights to “possess, own, transport, use, and sell” any asteroid resources they can extract. If these opposing views came to court over a real issue, a lengthy legal battle would no doubt ensue. If we are able to settle these matters before the necessary technological advances are made, this simply won’t be an issue. Meanwhile, Luxembourg has declared significant state-sponsored benefits for companies in the space industry, including a 45% rebate on any research and development expenses.

Easter 2016


Easter 2016

While the space tourism industry will no doubt be lucrative, no business is going to commit itself to the outlay required to prepare an asteroid for mining if there is no guarantee of ownership. To counter this confusion, the International Institute of Space Law, a global consortium of experts from space-faring nations, has clarified its position on the subject of property. While it does not see the United Nations’ position as a ban on private acquisition of space resources, it has called for its thorough reworking in search of “legal certainty in the near future”.

Closer to home, the increasing numbers of civilian drones and our steady progress towards the driverless car present more immediate problems. The first remote-controlled drone saw daylight 100 years ago across the English Channel, yet even now we are still waiting for a government strategy, promised for late 2016, to improve drone safety in the UK. There are now models on the market that can carry over ten kilograms and fly for more than thirty minutes. It takes no great leap of imagination to see that these could wreak havoc in the wrong hands. The hobbyists’ market, too, represents a time bomb. In the last year there were thirty mid-air incidents involving drones, up from six in 2014. In December alone, five of these were classed in the highest risk category, including near misses involving a Boeing 737 at Stansted and a Boeing 777 at Heathrow. In April, one hit a British Airways plane, albeit harmlessly. Drones currently require no training to use, nor do they need to be registered, and despite the number of incidents there have only been two convictions for dangerous flying. One involved entering restricted airspace above a nuclear submarine base, the other a fly-over of Westminster. Again, we can look to the US for a way forward. On the 21st of December 2015, anticipating large numbers being given as Christmas gifts, the Federal Aviation Administration introduced a mandatory registration system for all drones, with some 300,000 accounted for by the end of January 2016. This instils some responsibility in owners, making them easier to trace in the case of an accident or of misuse. The question remains whether this should be brought back to the point of sale, with every new owner registered when they buy, to ensure no new pilots slip through the net. The regulatory issue is especially pertinent for Cambridge, where Amazon opened a research centre in 2014 to explore the possibility of replacing traditional delivery systems with drones. In this instance, the UK was chosen precisely because of our lax regulations.

The legislation we have in place for driverless cars seems reasonably developed by comparison. The insurance system is willing to cover them. They can be legally tested on Britain’s roads, with one site nearby at Milton Keynes. The government is envisioning a future where driverless cars enable all those who currently don’t have a licence – including the infirm, children and a third of women – to travel freely. The question of liability in the case of an accident has even been partly addressed, with a California ruling stating that the car itself can be the driver. This opens the way for companies like Google to assume liability in case of an accident, something they have always said they would do. As such, responsibility can be removed from the owner, avoiding the issue of them being blamed for a crash caused by the car. The problem here lies more with the specifics of the algorithms the cars will use in the case of accidents. In I, Robot, the film spun out of Asimov’s groundbreaking short story collection of the same name, Will Smith’s character is saved from drowning instead of a young girl. The android that rescues him calculates that his chance of surviving is 45% and the girl’s 11%, deciding on balance to save him. This is known in ethics as a trolley problem, and has been hotly debated since the late 1960s. We might imagine a child running in front of a car carrying five people. If the car swerves and crashes, it puts the occupants in danger, but if it does not, the child will be hit. The algorithms used in situations like this will need to be transparently developed and relentlessly tested before driverless cars are allowed to take to the road.

Maybe it takes a pessimist to recognise the potential for danger in our creations. While affected industries may cry foul at regulatory measures if they slow progress, getting people to think about the potential impacts can only be a good thing. When the risks are at best loss of life and at worst, in the case of space exploration, international tension on a scale not experienced since the Cold War, the incentive must be found to look before we leap.

Driverless cars could enable those without a licence to travel freely

Harry Lloyd is a 3rd year Natural Scientist studying Chemistry at Emmanuel College.



Science, Mathematics and Poetry


Robin Lamboll wades into an age-old battle

‘The aim of science is to make difficult things understandable in a simpler way; the aim of poetry is to state simple things in an incomprehensible way. The two are incompatible’ – Paul Dirac

‘A poem is that species of composition which is opposed to works of science, by proposing for its immediate object pleasure, not truth’ – Samuel Taylor Coleridge

“To describe an equi–
–lateral Tri–
–A, N, G, L, E.
Now let A. B.
Be the given line
Which must no way incline;”
– Samuel Taylor Coleridge

“These transient facts,
These fugitive impressions,
Must be transformed by mental acts,
To permanent possessions.
Then summon up your grasp of mind
Your fancy scientific
Till sights and sounds and thoughts combine
Become the truth prolific”
– James Clerk Maxwell


The battle lines are drawn up, epitomised by the first two quotes beside this article. But the speakers’ lives themselves undermine the dichotomy they try to paint. Dirac was very much a theoretical physicist, and had little regard for the complexity of real data compared to an elegant mathematical equation. “It seems that if one is working from the point of view of getting beauty in one’s equations, and if one has really a sound insight, one is on a sure line of progress. If there is not complete agreement between the results of one’s work and experiment, one should not allow oneself to be too discouraged,” he wrote in an article in Scientific American. So, according to Coleridge’s definition, his mathematics was more poetry than science. Coleridge, too, overstates his opposition to science: he was well known to attend science lectures, to “renew my stock of metaphors.” In terms of maths, one of Coleridge’s less-known works is a poem written to his brother, describing in rhyme how to construct an equilateral triangle. He pleaded forgiveness: “In the execution of it much may be objectionable… I have three strong champions to defend me against the attacks of Criticism: the Novelty, the Difficulty, and the Utility of the Work.” He was not being modest: it featured lines such as those to the left, and it is not amongst his more famous works. (Also, his request that the line “must no way incline” is entirely unnecessary mathematically.) He was also exaggerating in his claim of novelty: as well as number-heavy hymns from ancient Sumeria, we know there was mathematical poetry in ancient Greece.

Archimedes’ Cattle Problem, reputedly written by Archimedes in 251 BC, was phrased as a poem. This maths problem asks the reader to calculate the total number of cattle belonging to the sun god Helios, given a list of fractions relating the numbers of cows and bulls of different colours. There are not enough simultaneous equations to give a single solution, but a set of solutions can plausibly be calculated. The poem then asks if the reader can solve the same problem with extra conditions requiring some answers to be square or triangular numbers. “If thou discovers the solution of this… go and exult as a conqueror… thou art by all means proved to have abundant knowledge of this science”. You should indeed – merely writing down the answer is very impressive. Computer-aided efforts found that the smallest solution to this problem has over 200,000 digits, and requires about half a mile of paper to write out in full. This is considerably more cattle than could fit in the observable universe.
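To give a flavour of the problem, the poem’s opening conditions on the bulls are usually rendered as linear relations along these lines, with W, B, D and Y counting the white, black, dappled and yellow bulls (the cows obey a similar set, and exact phrasings vary between translations):

```latex
\begin{align}
W &= \left(\tfrac{1}{2} + \tfrac{1}{3}\right)B + Y \\
B &= \left(\tfrac{1}{4} + \tfrac{1}{5}\right)D + Y \\
D &= \left(\tfrac{1}{6} + \tfrac{1}{7}\right)W + Y
\end{align}
```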

Better remembered for uniting science, mathematics and poetry is James Clerk Maxwell. To scientists, he is the man who codified the many different laws of electricity and magnetism into one set of four simple equations: Maxwell’s Equations.
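In modern vector notation (a form due to Oliver Heaviside rather than Maxwell himself, written here with SI constants), the four equations read:

```latex
\begin{align}
\nabla \cdot \mathbf{E} &= \rho/\varepsilon_0 \\
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \times \mathbf{E} &= -\,\partial \mathbf{B}/\partial t \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0\, \partial \mathbf{E}/\partial t
\end{align}
```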

He made many contributions to the theory of gases and heat, but is also well known for taking to verse, both to illustrate scientific principles and to mock other scientists and poets. His parody of Robert Burns’ ‘Comin’ through the rye’, called ‘Rigid Body Sings’, has achieved a level of general popularity in spite of its (lightly) scientific nature. But his ‘Tyndallic Ode’ (that is, ode to Tyndall) manages both to be beautiful in words and to mock the idea that words are the proper language for dealing with nature. The Tyndall in question favoured a more descriptive, experimental approach to physics, whereas Maxwell preferred a heavily mathematical one. They were also on opposite sides of the evolution debate, with Maxwell supporting creationism in spite of its growing unpopularity in the scientific establishment.

So how can poetry best express the relationship between the complex data of science and the simplicity of maths? Maxwell agreed with Dirac that sometimes beauty and simplicity can be more important than truth, in his case for explicitly religious reasons. Dirac, an outspoken atheist, made similar statements very much tongue-in-cheek: “God used beautiful mathematics in creating the world.” Maxwell’s sense of irony was reserved for his poems, but here the rich stock of metaphor his scientific knowledge had given him allowed him modes of expression other poets lacked. The wars between scientific truth, beauty and the pure categories of maths have rumbled on through the ages, as much as the wars between sincerity and irony. But stop to look for a moment, and we see that humans may claim to fight for one, but really embody different personages by turns. The conflicts are real, but very much internal – not science and arts, two cultures apart, but three camps that people constantly move between.

Robin Lamboll is a PhD student in the Department of Physics.


Beware Publishing’s ‘Dark Side’
Michelle Cooper & Priyanka Iyer shine a light into its depths

Postdoc research complete. Data analysed. Discussion developed and conclusions set. You commence your search for an appropriate journal in which to publish your key findings, and you happen across ‘Advances in Aerospace Science and Technology’. Sounds legit. You develop your paper based on the journal’s editorial guidelines and submit it with your open-access publication fee. Kick back and relax; you nailed it. A month passes and you schedule a meeting with your supervisor to share the good news: your manuscript has been accepted and will be published within four weeks. The publisher was so impressed that they didn’t even request any revisions. Your supervisor, however, is not so impressed. She proceeds to explain that it is highly likely that you have been hoodwinked. Three years’ worth of research published in, and paid to, an open-access ‘predatory journal’. Welcome to the parallel universe of scientific publishing, also known as the Dark Side.

Predatory or fake journals compromise the peer review procedure and tarnish scientific publishing. They dilute the credibility of academic literature and, if you are caught unawares, may even have implications for your career. Such journals operate under the ‘author-pays’ model and can publish literature that has no scientific relevance. In some cases, papers even include large amounts of plagiarised content. Young scholars can easily be drawn in and deceived, as most of these journals look extremely legitimate. Peruse their websites and some even list prominent board members or field experts; in many cases, these individuals are not aware that their identities and images are being flaunted on the journal’s website.

One could argue that academia’s increasing demand for both frequent and high-impact publications to secure postdocs and future professorships is facilitating a flourishing market for predatory journals. Meanwhile, the extortionists behind the facade profit from naive academics seeking to publish in journals with falsely generated high impact factors. In addition, predatory journals often promise rapid publication, which can seem alluring to young or inexperienced researchers. Unfortunately, these publications are on the rise, flooding academia with substandard literature whilst blatantly promoting scientific misconduct.


whilst blatantly promoting scientific misconduct. Needless to say, business in scientific publishing is booming.

So how do you recognise these miscreants, and where can you find more information? Jeffrey Beall, a librarian and associate professor, recognised the damage predatory publishers could do if allowed to progress unchecked. The result of his dedication to stamping out this activity and raising awareness among unsuspecting authors is Beall's List. In the six years since Beall compiled his first list, the number of publishers and journals recorded on it has increased exponentially. As of January 2016 it names 923 suspected illegitimate publishers, up from 18 in 2011. Moreover, there are now 882 questionable standalone journal titles, seven times the number first compiled in 2013. If all of this was not alarming enough, Beall's 2016 site includes two new lists: Misleading Metrics (publications with counterfeit impact factors) and Hijacked Journals, where counterfeit websites hijack the identity of legitimate journals and solicit payments for open access.

Publishing can be an arduous and frustrating process. To get by, keep calm, and use the force (or your common sense):
• Know your publications.
• Check submission dates against dates of acceptance and publication. The peer review process takes time; papers accepted and published in rapid timeframes should be treated as suspicious.
• Read papers from the journal in question to ascertain scientific credibility.
• Question impact factors and do further digging where necessary.
• Discover more about predatory publishing from the current literature.

A helpful figure was submitted by Mazières and Kohler (2005), who were accepted by a spam journal – pending payment...

Michelle Cooper and Priyanka Iyer are both MPhil students in Conservation Leadership.


Illustration by Oran Maguire



Programming Poetry

The PhD work of Jack Hopkins

When it comes to employing creative language, humans have a massive advantage over machines. They are able to construct grammatical, sensible sentences; they have a wide range of things to talk about; and they experience joy and heartbreak.

“The loved sweet music moments on his chain, sat whilst upon pale stain and glenn, the castle whence my page and pain, no soft to see the history of men.”

Although we cannot bestow emotion on machines, we can train them to construct sensible sentences by emulating biological nervous systems. Colleagues and I employed recurrent neural networks, trained on the entirety of Wikipedia, to learn a grammar of the English language. We then sampled the running network in a process analogous to transcribing its dreams. This allows us to generate sequences of words which, although interesting, are not yet enough to be considered poetry.
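None of the project's code appears in this piece, so the sketch below is purely illustrative: a tiny character-level recurrent network in Python with PyTorch, trained on a toy corpus rather than Wikipedia, then sampled one character at a time with each guess fed back in — the 'transcribing its dreams' step described above. The corpus, network size and training schedule are stand-in assumptions, not the actual setup.

```python
# Minimal character-level RNN sketch (illustrative assumptions throughout:
# toy corpus instead of Wikipedia, small GRU, short training run).
import torch
import torch.nn as nn

corpus = "the loved sweet music of the castle and the history of men " * 50
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)

# Training: predict each next character from the characters before it.
for step in range(200):
    i = torch.randint(0, len(data) - 65, (1,)).item()
    x, y = data[i:i + 64].unsqueeze(0), data[i + 1:i + 65].unsqueeze(0)
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Dreaming": sample a character, feed it back in, repeat.
h, x = None, data[:1].unsqueeze(0)
generated = []
for _ in range(200):
    logits, h = model(x, h)
    probs = torch.softmax(logits[0, -1], dim=0)
    x = torch.multinomial(probs, 1).unsqueeze(0)
    generated.append(itos[x.item()])
print("".join(generated))
```

Sampling from the full distribution with `torch.multinomial`, rather than always taking the single most likely next character, is what gives the output its dream-like variety; greedy decoding tends to collapse into repetitive loops.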

Poetry often creates aesthetic value using rhyme, rhythm and alliteration. However, in English, rhyme and rhythm cannot always be deduced by examining the written word itself.



Instead, we need to examine the sounds of the language. By training a neural network on the phonetics of existing poetry (39,000 lines in sonnet form) and allowing the model to 'babble' the sounds it learns, we can start to extract lines that bear many similarities to human poetry. Although not yet complex enough to capture the nuances of human creativity, the model learns all of the aforementioned poetic devices, and may lead us towards indistinguishable machine poetry in the future.
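The move from spelling to sound is the key step, so a hedged sketch may help. This is not the project's pipeline, and the helper functions are invented for illustration; it simply maps words to ARPAbet phonemes (with stress digits) using the CMU Pronouncing Dictionary via NLTK, after which rhyme and metre become patterns a sequence model can learn. It even agrees with the 'flow'/'follow' pairing in the sample verse below.

```python
# Illustrative only: spelling -> sound with the CMU Pronouncing Dictionary.
# Helper names (phonemes, stress_pattern, rhymes) are invented for this sketch.
import nltk
nltk.download("cmudict", quiet=True)
from nltk.corpus import cmudict

pron = cmudict.dict()  # maps a word to its list of pronunciations

def phonemes(word):
    """First listed pronunciation, e.g. 'flow' -> ['F', 'L', 'OW1']."""
    entries = pron.get(word.lower())
    return entries[0] if entries else None

def stress_pattern(word):
    """Digits on vowel phonemes give the stress contour, e.g. 'follow' -> '10'."""
    ph = phonemes(word)
    return "".join(c for p in ph for c in p if c.isdigit()) if ph else None

def rhymes(a, b, tail=2):
    """Crude rhyme test: same final phonemes once stress digits are stripped."""
    strip = lambda ph: [p.rstrip("0123456789") for p in ph]
    pa, pb = phonemes(a), phonemes(b)
    return bool(pa and pb) and strip(pa)[-tail:] == strip(pb)[-tail:]

print(phonemes("flow"), phonemes("follow"))  # ['F','L','OW1'] ['F','AA1','L','OW0']
print(stress_pattern("follow"))              # '10': stressed then unstressed
print(rhymes("flow", "follow"))              # True: both end in the 'OW' sound
```

A model trained on sequences like these, rather than on letters, can learn which line endings share sounds and how stressed syllables alternate, which is precisely what sonnet form demands.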

“Dost from their echoes in perpetual flow, white hands untorture sinks the numbered limbs. must even thee who hath no work follow, oh weary harp! her solace swims”

Photographs by Christiana Care, A. Currell and Lio, adapted by DreamScope



Toilet Papers

Andy Cheng pulls some stuff out from around the u-bend of science

Pressures produced when penguins pooh: calculations on avian defaecation. Polar Biology, 2003

Practice makes perfect: rectal foreign bodies. Emergency Nurse, 2008

Chemical processes in the deep interior of Uranus. Nature, 2012

An in-depth analysis of a piece of shit. PLoS Neglected Tropical Diseases, 2012

Hung Jury: Testimonies of genital surgery by transsexual men. Transgress Press, 2012

Factitious diarrhea: a case of watery deception. J Pediatr Gastroenterol Nutr., 2001

Guess Who's Not Coming to Dinner? Evaluating Online Restaurant Reservations for Disease Surveillance. J Med Internet Res., 2014

From urethra with shove: Bladder foreign bodies. A case report and review. J Am Geriatr Soc., 2006

Wax on, wax off: pubic hair grooming and potential complications. J Am Geriatr Soc., 2006

The nature of navel fluff. Medical Hypotheses, 2009





Weird and Wonderful

The life of an insect is certainly treacherous. Every movement and sound could announce the arrival of the next enemy ambush. Whilst they may not seem particularly delicious to humans, for birds nothing is as welcoming as the sight of a fat, juicy larva. Considering their perpetual peril, it is unsurprising that bugs have developed an enormous range of tactics to help them avoid becoming the next meal. Indeed, insects offer some of the most extraordinary and exquisite examples of defence and deception in nature.


Masquerading Moths

The hawkmoth caterpillar sits on a branch. At three inches long it would make a delicious meal for any hungry bird, but it has a trick up its sleeve. It spies a threat and retracts its legs, extending the end of its body and dangling ominously from the branch. Within seconds its aim becomes clear, as it transforms itself into an incredibly convincing, menacing-looking snake. A terrifying sight for any predator! This deception would fool any passing bird, with the caterpillar imitating even the shape of the snake's head, complete with glaring, forbidding eyes and a bright yellow underbelly. The otherwise extremely vulnerable caterpillar has duped its enemy and survives another day. Imitations like this are called 'masquerades'. Surpassing conventional colour-matching camouflage, this disguise allows prey to remain hidden even in plain sight. Entomology, the study of insects, boasts countless examples of masquerade, and nature is littered with insects convincingly resembling sticks, leaves or broken pieces of bark. Some even go a step further, performing dances to make themselves appear as if they are a leaf blowing in the wind. However, undoubtedly the most fantastic disguise is that of a moth whose appearance resembles a pile of bird poop. That really is one way to ensure no one goes near you!

Cheating Flies

Now imagine you are Henry Walter Bates, a scientist of Darwin's era, trekking around the Amazon carrying little more than raw curiosity and an exquisite collection of facial hair. Whilst surveying the rainforest you notice the remarkably bright colours and patterns on the wings of butterflies. Many of these patterns seem to be shared by several individuals. However, on closer inspection it becomes evident that these species, although appearing almost identical, are not even closely related. In 1861, Bates published the theory now known as Batesian mimicry. Instead of pretending to be inanimate objects, Batesian mimics avoid attack by imitating a dangerous species, despite being otherwise utterly defenceless. You may have been a victim of this deceit yourself, running screaming from your picnic because a wasp developed a liking for your sandwiches. Or was it a wasp? Many harmless hoverflies, with their yellow and black banding, make extremely convincing mimics of stinging wasps and bees.

Sexy Orchids

However, it turns out that insects are not all that smart themselves, and many fall victim to some of nature's most bizarre tricks. Thynnid wasps are deceived by an especially embarrassing form of mimicry. The females of these species are unfortunate in that they are flightless; not great, considering the males can quickly buzz off to find food. To find a mate (and, more importantly, access to some food), a female climbs the nearest branch and releases a pheromone, attracting a male who will feed her in return. The romance is short-lived. The female lays her eggs in unsuspecting, helpless beetle larvae and leaves. Soon it is spring again, and a male emerges from a hole in the ground after hibernation in search of his one true love. He thinks he spies a particularly attractive female on a nearby plant. Sensing her pheromones, he readily investigates. However, this match is far from perfect. In fact, she seems entirely inanimate. Bizarrely, it turns out she is not even a wasp, just an extremely crafty orchid! As a method of pollination, many orchids have evolved to visually resemble female wasps, and even to emit identical pheromones. When the duped, confused and broken-hearted male moves on, he may be tricked again, landing on another orchid and transporting the pollen stuck to his abdomen. A tremendous tale of trickery and deceit!

Illustrations by www.alexhahnillustrator.com




Write for us! Feature articles for the magazine can be on any scientific topic and should be aimed at a wide audience, normally 1000-1200 words. We also have shorter news and reviews articles. Please email managing-editor@bluesci.co.uk with your articles and ideas!

For their generous contributions, BlueSci would like to thank: Churchill College and Jesus College.

If your institution would like to support BlueSci, please contact enquiries@bluesci.co.uk

roll me a six...

and again...and again...and again...

It would be easier if you had a weighted die, wouldn't it? But is that enough? What surface would you roll it on? Do you give it any spin? How fast? How would you win every time? There is no answer at the back of the book. Discuss your approach to this, and the real problems you could be solving at TTP every day: explore@ttp.com

Apply yourself. Explore TTP. www.ttp.com


Eager to move on up in your career?

The Naturejobs Career Expo is the largest career fair and conference focused exclusively on the scientific world. Find your nearest expo: naturejobs.com/careerexpo

Naturejobs is the global jobs board and career resource for scientists. We can help you throughout your job search and your career:

Find a job: Search jobs, set up job alerts and research employers, or search for jobs on-the-go with our free mobile app.

Help employers to find you: Upload your CV and make your profile searchable to employers.

Meet employers in person: Attend the Naturejobs Career Expo for invaluable career advice and to meet recruiters.

View science careers advice: Keep up with the latest careers articles, interviews and more via our news and resources section or by subscribing to our newsletter.

Ask us questions: Search for "Naturejobs" on your preferred social media platform or contact us via the Naturejobs blog.

www.naturejobs.com


