theGIST Issue 8




2017 has been another successful year at theGIST, with the magazine continuing to grow beyond our wildest expectations. We now have over 350 contributors! This means that theGIST isn’t just one of the UK’s original student science magazines, it has also cemented its place among the biggest and the best. Despite our rapid growth and recent achievements, we haven’t lost sight of our main goal. We continue to accurately and passionately communicate science and technology to the students of the universities of Glasgow and Strathclyde, and beyond. We hope that our articles help people to make informed decisions - whether it be about the latest anti-ageing treatments, climate change, or the potential threats associated with artificial intelligence and big data. As scientists, we have a responsibility to ensure that our work is accessible and understandable, and in this age of ‘alternative facts’, our role as communicators is more important than ever. Lately, it seems like we have been questioning every report, result, and statistic churned out by the mass media - GISTers, we must work together to overcome this slough of misinformation! In this issue, theGIST tackle some questions of our own. Firstly, just how do you make a moon? Scott Jess gives us a whistlestop tour of our satellite’s creation. On page 18, James Burgon makes us wonder how we would feel about never owning anything again, and makes us pause to consider the consequences of our consumerism. He looks at the difficulties in driving global policy change, and reminds us that together, we have the power to impact these decisions. On a lighter note, why do our earphones always end in a tangled mess? Richard Murchie solves that mystery with his guide to knot theory. We hope that this issue captures your imagination.

We have a lot of people to thank, without whom theGIST would be nothing. Massive thanks go to our previous editors, Aidan and Alisha - we hope to continue your good work. We’d also like to thank the board for the blood, sweat and tears that went into printing this magazine. Thank you to all of our wonderful contributors who have written, illustrated, filmed, and podcasted for us this year. Last but definitely not least, we want to thank you for picking up this copy of theGIST. We hope you love it as much as we do! The future of theGIST will be in the hands of the new board members elected for 2018; we’re sure there will be plenty of exciting changes ahead and we hope that we will still be at the helm.

With much love,
Gabriela De Sousa & Katrina Wesencraft

SCIENCE
04 Glasgow Science Update
08 Digital Pills: Big Data or Big Brother?
10 LAWless: Should We Ban Killer Robots?

SCIENCE & MATHS
14 Mathematical Knots: It's Not What You'd Expect
16 How To Make A Moon

SCIENCE
18 A Loopy Way to Save the World
20 Young Blood Defies Aging
22 The Fast Way To Cure Diabetes
24 When Brain Glue Fights Back
26 A Message From Your Genes: Quit Smoking Now
28 Modern Genomics, Ancient DNA

Editors-in-Chief: Gabriela De Sousa and Katrina Wesencraft
Submission Editors: Derek Connor and Costreie Miruna
Head of Copy-Editing: Kirsten Munro
Layout: Gabriela De Sousa and Sara Jackson
Art: Sara Jackson, Gabriela De Sousa, Dan Templeton, and Cully Robertson (Cullor Illustration)

Glasgow's largest science magazine


The LIGO and Virgo collaborations have been awarded "Breakthrough of the Year 2017" by Physics World and Science Magazine for their detection of two neutron stars merging. Glasgow's Institute for Gravitational Research were joint recipients of various accolades for the detection. This breakthrough was the first event of its kind to be confirmed independently via electromagnetic means.

The Engineering and Physical Sciences Research Council has announced £14 million funding for research projects that “take new approaches to data science", led by four UK universities including the University of Glasgow. Researchers will collaborate with industry and the public sector to address challenges in various fields including health, security and the environment.

European Southern Observatory via Flickr

In January, First Minister Nicola Sturgeon visited the University of Glasgow's Imaging Centre of Excellence (ICE) at the Queen Elizabeth University Hospital. Sturgeon praised the centre, stating "The ICE is pioneering the use of precision medicine – helping develop new treatments for patients facing serious conditions such as strokes, brain tumours, multiple sclerosis and dementia."

Researchers in the Institute of Sensors Signals and Communications at the University of Strathclyde are using machine learning techniques to develop an algorithm that is able to automatically detect and evaluate problems with underwater oil and gas pipelines. Currently, video footage is manually examined for leaks and potential dangers, an approach which is both slow and susceptible to human error. This algorithm has been shown to improve the speed and accuracy of hazard identification, and can maintain high performance even with poor quality footage.


A report investigating the poor development of UK wave power technology over the past two decades has been released by the University of Strathclyde and Imperial College London. Contributing factors included policy shortcomings, a lack of collaboration, and a failure to grasp the magnitude of the challenge of harnessing wave energy. However, the researchers remain optimistic that wave power may become an established alternative energy source in the future and have made policy recommendations based on their findings.


Last year, particle physics researchers at the University of Glasgow played a leading role in the discovery of a new composite particle at the Large Hadron Collider beauty experiment (LHCb). Situated at CERN, LHCb investigates differences between matter and antimatter. The particle is a type of baryon called Ξcc++, and CERN physicists have been awaiting its discovery for a long time; it adds credence to the Standard Model of particle physics, which predicted its existence.

WIDEHAUS via Flickr

Oliver Clarke via Flickr

The Associate Director of University of Glasgow’s Institute for Gravitational Research, Professor James Hough, has been awarded the 2017 Gold Medal of the Royal Astronomical Society "for his seminal contribution to the science of gravitational waves". Professor Hough's work is considered to have been integral to the recent discovery. The University of Edinburgh also chose to recognise Professor Hough for his achievements, awarding him an honorary D.Sc. degree in December last year.


Jo Me via Flickr

For the first time, a tissue engineered bone graft has been generated using technology originally developed to detect gravitational waves. Scientists from the Universities of Glasgow, Strathclyde, the West of Scotland and Galway have managed to grow three-dimensional mineralised bone samples in the laboratory using advances in ‘nanokicking’; a method of delivering precise mechanical stimulation to cells in culture.

Health Data Research UK has awarded £30 million to a large partnership of UK universities, including Glasgow and Strathclyde, supporting healthcare collaborations using data science methods. The researchers will also work in parallel with the NHS, harnessing vast amounts of patient data in order to make improvements in areas including risk assessment, diagnostics, and treatment plans.

A two-year-long University of Glasgow research project on the vulnerability of Scotland's coastline to climate change will commence this year. A fifth of Scotland's coast is deemed to be at risk of erosion due to rising sea levels. The project, funded by the Centre for Expertise in Water, will map increasing erosion rates and assess their impact on flooding risks. The researchers aim to make forecasts about the damage we can expect to see, as well as develop strategies to manage it.

Curated by Anna Duncan

Nick Hoffman via Flickr



There’s something spooky about a tablet that knows it’s been swallowed. However, last month the FDA approved just that1. The new pill in question, Abilify MyCite, has a tiny sensor inside that can digitally track whether or not it has been ingested, transmitting this message to a patch worn by the patient. From this, it can be connected to an app which monitors drug use. While this sounds like something straight out of ‘Black Mirror’, there is some logic behind the approval, and that’s drug adherence.

Adherence, or taking your medication as prescribed, is something people are not always good at. In fact, a report by the World Health Organisation found that the average person only took half of their prescribed drugs for chronic diseases2. Reasons for this are pretty broad: you might have a complicated regimen that is hard to stick to – if you’ve ever had antibiotics that need to be taken ‘four times a day, at least one hour after and two hours before eating’ then you know what I mean; if you live in a country where you don’t have access to universal free healthcare then you might not be in a financial position to collect meds as often as you should; or you might suffer from side-effects – even mild side-effects can be a burden so you

might skip doses in the hopes you’ll reduce them; you might not feel ill. That one sounds a little paradoxical – why take drugs if you're not ill? If you have high blood pressure or high cholesterol this would normally be picked up in clinical tests and drugs would be prescribed to reduce your risk of having a heart attack or stroke, but it’s not something that you can feel or will be aware of everyday. These are just some of the reasons that people can’t or won’t take their medication as prescribed, and it is by no means an exhaustive list. So how does tracking it help? Firstly, it helps the patient. Via the app, people who take Abilify MyCite have the ability to keep tabs on their medication usage in realtime. This means when it gets to midday and you wonder ‘did I leave my hair straighteners on this morning?’ and, ‘did I take my medication?’ your app can help you with 50% of your problems. Sometimes you genuinely can’t remember. You’re human and imperfect and even the most organised among us forget things occasionally. With self-aware drugs and fancy phone apps, that could be a thing of the past.

Tracking in this way can also help doctors. When a patient insists that

they’ve taken their medication and it’s simply not working, their doctor will now be able to check if they have actually stuck to their drug regimen before altering the dosage. This is really useful – people don’t lie to their doctors for malicious reasons, usually they just want to impress and earn a gold sticker. There is a phenomenon known as “white coat adherence” which is basically when people start feeling guilty in the days leading up to an appointment and begin sticking to their prescription properly. After a few days of diligently taking your drugs as prescribed, when your doctor asks how you’ve been doing with your medication you don’t feel like you’re lying when you say it’s going great. It also skews direct tests that measure the amount of a drug present in the blood or urine, as the results won’t be representative. Increasing a dose unnecessarily can have serious disadvantages. It increases costs to the already stretched health service and, if for some reason the patient then follows their prescription more closely than previously, they could risk taking too much. This is especially pertinent in the case of drugs with a narrow therapeutic range, or more simply, drugs that have a small window between the dose needed to be effective and the dose which could lead to complications or toxicity. Warfarin, a drug used to thin the blood in patients who are at risk of a heart attack or stroke, is a good example of a drug which can cause serious complications, in this case bleeding if too much is taken. The other group who could stand to benefit from digital pills are researchers. Adherence is a


notoriously tricky thing to study. Self-reporting from patients might not be all that accurate. Not just because of the ‘impress your doctor complex’ I mentioned previously but also due to genuine human forgetfulness. A more rudimentary method of monitoring drug adherence is pill counting where people are asked to bring back any remaining drugs to their doctor to be counted. Research has found that this is hands-down the worst method of measuring adherence, due to people pill-dumping or stockpiling their meds. Removing patients from the equation, many researchers rely on databases of prescribing records to paint a picture of drug adherence. In short, researchers access records generated by the pharmacy when drugs are given to patients, and analyse repeated prescriptions over time to model how good adherence is likely to be. For example, if you have a drug that needs to be taken once a day, and you collect 28 tablets, you should be collecting your next prescription about a month later - if patients collect their prescription outside this timeframe, it is likely their adherence is poor. There are still many flaws with this method, particularly as it’s still impossible to tell if drugs that are collected on time have been taken appropriately.
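To make the refill-record idea concrete, here is a minimal sketch of the kind of calculation involved, using a standard adherence proxy known as the medication possession ratio. The dates, the once-daily 28-tablet packs and the 80% cut-off are purely illustrative assumptions; real studies use far more careful measures and adjust for things like overlapping supplies and hospital stays.

```python
from datetime import date

# Hypothetical refill records for a once-daily drug dispensed in 28-day packs:
# (collection date, days of supply dispensed)
refills = [
    (date(2017, 1, 1), 28),
    (date(2017, 2, 6), 28),   # collected a week 'late'
    (date(2017, 3, 6), 28),
    (date(2017, 4, 20), 28),  # a long gap suggests missed doses
]

# Medication possession ratio (MPR): days of supply collected, divided by the
# days elapsed between the first collection and the end of the final supply.
total_supply = sum(days for _, days in refills)
period = (refills[-1][0] - refills[0][0]).days + refills[-1][1]
mpr = total_supply / period

print(f"MPR = {mpr:.2f}")  # about 0.82 for the dates above
print("likely adherent" if mpr >= 0.8 else "likely non-adherent")
```

Even a perfect score on a measure like this only shows that the drugs were collected on time, not that they were actually taken.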

It’s safe to say that a pill that knows it’s been taken would be a big improvement. No doubt, it would produce huge amounts of data but, if used well, it could be really enlightening. Researchers could look at adherence in a time-sensitive way, which would be far more informative than anything available to date. It could help to identify predictors – are people less likely to take their medication if they are in a certain age group? Or if they are taking drugs for a specific illness? Or perhaps the particular drug prescribed could make a difference? If we can work out what sort of things predict adherence, we can start to tailor treatment to support individuals.

However, as with any new drug, there are issues. Abilify MyCite is a drug prescribed for schizophrenia, which is a serious psychological condition and can be associated with episodes of paranoia. The FDA guidelines do say that doctors should only use the new drug if patients are ‘willing and capable of using the app’ and that patients can block their doctor from seeing it at any time. There is a danger that patients may feel coerced into taking it, or that those who refuse may become suspicious of their regular pills. If people begin to worry that they are being tracked without agreeing to it, it may lead back to square one: further non-adherence. Any of us could be nervous of the level of insight this could have into our daily routine if the data isn’t secured or used appropriately. Introducing this to patients who are more prone to episodes of paranoia could be a dangerous gamble.

As modern medicine moves toward an age of big data, the security of patient information becomes increasingly important. In the US, patients who rely on health insurance could risk having their adherence data used against them if it were accessed by insurance companies. Under the new data protection legislation, GDPR, coming into effect in the EU early next year, maintenance of this sort of data will be more stringent than ever, meaning researchers and doctors will have to take extra precautions to protect it and ensure it is used in an ethical way. The WHO report2 suggested that improving adherence may be more important for public health than any major pharmacological breakthrough. If we already have drugs that work, then taking them properly might be a huge step towards improving health. If digital drugs can help solve that, then maybe it’s worth risking a date with Big Brother.

This article was written by Kirstin Leslie. It was specialist edited by Derek Connor and copy-edited by Katrina Wesencraft. Artwork by Sara Jackson.

References:
[1] https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm584933.htm
[2] WHO: Adherence to Long-Term Therapies: Evidence for Action. 2003


Peter Ladd via Flickr

With several countries investing in the development of lethal autonomous weapons (LAWs), many believe that we are in the midst of an artificial intelligence arms race. This technology has the power to transform modern warfare, but should we allow the machines to determine (and attack) their own targets? And who is really responsible when they pull the trigger?

The term 'artificial intelligence' was first coined in 1955 by Professor John McCarthy prior to the famous Dartmouth conference of 1956. The task of developing software that could mimic human behaviour was more complicated than he first imagined, and progress in the field was slow due to the laborious programming required. In the last ten years we have seen an AI explosion, with advances in machine learning techniques and huge improvements in computing power prompting massive investment from big tech firms. Today, AI is everywhere and affects everything from how you shop online to how you receive medical treatment. It can make your daily routine easier, your work more productive, and can even unlock your phone using your face. Many of us already have virtual assistants (think Siri, Cortana or Alexa), and it isn’t a massive leap to envision humans living alongside intelligent machines within our lifetime. For decades, the idea of a future with robotic servants has permeated popular culture, but human control is usually the key to this fantasy. For some, a robot uprising has become a genuine fear.

Autonomous Machines

A fundamental aspect of AI is that machines possess the ability to make their own decisions; however, the training of AI has traditionally been carried out under close human supervision. Algorithms are trained with carefully selected training data; they make decisions more quickly and with fewer errors than we can, but essentially the data you provide ensures that the machines make the decisions that you want them to make. The application of ‘deep learning’ may change that. Since the 1950s, programmers have attempted to simulate the human brain using a simplified network of virtual neurons. However, it is only recent advances in computer power that have enabled machines to train themselves using complex neural networks without human supervision1. Neural networks still come nowhere near the complexity of the human brain but, despite this, many experts believe that this form of deep learning will be the key to developing machines that think just like humans2. Google’s AI system AlphaGo recently made headlines when it defeated Ke Jie, the Go world champion. This ancient strategy game is believed to be the most complex game ever devised. For comparison, when playing a game of chess you will typically have 35 moves to choose from per turn - in Go this number is almost 200. This achievement represents a significant leap forward as, in the ‘90s, AI experts predicted that it could take at least 100 years until a computer could beat a human at Go3. With AlphaGo, Google engineers have used neural networks to create the first AI displaying something akin to intuition. However, the feature that roboticists are trying to capture is autonomy - the ability to make an informed decision, free from external pressures or influence - although, as it stands, even autonomous robots are only capable of making simple decisions within a controlled environment.
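As a toy illustration of what 'a simplified network of virtual neurons' means in practice, here is a minimal sketch: a handful of artificial neurons learning the XOR function by repeatedly nudging their connection weights. The layer sizes and learning rate are arbitrary choices for the example, and it bears no relation to AlphaGo or any production system.

```python
import numpy as np

# A toy 'network of virtual neurons': 2 inputs -> 4 hidden neurons -> 1 output,
# trained on XOR with plain gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20000):
    # Forward pass: each 'neuron' sums its weighted inputs and squashes them.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight a little to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0] as the network learns XOR
```

Nothing here is told what XOR is; the behaviour emerges from the training data alone, which is the sense in which the data you provide shapes the decisions the machine makes.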


While AI can now outperform humans in quantitative data analysis and repetitive actions, we still have the advantage when it comes to judgement and reasoning. Science fiction has taught us to fear a robot uprising, often with humanoid robots that walk, talk, and think just like us. What if they refuse to obey orders? This is particularly concerning if those robots are armed and dangerous.

Killer Robots

In August 2017, the founders of 116 robotics and AI companies, most notably Elon Musk (Tesla, SpaceX and OpenAI) and Mustafa Suleyman (Google DeepMind), signed an open letter calling for the United Nations to ban military robots known as lethal autonomous weapons (LAWs). As it stands, there is no internationally agreed definition of a fully autonomous weapon; however, the International Committee of the Red Cross stipulates that LAWs are machines with the ability to acquire, track, select and attack targets independently of human influence. Also calling for a total ban on LAWs is The Campaign to Stop Killer Robots, an international advocacy group formed by multiple NGOs, which believes that allowing machines to make life and death decisions crosses a fundamental moral line. According to their website, 22 countries already support an international ban and the list is growing4.

Despite growing concerns, the US, Israeli, Chinese and Russian governments are all ploughing money into the development of LAWs. Lethal autonomous weapons may sound like science fiction but the desire to create weapons that detonate independently of human control is far from new. Since the 13th century, landmines have been used to destroy enemy combatants, and while they are unsupervised, they aren’t autonomous by modern standards. Landmines detonate indiscriminately (typically in response to pressure), rather than as an active decision made by the device. Developing LAWs for offensive operations is desirable as

governments look to increase their military capabilities and reduce the risk to personnel. However, campaigners are worried that this potential risk reduction will lower the threshold for entering into armed conflict. There is also concern that when fully autonomous robots are placed in a battle environment where they are required to adapt to sudden and unexpected changes, their behavioural response may be highly unpredictable. Current autonomous weapons tend to be used for defensive, rather than offensive purposes, and are limited to attacking military hardware rather than personnel. The Israeli Harpy is one such lethal autonomous weapon, armed with a high-explosive warhead. Marketed as a ‘fire and forget’ autonomous weapon5, once launched it loiters around a target area then identifies and attacks enemy radar systems without human input (however, its attack mission can be overridden). It is believed that these LAWs, known as loitering munitions, are already being used by at least 14 different nations. NATO suspect that drones capable of functioning without any human supervision are not currently in operation due to political sensitivity rather than any technological limitations6. The US is already developing autonomous drones that take orders from other drones. Department of Defence documents reveal that this ‘swarm system’ of nano drones is called PERDIX. The drones can be released from, and can act as an extension of, a manned aircraft but they can also function with a high degree of autonomy7. These autonomous weapons have learned the desired response to a series of scenarios, but what if they continued to learn? Perhaps one day, advances in machine learning techniques will lead to the development of weapons that are capable of adapting their behaviour. With all the political caginess, it’s difficult to say for certain that this technology isn’t already in development. Greg Allen from the Center for a New American Security thinks that a full ban on LAWs is unlikely as the advantages gained by developing these weapons are too tempting. Yale Law School’s Rebecca Crootof has stated that she believes that, rather than calling for a total ban, it would be more productive to campaign for new regulatory legislation. The Geneva Convention currently restricts the actions of human soldiers; perhaps this should be adapted to apply to robot soldiers too.

An Ethical Minefield

Many have expressed concern that, as robots become increasingly human-like in their decision-making, their decisions must be based on human morals and laws. It has been 75 years since Isaac Asimov first wrote of a future with android servants, and he devised three rules which still play a key role in today’s conversation surrounding the ethics of creating intelligent machines:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

Asimov correctly predicted the development of autonomous robots, and while robots making the conscious decision to obey human-imposed laws may be far-fetched, experts have called for the laws to be followed by programmers. The impending development of LAWs is causing machine ethicists to reconsider the First Law, as Asimov’s principles do not take into account the possibility that we would develop robots specifically to injure and kill other humans. These three rules have formed the basis of the principles of robotics published by the Engineering and Physical Sciences Research Council (EPSRC). Their updated version of Asimov’s laws redirects the responsibility from robots to roboticists8. The most notable amendment is to the first law, which conveniently states that robots should not be designed to kill humans ‘except in the interests of national security’. It is worth pointing out that the UK government has previously stated that it is opposed to a ban on LAWs. However, following the open letter from Musk and co., the Ministry of Defence has clarified that any autonomous weapons developed by the UK will always operate under human supervision. I don’t find this particularly reassuring.

Creating ethical robots is just as hard as you would imagine, and creating a moral code requires the


programmer to consider countless exceptions and contradictions to each rule. Even Asimov’s relatively simple laws illustrate this problem. Morality is also highly subjective, and humans probably aren’t the best moral teachers. If the training data supplied for machine learning is biased, then you will get a biased robot. This is particularly concerning when considering LAWs, as it will be possible for governments to develop weapons that are inherently racist (either by accident or on purpose). Perhaps it is not a robot rebellion that we fear, but what governments and individuals will be able to achieve by abusing this technology. In November, the Russian government made it clear that they would ignore a UN ban on LAWs under the pretence that it would harm the development of civilian AI technologies.

The Greater of Two Evils

As machines become faster, stronger, and smarter than we are, the need for control becomes more critical. However, some experts believe that when it comes to LAWs, we shouldn’t waste time tackling these particular ethical issues. The current debate around banning LAWs often assumes that such weapons will be operating free from oversight and that humans will be absolved from any blame for their actions. Due to international law and restrictions on appropriate military

force, many feel it is unlikely that we will ever see robots fighting in conflicts without close human supervision. Some ethicists are concerned that the language being used in the debate confuses the features of the technology with potential consequences of its misuse. It is unlikely that we will find ourselves in a scenario where humans are absolved of blame - LAWs will have programmers, manufacturers and overseers. The EPSRC principles attempt to highlight this by stressing that robots are manufactured products, and that there must be a designated person legally responsible for their actions. Though this is assuming that in the future, robots will still be programmed by humans9. Autonomous drones can already follow and take orders from other drones, AI can program superior AI, and robots can create their own languages. It’s beginning to look like the robot uprising could occur sooner than we think. Some seek comfort in the belief that robots will follow our instructions. Others believe that legislation, bans, and limits on autonomy are the way forward. But is a robot rebellion really the most pressing threat? Perhaps we should be more concerned about governments ignoring international law and using these robots as weapons of terror. It is easy to imagine that in this scenario, the

people responsible may wash their hands of any wrongdoing and blame the robots. Or hackers. Those who support a total ban on the development of LAWs must hope that it will not be possible to abuse this technology if it does not exist in the first place. However, it’s possible that we have already let the genie out of the bottle. It is very difficult to ban the development of something that has already been developed.

This article was written by Katrina Wesencraft, and specialist and copy-edited by Derek Connor.

References:
[1] https://www.technologyreview.com/s/513696/deep-learning/
[2] playground.tensorflow.org
[3] http://www.nytimes.com/1997/07/29/science/to-test-a-powerful-computer-play-an-ancient-game.html?pagewanted=all
[4] https://www.stopkillerrobots.org/2017/11/gge/
[5] http://www.iai.co.il/2013/36694-16153en/Business_Areas_Land.aspx
[6] https://www.nato.int/docu/Review/2017/Also-in-2017/autonomous-military-drones-no-longer-science-fiction/EN/index.htm
[7] https://www.youtube.com/watch?v=ndFKUKHfuM0
[8] https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
[9] https://futurism.com/googles-new-ai-is-better-at-creating-ai-than-the-companys-engineers/



Ever wonder why your earphones tangle to the point where it seems as if they’re trying to stop you from listening to music? Most people would see this as a frustrating test of dexterity, but for the mathematician it’s the real-life manifestation of a field that is still the focal point of research to this day: knot theory. The fascination with knot theory kickstarted in the 1860s, when the predominant ideology was that a mysterious substance called ‘ether’ permeated the entire universe. Scottish physicist Lord Kelvin hypothesised that each element is just a distinct knot in the fabric of the ether, and this sparked a flurry of tabulation and research into the field from physicists and chemists alike. When this hypothesis was found to be false — as the true model of the atom was discovered — interest from physicists and chemists died off, but mathematicians continued their investigations just for the sake of discovery1. This echoes Einstein’s use of non-Euclidean geometry (geometry of curved surfaces) for his theory of relativity, as the mathematics were developed prior to any apparent real-world use. Skip forward to the 1980s and knot theory had found an application: biochemists discovered

that DNA unknots and knots itself using tailor-made enzymes. Understanding this complex process can further our knowledge in gene mutation. Cryptography, statistics and quantum computing all utilise aspects from knot theory, with developments in the field often occurring as a by-product from research into quantum physics. More recently, the interest from chemists has returned, as knotted molecules hold great promise for new materials with potentially far superior properties. Chemists work with the same atoms and molecules that comprise knottable DNA, but it is still a formidable challenge to create knots on the atomic scale. These examples demonstrate that mathematics is a universal language of the physical sciences. To be clear, there is a difference between the knot we’re all used to — the ominous twist in a string or wire — and the mathematical knot. Think of the mathematical knot as a piece of string (with no thickness) that has had its two ends glued together. The simple loop is called the unknot or the trivial knot, and the trefoil knot is the simplest non-trivial knot - it’s the classic overhand knot with its ends glued together. We all know from experience that the same problem can come under many different guises, and the same

applies to knots. Consider the unknot: versions of it can look like a convoluted tangle but it can always untie back to the standard version of the unknot, even if it takes a lot of manipulation to achieve this. We call each version of a knot a ‘projection’ of the knot. Its projection may vary in the number of crossing points, for example, but it will always be the same knot. This raises an issue: how can you be sure any given knot isn’t the unknot in disguise? We could mangle a knot for hours without any hope of untangling it, but maybe it’s a case of a flaw in our method. A proof is required to show that there are non-trivial knots and we shall elaborate on this proof later. The three Reidemeister moves (R1-R3) can be used sequentially to manipulate knots. They only change the projection of the knot, never changing the distinct type of the knot, as cutting or removing a crossing is forbidden.


The main problem surrounding knot theory is how to systematically distinguish one knot from another. To attempt to distinguish knots we need knot invariants: a knot invariant is some quality of the knot that remains constant regardless of which projection it is in. We’ll see that some invariants are only useful to a degree, whereas the more advanced invariants can cover a vast portion of the knots we know of. As it stands, nobody has invented an invariant that can weave all known knots into a single underlying construct, even though recent developments in the field have attempted to solve this problem. A basic invariant is the crossing number, the minimum number of crossings that any projection of a particular knot can have. The trefoil knot from some viewpoints can have more than 3 crossings, but there isn’t a viewpoint (projection) where there are fewer than 3 crossings. This isn’t the best way to distinguish knots from each other as it can be hard to determine a knot’s true crossing number. We use ‘tricolorability’ to prove the existence of a non-trivial knot. Tricolorability focuses on the strands of the knot — a strand being a piece of the knot in the projection that goes from one undercrossing to another with only overcrossings in between. A projection is tricolorable if there are at least two colours used in the diagram, and at each crossing there is the meeting of three different colours or only one colour. If there’s a crossing where only two colours meet, then the knot isn’t tricolorable. Reidemeister moves do not affect the tricolorability of the knot, so any projection can determine whether a particular knot is tricolorable or not. With the trefoil knot being tricolorable and the unknot being non-tricolorable, this proves that there is, in fact, a knot that is non-trivial. Sadly, the figure-eight knot is not tricolorable even though there are other invariants that prove that it is a distinct knot. Once again, we need to look for another invariant that can fully unravel the mysteries of knots2.
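For readers who like to see the rule in action, here is a minimal sketch that brute-forces the tricolorability test described above. The arc-and-crossing description of the standard trefoil projection is the usual textbook one; the check itself is just the 'one colour or all three at every crossing, and at least two colours overall' rule.

```python
from itertools import product

def is_tricolorable(num_arcs, crossings):
    """Brute-force tricolorability check for one projection of a knot.

    `crossings` is a list of (over_arc, under_arc_1, under_arc_2) index triples.
    A colouring assigns one of three colours to every arc; it is valid if every
    crossing sees either one colour or all three, and at least two colours are
    used overall.
    """
    for colouring in product(range(3), repeat=num_arcs):
        if len(set(colouring)) < 2:
            continue  # at least two colours must appear in the diagram
        if all(len({colouring[o], colouring[u1], colouring[u2]}) != 2
               for o, u1, u2 in crossings):
            return True
    return False

# Standard trefoil projection: three arcs and three crossings, and at each
# crossing the over-strand and the two under-strands are three different arcs.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(is_tricolorable(3, trefoil))  # True

# The unknot's standard projection is a single arc with no crossings, so it
# can never use two colours.
print(is_tricolorable(1, []))       # False
```

Because the trefoil passes the test and the unknot cannot, the two must be genuinely different knots, which is exactly the argument made above.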

Major developments in invariants often came from people working in quantum physics. They produced the Jones polynomial (an arithmetical expression), which is a very useful invariant that uses an algorithm to build a distinct polynomial for each knot procedurally from each crossing. The Jones polynomial isn’t the complete answer, since the same polynomial can sometimes lead back to two unrelated knots. The idea of a knot polynomial has been improved upon so that it can encompass more of knot theory, but it is hoped that a radically different approach called categorification will advance us further in this field. Conceptually, categorification is a nightmare to understand: it works contrary to the normal logic of mathematics. It tries to provide a richer structure to understand the knots rather than use the simplified abstractions of the real world that we usually encounter in mathematics. Once again, though, it isn’t the final solution. Even if all known invariants for two very complex knots are the same we still can’t be completely sure that they’re the same knot; a grander, further-reaching structure is required to decide if a knot is uniquely distinct3.

Even though there are many open questions that plague the theory, it has still facilitated some very advanced recent scientific developments. One of the most recent innovations to utilise knot theory was carried out by chemists who synthesised an 8-crossing knot. Previously, chemists were only able to create the trefoil knot (synthesised in 1989) and a 5-crossing knot (synthesised in 2011). It is one of the smallest and tightest knots ever created, utilising only 192 atoms4. At this scale knotting becomes a serious challenge due to the very limited number of entities they can work with. The hope is that this knotted molecule, when weaved together like minuscule chainmail, will have properties unmatched by any contemporary material. Another recent development is the creation of a knot made of a fluid. Using a knot-shaped plastic mould dragged through water filled with microscopic bubbles, the vortex left by the mould enticed the bubbles into a knot-shaped flow4. The researchers hope that this will provide a way for us to study superfluids — fluids with strange quantum properties that are notoriously hard to image if they have knots. Still, the fields utilising knot theory are yet to realise its full potential. Our understanding has vastly improved since the days of the ‘knots in the ether’ hypothesis, and it shows that seemingly purposeless mathematics can later find its use in an eclectic range of applications. For those of you still wondering why your earphones consistently find themselves in a convoluted bundle, here is your answer: the longer the wire, the higher the probability it will tangle. This is because there is only one way it can be straight but a mind-bending number of ways it can knot, with movement in your pocket and time allowing these other forms to occur. I took so long to tie up this loose end because it truly is easy to get entangled in distractions.

This article was written by Richard Murchie. It was specialist edited by Madeline Pritchard and copy-edited by Katrina Wesencraft.

References:

[1] https://plus.maths.org/content/why-knot-knots-molecules-and-stick-numbers
[2] http://web.math.ucsb.edu/~padraic/ucsb_2014_15/ccs_problem_solving_w2015/Tricolorability.pdf
[3] http://omlette.irev.net/files/print/knots.ns.pdf
[4] https://www.newscientist.com/article/2117870-molecules-tied-into-beautiful-octofoil-knot-for-first-time/


As the asteroid hurtles toward Earth at the end of the movie Armageddon and Bruce Willis makes the ultimate sacrifice for mankind, who knew that the writers of the film were in fact emulating real events? If you go back through history far enough, to the very inception of our planet, the events of the infamous 1998 movie classic would seem trivial compared to the immense collision of Earth and Theia - a neighbouring planet over 6000 kilometres in diameter - from which our moon was created. The story goes that a solitary early Earth orbited the sun happily until one day, 4.5 billion years ago, Theia swooped out of nowhere and collided with Earth, spraying debris out into the surrounding area; debris that later consolidated and formed our early moon. The impact may have been a glancing blow; however, recent work suggests it was more likely to be head-on, as the chemistry of the moon and Earth is almost identical, implying a complete merging of both early planets1. This also explains both Earth's slight tilt and why its core is larger than expected for a planetary object of its size; the head-on collision may have bumped the axis of Earth, creating our seasons, and Theia's core may have mixed with Earth's following the impact. Though it sounds like the plot to a disaster movie, Theia’s impact with Earth is the leading theory. How the moon truly formed is still the centre of many an academic debate: Was it a passing asteroid caught up in Earth’s gravity? Was it formed in unison from the same cosmic dust cloud? Did a rapidly spinning Earth launch material into space that later became an orbiting satellite (an idea popularised by Charles Darwin’s son George)? The further exploration of the moon by numerous upcoming launches (16 planned before 2020; hoping to obtain soil samples, water

deposits and surface maps) may bring more evidence of the moon's formation to light, potentially allowing us to establish whether Theia did in fact exist. In addition to the work on the formation of our moon, new work has now also begun to unravel the mystery behind Mars’ two moons and their differing orbits. Phobos and Deimos, named after the Greek gods of fear and horror, are two of the smallest objects in our solar system. They orbit Mars at two very different distances (~9,000 kilometers and ~23,000 kilometers respectively), raising serious questions about their formation. Their size, density and composition are nearly identical to that of certain types of asteroid. This, plus the fact that Mars is located close to the asteroid belt, makes it probable that the moons are captured asteroids. However, several complex mechanisms must operate in unison to catch an asteroid and keep it in orbit aligned along an equator, suggesting the capture of two asteroids by the gravity of Mars is highly speculative. Recent work has supplied an alternative theory, which suggests that both moons are the result of yet another planetary impact2. The theory states that Mars was also struck by an early planet, creating a vast disc of debris and material, much like the rings of Saturn. From these rings, one large moon and many other smaller objects formed before the former fell back to Mars, pulled in by the gravitational force of the planet. Evidence of this collision can be seen on the surface of the planet today as the extensive ‘Borealis Basin’, a vast depression in Mars’ northern hemisphere that covers up to a quarter of the planet’s surface. With the larger moon out of the way, Phobos and Deimos were all that was left orbiting Mars. This new work not only furthers

our understanding of early planetary collisions, but also builds upon our knowledge of how a planet's gravity can affect the orbit of moons. It shows how the distance between a planet and its moon can either pull it inwards (Roche limit) or let it spin off into space (synchronous limit), meaning a moon must find the perfect balance between the two limits to form or face utter destruction. “[This work] shows a new possible outcome of the Giant [Impact] Scenario,” says Pascal Rosenblatt of the Royal Observatory of Belgium and lead author of the work. “The synchronous limit is well above the Roche limit [on Mars], preventing [it from maintaining] a massive moon in orbit.” These conclusions mean only smaller moons could survive in orbit around Mars while much larger moons would always be drawn inwards. Work like this continues to help us understand our solar system. “Their findings certainly help to explain some of the peculiarities of the Martian satellites,” says Martin Lee, a professor at the University of Glasgow, “but also provides motivation and scientific justification for further exploration of Phobos and Deimos.” And he is quite right; looking up at the stars and planets is intriguing to all of us, and with more work outlining the incredible circumstances that created them, who knows what continued investigation will uncover.

This article was written by Scott Jess. It was specialist edited by Lisa Millar and copy-edited by Matthew Hayhow. Artwork by Cully Robertson (Cullor Illustration).

References:
[1] http://science.sciencemag.org/content/early/2012/10/16/science.1226073
[2] http://www.nature.com/ngeo/journal/v9/n8/full/ngeo2742.html



Going round in circles is rarely considered efficient, but it could be just what our economy needs. It is clear that our ‘disposable culture’ is unsustainable: the global middle class is expanding and our hunger for new products means we are rapidly burning through finite natural resources. This, in turn, is creating insecurities in global material markets and causing environmental damage through resource extraction, landfilling, and pollution. What can we do to change this? Well, there is this one loopy idea that may help: it's called the circular economy.

In a circular economy, restoration and recovery processes are used to increase the lifespan of products, components and materials. This goes beyond current recycling methods, as the highest quality and value is extracted at each stage of a product’s life cycle: an engineered part is more valuable than the raw materials that

comprise it. For example, rather than disposing of an old or broken smartphone, a circular phone would be designed for easy repair, component reuse, and material extraction at end-of-life. Biological materials, like food and agricultural waste, can also be looped through a value chain. For example, a process called anaerobic digestion uses microorganisms to break down biowaste in the absence of oxygen. This produces biogas, a renewable fuel, and a nutrient rich slurry, which can be ‘mined’ for chemicals or used as a crop fertiliser. The theory is simple: once a resource enters a supply chain, it loops back through it indefinitely. However, to achieve this kind of system, resource use and economic growth must be decoupled, fundamentally changing how people consume products. So, how would you feel about never owning anything ever again? If a company prioritises consumer access to a product over ownership, they can retain control over the resources it contains. For example, by selling printing services not hardware, Xerox can reuse or recycle more than 90% of their equipment1. Similarly, the Dutch company Mud Jeans leases rather than sells clothing to customers, which are later returned, shredded, and respun into ‘new’ denim. However, despite such companies adopting aspects of the circular economy, a ‘business-as-usual’ approach will not facilitate a full economic transition.

Friends of Europe via Flickr


Currently, most supply chains are linear, with each stakeholder only interested in their ‘link’ of the chain. For example, around 80% of a product's environmental impact is dictated by decisions made at the design stage2. Changes here could radically alter product reusability, but there is no incentive to do so: gluing together a smartphone makes it less repairable than using screws, but it also makes it thinner, which is more attractive to consumers. Therefore, any company that adopts a more circular design in isolation may find itself at a competitive disadvantage. To overcome this obstacle, several organisations have emerged to encourage cooperation along and across supply chains, such as the Ellen MacArthur Foundation and WRAP (Waste and Resources Action Programme). However, government intervention will likely be required to make any significant change.

Several nations, including Japan, Germany and China, have already adopted comprehensive circular economy strategies. However, the UK lags behind in this regard because key policy areas are divided between different bodies, including the Department for Environment, Food and Rural Affairs (Defra), the Department for Business, Energy and Industrial Strategy (BEIS), the Treasury, local authorities, and devolved administrations. This has led to inconsistent strategies emerging across the UK. Currently, the central government and Northern Ireland do

not have formal strategies, despite supporting the concept in principle. In contrast, the Welsh Assembly formally addressed the issue with their 2016 statement Achieving a More Circular Economy for Wales, and Scotland’s Making Things Last: a circular economy strategy for Scotland document (2016) is arguably the most detailed strategy in the UK.

Although good domestic strategies are needed, it is important to remember that most supply chains are global and it will require international cooperation to make them circular. This has been recognised by the EU, which recently adopted a comprehensive Circular Economy Package. Given the economic strength of this political bloc, the initiatives included in this package will likely extend circular principles beyond EU member states. This is important, as the World Economic Forum estimates that a more circular economy could be worth over $1 trillion (US) to the global economy per year by 20253. Further, as circular products are designed to be an input for a new industry at end-of-life, waste is minimised and stable secondary materials markets are created. This could potentially lessen the tensions that arise between nations from the need to control territories rich in natural resources and also avoid the environmental

impacts of resource extraction and pollution. Transitioning to a circular economy could deliver many economic, environmental and social benefits. However, it will take a coordinated international approach to achieve. Consumers must change their relationship with products, and governments and industries must be willing to work together to put long-term sustainability and economic stability above short-term wins. Loopy idea, huh?

This article was written by James Burgon. It was specialist edited by Lorna Christie and copy-edited by Katrina Wesencraft. Artwork and layout by Gabriela De Sousa.

References:
[1] Xerox. 2010. Report on Global Citizenship. http://xerox.bz/2iMn9nk
[2] Environmental Change Institute. 2005. 40% house. http://bit.ly/1TTMoiA
[3] World Economic Forum. 2014. Towards the Circular Economy: Accelerating the scale-up across global supply chains. http://bit.ly/1rOEcd1
Burgon, J & Wentworth, J (2016) Designing a Circular Economy. Parliamentary Office of Science and Technology. http://researchbriefings.parliament.uk/ResearchBriefing/Summary/POST-PN0536


As a society, we are continuously looking for the next anti-ageing "solution". The California-based company Alkahest is developing therapies from blood to try and improve vitality later in life. At the end of 2016, the company presented work at the Society for Neuroscience annual conference demonstrating that blood plasma from teenagers could reverse some of the effects of ageing in year-old mice (equivalent to approximately 50-year-old humans). Researchers found that injecting mice with the blood plasma twice a week for three

weeks was sufficient to improve the mental and physical abilities of these mice compared to an untreated control group. The mice moved at a faster pace and were able to better remember their way around a maze. Sakura Minami, who presented the work, claims that the treated mice achieved similar scores to that of young mice, implying there had been rejuvenation1.

Although seemingly novel, this idea is in fact built upon previous experiments from as far back as the 1950s. Clive McCay of Cornell University performed a procedure called parabiosis where the circulatory systems of two rats were physically connected via surgery. Natural wound healing of the old and young rats allowed mixing of the pair's blood. McCay observed the bone density and weight of the older rats became similar to that of the younger rats, and their lifespan was increased by about 5 months. As regulation of animal use in research tightened, parabiosis largely fell out of research practice until a few years ago (although its use is still restricted). Currently, mice are sex, size and genetically matched and are socialised for two


weeks before the surgery, which is performed in sterile conditions with anaesthetic and antibiotics. They are able to behave, eat and drink normally, and are successfully separated afterwards. Amy Wagers, a stem cell researcher at Harvard University, later learnt of the technique and decided to apply it to her own ageing research. The experiment was carried out in old and young paired mice and produced some amazing results. Within five weeks, Wagers saw regeneration of liver and muscle cells of the older mice at similar rates to that of the young mice. These experiments provided key evidence that young blood, or more accurately one or more factors in the blood, is able to stem the symptoms of ageing. Another group from Stanford University School of Medicine, led by Tony Wyss-Coray, carried out parabiotic studies on mice and found that older mice that had been paired with young mice performed significantly better in standard laboratory tests of spatial memory. This indicates that young blood is able to improve both the physical and mental abilities of older mice.

But what is actually happening in the brains of the mice who receive young blood plasma? Wyss-Coray concentrated on the area of the brain called the hippocampus. This structure is important for both short and long-term memory and is known to reduce in size during the normal ageing process. The team found that both the density of neurons and their plasticity (the ability of the neurons to form new connections) vastly improved in the older mice after the parabiotic study. The beneficial effects, however, go further, as an astounded Wyss-Coray claims: "The human blood had beneficial effects on every organ we’ve studied so far."2 Even more excitingly, Alkahest found the same result in the elderly mice they had treated with plasma from teenage humans. They looked in the hippocampus and again saw evidence of new neurons developing, a process called neurogenesis. Amazingly, this research showed that some of the effects of ageing on the brain are actually reversible by a substance, or more probably many substances, which are present in the blood plasma of 18-year-olds3. The next question is, what exactly

is it about the blood plasma that caused this "reversing" of ageing in the mice? Firstly, no negative effects were reported in the young mice after experiments, suggesting there wasn't a "watering down" of any molecules that might be causing the ageing symptoms in the older mice. Wagers conducted an additional parabiotic experiment with older mice that had cardiac hypertrophy - swelling of the heart4 - to show the supply of blood from younger mice reduced the size of the heart to that of the young mice after 4 weeks. A molecule called growth differentiation factor 11 (GDF11) is found in the blood and is known to decrease with age, so Wagers postulated that this could be the factor reducing the swelling of the heart. The group then took the same mouse model with cardiac hypertrophy and treated them with GDF11 injections alone to gain similar results!

However, there are likely to be more factors in the blood of the young mice causing the anti-ageing effects. Of course, Alkahest is not giving away much about which molecules they think are responsible and whether they are trying to develop anti-ageing drugs. That said, a team at Stanford School of Medicine, including Wyss-Coray, did start a study in 2014 involving volunteers under the age of 30 donating blood plasma to patients with Alzheimer’s. In November 2016, the team then published results of a preclinical trial in mice with Alzheimer's. Blood from young healthy mice was found to significantly improve the memory of the older mice affected by Alzheimer's. We spend both time and money attempting to stop the inevitability of ageing, whether it's on the latest cosmetics, dietary trends or brain-training apps. Now it seems the true answer may in fact be running through our veins.

This article was written by Emma Briggs. It was specialist edited by Alisha Aman and copy-edited by Katrina Wesencraft. Artwork by Sara Jackson.

References:
1. https://www.newscientist.com/article/2112829-blood-from-human-teens-rejuvenates-body-and-brains-of-old-mice/
2. https://www.newscientist.com/article/mg22329831-400-young-blood-to-be-used-in-ultimate-rejuvenation-trial/
3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3170097/
4. http://www.cell.com/cell/fulltext/S0092-8674(13)00456-X

Yale Rosen via Flickr


Diabetes is a severe metabolic disease characterised by the failure of our body to maintain normal glucose levels in the blood. If untreated, high blood glucose can lead to several long-term complications like heart disease, kidney failure, damage to the eyes and foot ulcers. Nowadays, patients can receive insulin injections or other medications, such as insulin secretagogues or alpha-glucosidase inhibitors, to help them manage blood glucose levels. However, these treatments can cause blood glucose to drop too low, resulting in coma or even death. Diabetes is a global problem, with 422 million people

diagnosed with diabetes in 2016 according to the World Health Organisation1. This is predicted to double by 2030. Diabetes can be divided into two subtypes: type 1 (T1D) and type 2 diabetes (T2D). T1D has both genetic and environmental causes, is more common in young people, and is often called ‘juvenile diabetes’. T1D occurs when our immune system cells destroy the beta cells that are present in the pancreas by mistake. These cells are responsible for producing insulin following a meal. This triggers glucose absorption in peripheral tissues, like muscle and

the liver, to reduce our blood glucose levels. In contrast, T2D constitutes 90% of diabetes cases and is primarily caused by a combination of obesity, a lack of physical exercise, and contributing genetic factors. Unlike T1D, the pancreas of patients affected by T2D still produces insulin during the early stages of the disease. However, cells become ‘insulin-resistant’. This is when cells normally sensitive to insulin fail to respond to increased blood insulin levels and fail to take up glucose from the blood. Treating diabetes globally is expensive, with $612 billion (US)


spent on the disease in 2014. In the UK, approximately 3.8 million people are diabetic and the NHS is predicted to spend £16.9 billion on diabetes by 2035, that’s 17% of the total budget2. Clearly, we need to change our lifestyles, but researchers also need to improve quality of care and life for diabetic patients, whilst reducing healthcare expenditure.

Recently, the scientific community has started to realise the benefits of controlled food deprivation. During food deprivation, our body saves energy by breaking down specific tissues, reducing total energy consumption. Interestingly, once we return to a normal diet, the tissues of several systems regenerate via activation of stem cells. Researchers found that two or three days of controlled fasting can help cancer patients to overcome some side effects of chemotherapy through stem cell regeneration, producing new blood and immune cells.

Based on these results, and considering that the loss of insulin-producing beta cells is a feature of diabetes, Prof. Valter Longo investigated whether food deprivation can help regenerate these faulty beta cells. To avoid the side effects of prolonged fasting, the researchers developed a so-called ‘fasting mimicking diet’ (FMD). This diet is low in calories, protein and carbohydrate, but high in fat; its effects are comparable to a water-only diet. Prof. Longo and colleagues used a mouse model that mimics T2D, which is insulin-resistant in the early stage and shows beta cell dysfunction in the late stage of the disease. Putting the mice on the FMD for four days a week led to the restoration of beta cell function and reduced blood glucose levels. The mice were able to produce insulin again, and their insulin resistance was reduced. The researchers suggest that the FMD’s ability to promote beta cell regeneration in the pancreas whilst improving insulin sensitivity could be a therapeutic strategy to mitigate the late symptoms of T2D, reducing patient mortality rates.

In addition, the FMD was tested in a T1D mouse model, where the concentration of glucose in the blood returned to almost normal levels after two months of FMD cycles. Similar to the results obtained with the T2D model, the beneficial effects of the FMD in the T1D model were due to the activation of beta cell regeneration. During FMD cycles, the number of pancreatic beta cells begins to decrease, but once the FMD is stopped and the mice return to a normal diet, a series of genes involved in cell differentiation and replication are activated. One of these genes is Neurogenin 3 (Ngn3), which activates beta cell proliferation in the mouse model, leading to an increase in insulin-producing cells. Since Ngn3 is also involved in beta cell development in humans, the researchers wondered whether the FMD could have the same beneficial effects in diabetic patients.

To test this hypothesis, fasting experiments were performed on pancreatic cells from healthy donors and those with T1D. Cells are cultured in nutrient-rich solutions, and removing specific nutrients can mimic starvation or fasting. This induced fasting stimulated insulin production in cells from both healthy and T1D donors, which was driven by the activation of the Ngn3 gene3. Despite the study's promising outlook on diabetic treatment, we cannot be certain that diabetic patients would respond identically to the mouse model. A fasting mimicking diet potentially represents a valid alternative to beta cell transplantation and cell-based therapy for the treatment of both types of diabetes. Like all medical breakthroughs, we need to wait for further clinical trials to be sure that fasting is a realistic treatment for diabetic patients before we shout ‘Eureka!’.

This piece was written by Daniele Guido, specialist edited by Jiska van der Reest and copy-edited by Emily May Armstrong. Artwork by Sara Jackson.

References:
[1] http://www.who.int/diabetes/global-report/en/
[2] https://www.diabetes.org.uk/About_us/News_Landing_Page/NHS-spending-on-diabetes-to-reach-169-billion-by-2035/
[3] http://www.cell.com/cell/fulltext/S0092-8674(17)30130-7


Neurons get a lot of attention. Each one of us harbours roughly 100 billion of these specialised and highly interconnected brain cells, which govern our inner worlds and actions. It is much less well-known that another type of brain cell exists in equal (or indeed, much greater1) quantities. These often overlooked cells are not only essential for the survival and maintenance of neurons, but have also been implicated in their demise.

Glial cells (or glia), named after the Greek for "glue", are often thought to do just that: glue the central nervous system together. More specifically, glia were thought to physically surround and protect neurons, without any ability to influence brain function themselves. But we are now learning that tiny glial cells, called microglia, play a vital role in protecting the brain from infection and enhancing its performance. Microglia tidy up the waste left over when unhealthy cells die, engulf and digest circulating pathogens in the brain, and also act to sever superfluous connections between neurons. This prevents an overgrowth of these connections and allows for optimum nerve transmission.

Astrocytes, a type of brain cell with a star-shaped appearance from which they derive their name, are even more crucial for brain health than their microglial counterparts. They are involved in moderating the connections that neurons form with one another at synapses, the junctions between nerve cells. During the chemical and electrical exchanges that occur during synaptic transmission, astrocytes surround the synapse and participate in this exchange by releasing their own chemical transmitters. This has led neuroscientists to describe these brain synapses as three-way or "tripartite" synapses, acknowledging the important role of astrocytes in neural activity. Such revelations have thrust astrocytes and other glia into the limelight in recent years.

Importantly, despite their recent rise to (relative) fame, glia were always considered to be the good guys: the unsung cellular heroes of the brain that quietly save the day! But how does this role fit in with the recent, and potentially groundbreaking, suggestion that microglia and astrocytes team up to kill neurons and contribute to neurodegenerative diseases? In 2012, new research emerged2 describing how astrocytes can become reactive in the brain in two ways: one destructive, one protective. Astrocytes in the so-called A1 class act to kill neurons through the release of neurotoxic substances, while those in the A2 class have been shown to repair neuronal connections through the release of neuroprotective molecules.

But how do the good guys turn evil? In a recent study3, scientists discovered that astrocytes turned to the dark side when activated by microglial cells in response to inflammation. Researchers made use of an experimental mouse model in which the mice do not have any microglial cells in their brains. They found that without microglia, there was no activation of A1 astrocytes, indicating that the interaction between these cell types is essential for this activation. We also know that the number of microglial cells increases in several neurodegenerative diseases, so researchers went on to investigate whether A1 astrocytes could be found in these conditions as well. Indeed, they found an abundance of A1 cells in the brain tissues of patients with Alzheimer's disease, Parkinson's disease and multiple sclerosis.

Researchers also tested what happens when normal brain cells are exposed to A1 astrocytes in the laboratory. They found that A1 astrocytes caused motor neurons and nerve cells in the visual pathway to die, and they also proved toxic to another type of glia known as oligodendrocytes. These particular glial cells produce myelin, a fatty substance that wraps around the neuronal connections in our brain and allows them to communicate effectively. In multiple sclerosis, this protective sheath of myelin degenerates, which causes brain communication to slow down and ultimately causes neurons to die.

Uncovering the role of astrocytes and microglia in neurodegenerative pathologies could lead to breakthrough discoveries that will help us to understand these diseases at the molecular level and develop effective treatments. For Alzheimer's disease in particular, current treatments target protein clumps known as amyloid plaques, which essentially clog up the brain and prevent neurons from communicating. However, it is intriguing that these amyloid protein plaques are strong activators of microglia, which we now know can in turn activate the harmful A1 astrocytes. This suggests that the interaction between amyloid plaques and brain cells may be an important step in the development of this disease.

There is still a multitude of research to be done in this area to confirm and explore these findings further before we can translate them into therapeutic strategies that will benefit patients. A priority for future research will be to identify the specific neurotoxin that is released by A1 astrocytes and develop a targeted drug against it. But it’s unlikely to be as simple as it sounds. We need to remember that glial cells generally have a supportive role in the brain, so it seems unlikely that we have evolved to harbour rogue glial cells destined to destroy our brains. These astrocytes may kill brain cells for good reason - to protect us from unhealthy or dysfunctional cells, for example. We can draw a parallel to autoimmune conditions, where an individual's immune system, which normally protects them from disease, accidentally turns to fight against the body’s own cells. A1 astrocytes may have a similar protective function in the brain under normal conditions, and targeting them with drugs may compromise the immune system of our brain. Though it creates many new questions, this new research has led to a better understanding of the interaction between the immune system and our brain, and points towards many new avenues to explore!

This article was written by Kaitlyn Hair. It was specialist edited by Alisha Aman and copy-edited by Jiska van der Reest. Artwork by Dan Templeton.

References:

[1] https://blogs.scientificamerican.com/brainwaves/know-your-neurons-what-is-the-ratio-of-glia-to-neurons-in-the-brain/
[2] Zamanian, J. L., Xu, L., Foo, L. C., Nouri, N., Zhou, L., Giffard, R. G., & Barres, B. A. (2012). Genomic analysis of reactive astrogliosis. Journal of Neuroscience, 32(18), 6391-6410.
[3] Liddelow, S. A., Guttenplan, K. A., Clarke, L. E., Bennett, F. C., Bohlen, C. J., Schirmer, L., ... & Wilton, D. K. (2017). Neurotoxic reactive astrocytes are induced by activated microglia. Nature.


Everyone knows that smoking causes cancer. We see campaigns on TV, hear ads on the radio and sadly too many of us know first-hand what it means to lose a loved one to cancer. But if someone asked you to explain why, would you be able to answer? What makes smoking a death sentence? You could dodge the question of course, but why miss a chance to show your friends that you’ve got the brains as well as the looks? Science is here to help!

Thanks to researchers all over the world, the nature of the link between smoking and cancer is getting clearer and clearer by the day. This is mainly due to epidemiological studies, which are population-based and are often used to evaluate risk factors for a certain disease. The first papers on this topic were published in 1950 and since then over 60 carcinogens have been identified in cigarette smoke. Some of these chemicals damage our DNA and others prevent our cells from repairing the damage; by doing so they shut down the mechanisms that protect us from cancer. A smoker will also accumulate toxins in their body that eventually weaken their immune system, leaving them unable to fight off cancer when it arises1.

So we know the effects the chemicals contained in cigarettes have on our body, but how do they physically cause the damage? A study published in Science helps us to understand a bit more. The study is the result of an international collaboration between the Los Alamos National Laboratory (USA), the Wellcome Trust Sanger Institute (UK) and several other laboratories around the world2. Perhaps the most remarkable outcome is that scientists were able to calculate the average number of mutations generated by smoking a pack of cigarettes daily for a year. Smoking 20 cigarettes a day will generate roughly 150 mutations in the lungs, 159 in the mouth and tissues in close proximity, 18 in the bladder and 6 in the liver. Even if you smoke much less than that, you will still be acquiring many mutations; the damage will just accumulate over a longer period of time. This is irreversible – these mutations will remain in your DNA forever.
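To make the arithmetic concrete, here is a minimal sketch in Python using only the per-pack-per-day annual figures quoted above; the assumption that the mutation burden scales linearly with consumption is ours, for illustration only.

    # Extra mutations acquired per cell per year for a pack-a-day (20 cigarettes)
    # smoker, using the tissue-by-tissue figures quoted in the article.
    MUTATIONS_PER_PACK_YEAR = {
        "lung": 150,
        "mouth and nearby tissues": 159,
        "bladder": 18,
        "liver": 6,
    }

    def estimated_mutations(packs_per_day, years):
        """Scale the quoted annual rates linearly to a given habit and duration."""
        return {tissue: round(rate * packs_per_day * years)
                for tissue, rate in MUTATIONS_PER_PACK_YEAR.items()}

    # Example: a 'social smoker' on roughly two cigarettes a week, for ten years.
    social_habit = (2 / 7) / 20  # packs per day
    print(estimated_mutations(social_habit, years=10))

Even that light habit leaves a small but permanent mutational footprint, which is exactly the article's point.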

The authors reached this conclusion after comparing more than 5,000 whole genome sequences - the complete DNA sequence contained in every cell of our body that defines our unique combination of traits. DNA sequences from both smokers and non-smokers affected by different types of cancer were analysed to check for ‘mutational signatures’: sets of unique combinations of mutations that are the result of the same mutational process. That is to say, if we expose DNA to the same chemical agent, for example, we will always obtain the same set of mutations. Through the analysis of the mutational signatures present in each patient, the authors extrapolated four main possible mechanisms through which smoke could have caused the particular signature seen only in smokers. Some of these mechanisms of action are somewhat expected and are related to chemicals present in cigarettes, but there are also unexpected findings that help explain how smoking can cause cancer even in tissues that are not directly exposed.

NIH Image Gallery via Flickr

In the words of the main author, Ludmil Alexandrov from the Los Alamos National Laboratory, the importance of the discovery resides in its potential to change how researchers approach smoking-related cancers: "Before now, we had a large body of epidemiological evidence linking smoking with cancer, but now we can actually observe and quantify the molecular changes in the DNA due to cigarette smoking"3.

Even though this study is a big step forward in understanding the contribution of smoking to cancer formation, for Sir Mike Stratton (from the Wellcome Trust Sanger Institute and co-author of the journal article), there is still work to be done. Indeed, the deeper we get into understanding the biological changes caused by smoke, the more intricate the network underlying them appears. This is true for many types of cancer, and this new piece of research highlights the great potential that DNA sequencing can offer - not only in terms of diagnosis, but more importantly in prevention.
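Mutational-signature analyses of this kind are commonly built on non-negative matrix factorisation: each genome is reduced to a table of mutation-type counts, and the table is decomposed into a small set of recurring spectra plus per-sample exposures. The Python sketch below illustrates the general idea on invented data; the sample sizes, number of signatures and variable names are our own illustrative choices, not taken from the study.

    import numpy as np
    from sklearn.decomposition import NMF

    # Invented catalogue: 50 tumour samples x 96 mutation categories
    # (real analyses use the 96 trinucleotide contexts); entries are counts.
    rng = np.random.default_rng(0)
    hidden_spectra = rng.random((3, 96))           # 3 underlying mutational processes
    hidden_exposures = rng.random((50, 3)) * 100   # how strongly each sample was exposed
    catalogue = hidden_exposures @ hidden_spectra + rng.poisson(1.0, size=(50, 96))

    # Factorise catalogue ~= exposures x signatures, with everything non-negative.
    model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
    exposures = model.fit_transform(catalogue)     # per-sample signature activity
    signatures = model.components_                 # per-signature mutation spectrum

    # Samples dominated by a smoking-like signature would show a high value
    # in the corresponding exposure column.
    print(exposures[:5].round(1))

In the real study, one of the recovered spectra matches the chemical damage done by tobacco carcinogens, and its exposure values are what let researchers count mutations attributable to smoking in each tissue.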

The possibility of quantifying the damage done to DNA by each cigarette invites some reflection on the effect our smoking habits will have. Even if you're not a regular smoker, how many times have you found yourself with a cigarette in your hand while out for a pint or partying with friends on a Saturday night? Most of the time, this happens because smoking is recognised as a social behaviour. After reading the Science paper, the consequences of that single cigarette every once in a while are more real than some hypothetical future disease. Some of these consequences could be long-lasting too: we still don't know whether smoking leaves epigenetic traces, meaning that some of these changes could be passed on to our children, even if they're too young to know what a cigarette is. So, while cancer researchers continue their work to find answers to the most difficult questions, we can do our own bit of thinking and answer a much simpler question: is that cigarette with friends really worth it?

This article was written by Giusy Caligiuri. It was specialist edited by Alisha Aman and copy-edited by Katrina Wesencraft. Artwork by Sara Jackson.

References:
[1] http://www.cancerresearchuk.org/about-cancer/causes-of-cancer/smoking-and-cancer
[2] http://science.sciencemag.org/content/354/6312/618
[3] http://www.sanger.ac.uk/news/view/smoking-pack-day-year-causes-150-mutations-lung-cells


Think of ancient DNA and the first thing that may come to mind is Jurassic Park and cloning dinosaurs by freeing blood from amber-trapped mosquitoes! Sadly, scientists think that dinosaur DNA is probably just too old and degraded to use - sorry to burst that bubble. But researchers are working with ancient DNA (known as aDNA), painstakingly isolated from archaeological remains, in lots of other exciting ways. Taken from humans and our close evolutionary relatives like Homo erectus and Homo neanderthalensis (the Neanderthals), aDNA can teach us about evolution. From our earliest crops, it shows us how we developed agriculture and migrated across the world. It’s also possible to recover aDNA of the pathogens which have plagued us (pardon the pun) throughout time. Trace amounts of bacteria can be recovered by taking material from human remains that may have been infected. By sequencing and analysing these bacteria, we can learn about disease evolution, origins and changes over time. This article will look at three different examples and what aDNA is telling us about them already – or may be able to tell us in the future.

Migrations of the Plague

Plague, caused by the bacterium Yersinia pestis, has a strong hold on the public imagination. From creepy doctors dressed head-to-toe in protective gear, to carts of dead bodies in the streets; it’s no wonder it fascinates historians. Plague has repeatedly swept across Europe and Asia since the 6th century, divided by historians into three major pandemics1. The Justinian plague began in the 6th century, and over the next two hundred years caused more than 25 million deaths. The second pandemic began with the infamous Black Death (1346-1353); outbreaks then continued across Europe over the next 400 years. The third pandemic began in China in the 1860s and eventually reached many other regions of the world via trading ships. Plague is, in fact, still present in some areas: in rodent populations of Arizona, for example, and in ongoing outbreaks in Madagascar2.

Historical documents and human remains are limited in what they can tell us, leaving some questions unanswered. One mystery is how the plague persisted for so long in Europe before vanishing for over a century. It was generally assumed that trade routes resulted in the bacteria being reintroduced into Europe from Asia over and over again, causing multiple outbreaks, but this is now coming under question from recent studies using aDNA3. Changes in the DNA sequences of organisms can be used to build an evolutionary family tree, known as a phylogenetic tree. Researchers took sequences of Y. pestis strains from victims of the Black Death at the beginning of the pandemic, from outbreaks in the middle (16th century), and from the end (18th century). These were then placed into a phylogenetic tree to investigate their relationships. All three were found on the same branch, with both later outbreaks downstream from the Black Death strain - like children on a family tree. This challenges earlier thinking, as it suggests that the later strains evolved in a lineage from the first introduction of plague; evidence that, rather than being reintroduced, plague may have found a foothold in Europe and developed locally.
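To give a flavour of how relationships like these are inferred, here is a minimal Python sketch: aligned sequences are compared position by position, pairwise differences form a distance matrix, and clustering that matrix yields a branching tree. The short sequences are invented for illustration, and real studies use whole genomes and dedicated phylogenetics software rather than this simple clustering.

    import numpy as np
    from itertools import combinations
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Invented, pre-aligned fragments standing in for Y. pestis genomes
    # sampled from different outbreaks.
    samples = {
        "Black Death victim": "ACGTACGTACGTACGA",
        "16th-century outbreak": "ACGTACGTACGAACGA",
        "18th-century outbreak": "ACGTACGAACGAACGA",
        "Modern reference": "TCGAACGTACGTTCGT",
    }
    names = list(samples)

    def differences(a, b):
        """Count aligned positions at which two sequences differ."""
        return sum(x != y for x, y in zip(a, b))

    # Pairwise distances in the 'condensed' order SciPy expects, then
    # average-linkage (UPGMA-style) clustering to build the tree.
    condensed = np.array([differences(samples[a], samples[b])
                          for a, b in combinations(names, 2)], dtype=float)
    tree = linkage(condensed, method="average")

    # dendrogram() lays out the tree; with no_plot=True it just returns the
    # structure (set no_plot=False with matplotlib installed to draw it).
    print(dendrogram(tree, labels=names, no_plot=True)["ivl"])

Strains that accumulate their differences on top of an earlier strain's mutations end up nested beneath it, which is the "children on a family tree" pattern described above.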

Origins of Tuberculosis

Tuberculosis is a more familiar disease in the modern world, caused by the bacterium Mycobacterium tuberculosis. For a long time it was thought that the disease came from domesticated cows, which have their own version of the bacterium called Mycobacterium bovis. The hypothesis was that, as humans had been in close association with their animals, the pathogen that affected them evolved to infect us as well. Various genetic approaches have been used to show that this probably isn’t the case, at least in the Americas. Comparing the genomes of M. tuberculosis and M. bovis shows that the human version of the disease predates that of cows4. So who is the culprit? Recent genetic analysis finds a surprising likely source for the disease in the New World: cute and cuddly seals5. Placing the genomes of M. tuberculosis found in 1,000-year-old Peruvian skeletons onto a phylogenetic tree showed that they shared a common ancestor with Mycobacterium found in seals rather than with M. bovis. Researchers now suggest that seals spread the disease across the ocean, between African hosts and the Americas.


Otis Historical Archives National Museum of Health and Medicine via Flickr

A Question of Syphilis

The origins of syphilis within Europe are murky, and wrapped up in our history of colonialism. The first known major outbreak occurred in Naples, Italy in 1494. But what happened to trigger it? One theory is the Columbian hypothesis, which points out that Christopher Columbus returned from his exploration of the Americas a few years before this outbreak and may well have brought syphilis back with him, introducing it to Europe. However, this theory is controversial and has been debated since its proposal centuries ago. Others believe that syphilis was always present, or that closely related diseases were present and mutated around that time. Trying to use historical records to prove it either way is difficult, as definitions of disease – and diseases themselves – change over time. Syphilis does leave behind bone changes, which archaeologists are able to analyse. Some skeletons which appear to have syphilis have been found in Europe dating from before Columbus made his trip, suggesting the venereal disease was there already. But diseases in the same family leave behind similar marks, making it hard to determine if these people really did have syphilis. Confusing remains are an excellent target for aDNA, which could finally tell us for sure whether Treponema pallidum (which causes syphilis) was really present in Europe earlier than that first outbreak. However, research so far has found it very difficult to recover any traces of the bacterium, which is very fragile. This story, then, ends on a hope for the future.

Genomics is a young field, as is the study of aDNA. Opening up new resources other than archaeological human remains, such as museums’ medical collections, could give us further insights into the past. Other research uses a dizzying array of sources to investigate history: parchment, paintings and clothes being some examples. With DNA sequencing technologies constantly and rapidly improving, one day we may be able to find the answer to this question and many others like it.

This article was written by Frances Osis. It was specialist edited by Joseph Yeoh and copy-edited by Katrina Wesencraft.

References:

[1] https://www.cdc.gov/plague/history/index.html
[2] https://www.theguardian.com/global-development/2017/oct/19/madagascar-plague-death-toll-reaches-74
[3] Spyrou, Maria A., et al. "Historical Y. pestis genomes reveal the European Black Death as the source of ancient and modern plague pandemics." Cell Host & Microbe 19.6 (2016): 874-881.
[4] Galagan, James E. "Genomic insights into tuberculosis." Nature Reviews Genetics 15.5 (2014): 307-320.
[5] Bos, Kirsten I., et al. "Pre-Columbian mycobacterial genomes reveal seals as a source of New World human tuberculosis." Nature 514.7523 (2014): 494-497.


It's been a while since our last issue! In the meantime, it's been all change on theGIST board. We've had new video and podcast producers, social media co-ordinators, new heads of finance and design, and of course our editors-in-chief! With that comes bundles of enthusiasm and brand new ideas for theGIST moving forward - we're back and better than ever!

If you were down at Glasgow's Riverside Museum of Transport and Travel back in September, you might have spotted some familiar faces. Some of theGIST members headed down to do their part for STEM outreach, introducing members of the public to bacteria through a microscope, and some pipe cleaners...

Another student event we're proud to be working with this year is Let's Talk About [X], a multidisciplinary conference that allows undergraduate students from the University of Glasgow to present their research. We'll be going along to the event to take a look, and also to hand out some issues of our latest magazine (hello, new readers!).

Editor-In-Chief (UofG) Gabriela De Sousa

Head of Podcasts Annabel May

TheGIST are once again proud to be one of the sponsors for TEDx University of Glasgow. The event's theme this year is 'Press Pause To Begin', and speakers will include a neuroscientist, a social activist, and a world-renowned comic book artist! We expect it will be a great event as always, and will be there to interview some of the speakers to feature on the-gist.org. If you've picked up this magazine at TEDx, hello! We hope you enjoy reading it.

In October, theGIST proudly partnered with Cancer Research UK and the Beatson Institute at the University of Glasgow to hold a charity fundraiser at the Drygate Brewery. The night featured key scientists working in cancer research, and the public got to hear first-hand about the latest ideas and developments happening in the research centre.

Editor-In-Chief (Strathclyde) Katrina Wesencraft

Head of Submissions Derek Connor

Head of Videos Aaron Fernandez

Head of Social Media Miriam Scarpa

Head of Copy Editing Kirsten Munro

Head of Social Media Kaiser Saeed

Deputy Submissions Costreie Miruna

Snippets Editor Richard Murchie

Head of Design Sara Jackson

Head of Events Daniele Guido

Head of Finance Anna Duncan

Head of Web Matt Mitchell



