Oxford & Wimbledon Leading Scholarship
Edition V: Journeys November 2019
The Journal of the Academic Scholars of Oxford and Wimbledon High Schools
Journeys

For this issue we have been inspired by the story of the Silk Road – an ancient trade route connecting East and West from the second century BCE to the eighteenth century, and a means by which ideas and culture spread and mixed. In this, our Journeys of Discovery edition of OWLS Quarterly, student writers have considered journeys both real and metaphorical, and those of ideas and things, people and cultures. Their thinking reflects a diverse field of interests: travel through time, the journey of the number zero, the journey of the steam engine through poetry, the development of the NHS, and the psychogeography of the modern-day immigrant experience (to draw out only a few of their ideas).

Ms Rachael Pallas-Brown (OHS) and Dr John Parsons (WHS) – Editors
CONTENTS

Is Time Travel Possible? ............................................................ Page 3
The Evolution of the Universe through time .......................................... Page 5
Journey of the number zero .......................................................... Page 7
The journey of the steam engine through poetry ...................................... Page 9
The invention of the printing press ................................................. Page 11
The journey of the NHS .............................................................. Page 14
Journeys of discovery – How are vaccines developed? ................................. Page 16
The journey of treating infections .................................................. Page 18
Discovering p53 ..................................................................... Page 20
The psychogeography of the contemporary immigrant experience ........................ Page 22
Journey through the healthcare insurance system in Germany and in the UK ............ Page 24
Journeys of discovery: Organ transplantation ........................................ Page 26
The discovery of DNA ................................................................ Page 28
A mission to Moscow ................................................................. Page 30
How coming-of-age literature inspires self-discovery ................................ Page 32
Does the discovery of microplastics in the deep ocean mean the plastic sea waste problem is more serious than we realise? ..... Page 34
Discovering democracy: colonial involvement in the political journey of the Democratic Republic of the Congo ..... Page 38
From vicious beasts to Man's best friend: a story of evolution ...................... Page 41
IS TIME TRAVEL POSSIBLE? Amelia Hayes (WHS)

From Hindu mythology to science fiction, time travel has always interested humanity – but why is this, and is it really possible? Time travel is described in many books and films, all with varying techniques, some seeming more plausible than others. The idea of time travel first appeared in the Hindu epic Mahabharata1, which is believed to have been written around 400 BC. In the story, King Kakudmi is struggling to decide on a suitable husband for his only daughter, as he believes that no one could be good enough for her. He therefore decides to go with his daughter to visit the god Brahma for advice. However, when he asks the god who she should marry, Brahma just laughs and says that all the possible husbands have died. He then explains that this is because time runs differently in Brahmaloka (where Brahma lives), so in the time they had waited to see him, 27 chatur-yugas (each 4,320,000 earth years)2 had passed. I believe that this rather interestingly demonstrates how humanity has always had an inherent curiosity, especially about things believed to be impossible, such as time travel. Moreover, what I find extremely surprising about this story is its similarity to the present-day idea of relativity, in which the apparent speed of time depends on your frame of reference: in the story, Kakudmi is unaware of the difference in time because in his frame of reference (Brahmaloka) it appears to be travelling at the same speed. The popularity of the idea of time travel really took off with the publication of H. G. Wells's book The Time Machine in 1895, which is in fact where the term 'time machine' comes from. Since then it has been the theme of many fiction books as well as comics, films and TV programmes. For example, in DC's Flash comics the character Eobard

1 http://the-wanderling.com/mahabharata.html
2 https://en.wikipedia.org/wiki/Yuga#/media/File:Time_Units_in_Hindu_Cosmology.png
3 https://en.wikipedia.org/wiki/Cosmic_treadmill
4 https://www.quora.com/How-does-flash-time-travel
Thawne from the future becomes Barry Allen's main baddie when he travels back in time to Barry's childhood in order to murder his mother. Thawne does this by using a cosmic treadmill, which works by generating vibrations whenever someone with super speed runs on it3, sending them back in time. However, the comics are rather vague about how this machine works, so I do not think this method would actually be possible. Other characters in The Flash, such as Wally West, manage to travel in time by travelling faster than the speed of light4, which may seem slightly more plausible. In the extremely popular BBC science fiction TV programme Doctor Who, the Doctor travels by using his machine, the TARDIS. The Doctor can time travel because he is a Time Lord: a special race of beings who have their own laws of time5. One of these laws is that there are special events in history that are fixed, such as the eruption of Vesuvius in AD 79; this reflects the idea that even small changes to history could have major effects on the present day. However, the idea of Time Lords is very much fiction, so this could not actually be a mode of time travel. As I have described above, the science in these stories is often a bit iffy, so the question is whether one could time travel without breaking the laws of physics. Each and every day we all time travel as we go through our lives – but we do this at a rate of one hour per hour6. So to travel forwards in time we would need to experience, say, only 1 year while 50 years pass for everyone else; to travel backwards we would need the reverse. In physics, time travel first seemed to become a possibility with Albert Einstein's theory of relativity, which demonstrated that one's velocity affects when and where events appear to happen.
In special relativity, Einstein stated that there is something called the space-time continuum, in which space and time are brought together into one7, so the world has four dimensions, with time being the fourth. He stated that the speed of light is constant in all frames of reference and that nothing can go

5 https://www.digitalspy.com/tv/cult/a870656/doctor-who-time-travel-rules-explained-changing-history/
6 https://spaceplace.nasa.gov/review/dr-marc-space/time-travel.html
7 https://simple.wikipedia.org/wiki/Space-time
faster than this speed (3.0 × 10^8 m/s). Therefore the way Wally West time travels in The Flash would be impossible, as an object with mass would require infinite energy to reach the speed of light. However, if you were to travel very close to the speed of light – for example at 99.5% of it – time would pass more slowly for you, and on your return you would find that the friends you left behind had aged far more than you. In 1915 Einstein put forward the theory of general relativity, the idea of curved space and time. He stated that time passes more slowly for an object in a gravitational field8. Evidence for this theory can be seen in the way the Sun's mass causes light passing close to it to bend slightly.
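The 99.5%-of-light-speed example above can be checked numerically with the Lorentz factor from special relativity. A minimal sketch in Python (the function name is my own):

```python
import math

def lorentz_factor(speed_fraction):
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2),
    for a speed given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - speed_fraction ** 2)

# Travelling at 99.5% of the speed of light, as in the example above:
gamma = lorentz_factor(0.995)
print(round(gamma, 1))  # roughly 10: one year on board = about ten years on Earth
```

So a traveller who spends one year of their own time at this speed returns to find about ten years have passed for the friends they left behind, just as described above.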
Figure 1: An illustration of general relativity (source: https://blackholecam.org/wp-content/uploads/2016/07/general_relativity.jpg)
Space-time is warped far more near black holes, as they have much higher gravitational fields. However, if we were to go through one we would be 'spaghettified' – all our atoms would be pulled apart – so even if black holes could allow for time travel, humans would never actually be able to go through. However, we could form something called a wormhole by warping space-time. Wormholes can connect places across the galaxy, acting as shortcuts through space-time: by going through a wormhole you momentarily exit the universe and then appear at another location and time. In order to create a wormhole you have to warp space-time in the opposite direction to gravity, and to do this you need matter that has negative energy and mass. Quantum theory allows for negative energy as long as there is positive energy somewhere else in the universe. It allows this because of the uncertainty principle, proposed by Heisenberg, which states that the velocity and position of an object cannot both be measured accurately at the same time9. This means that empty space cannot have values of exactly zero, as then both would be known precisely; people therefore interpret this to mean that there are fluctuations in empty space, caused by particle and antiparticle pairs appearing and then annihilating. We can then use the Casimir effect, in which two parallel metal plates a small distance apart act as mirrors for the particles and antiparticles, so the space between them only admits waves of certain frequencies. This means that between the plates there are fewer vacuum fluctuations, so the region has a lower energy density than outside the plates – and since outside is empty space with an energy density of zero, the region between the plates must have negative energy. In conclusion, this means that time travel could become possible in the future. However, this raises the question: if it does become possible, why have we not been visited by people from the future? Stephen Hawking rather humorously demonstrated this problem in 2009 by hosting a party for time travellers (he sent out the invitations after the party) – and no one came.
8 https://spaceplace.nasa.gov/review/dr-marc-space/time-travel.html
9 https://www.britannica.com/science/uncertainty-principle
THE EVOLUTION OF THE UNIVERSE THROUGH TIME Jessica Saunders (OHS)

We all know that the birth of the universe started with the Big Bang. But how exactly did it get from the beginning to now? And what's in store for it in the future? Although the universe is generally accepted as being pretty much infinitely large, it started out 13.77 billion years ago as an extremely hot singularity that expanded faster than the speed of light. During the first picosecond, the four fundamental forces emerged in the following order: gravitational, strong, weak, then electromagnetic. Space expansion and supercooling followed as well; it is believed that the supercooling took place because the strong and weak interactions separated. The next phase, which lasted 377,000 years, introduced subatomic particles. At one second, neutrino decoupling occurred (neutrinos stopped interacting with baryonic particles, and therefore stopped influencing the dynamics of the universe). Protons and neutrons started to form, and at 3 minutes nucleosynthesis fused about 25% of the protons and neutrons into heavier elements, mainly helium-4. The universe then cooled sufficiently for atoms to form, and became transparent for the first time. The Dark Ages succeeded the early universe epoch, lasting until roughly the 1-billion-year mark. This period is called the Dark Ages because there were no photons in the universe except those produced by photon decoupling (low-mass elements reached their ground state, releasing photons in the process and providing us with the oldest picture of the universe: the cosmic microwave background [CMB]). The decoupled photons redshifted from a pale orange glow to non-visible wavelengths, leaving the universe dark until the first stars formed. Hydrogen clouds started to collapse to form stars and galaxies at around 400–700 million years, and large structures like dark matter filaments started to form. The earliest stars
(Population III stars) exploded at the end of their lives in spectacular supernovae, producing the first heavier elements that led to the shaping of the universe we see today. Stars formed galaxies, which formed clusters, superclusters, then filaments, creating a network of intricate structures. At first, after inflation, the universe was radiation dominated: the dynamics of the universe were set by radiation (mainly photons and neutrinos). Later, matter dominated the universe – its energy density exceeded the energy density of radiation. However, as we know now, dark energy constitutes the majority of our universe. Since the 9.8-billion-year mark, when dark energy became prevalent, the universe's rate of expansion has been increasing, which gives rise to multiple theories of the universe's demise. And here we are now, 4.358 × 10^17 s since the birth of our fine-tuned-for-life universe. But the story has not ended yet. In 100 trillion years the Degenerate Era will follow, in which star formation ceases. Some Grand Unified Theories (GUTs) suggest that the proton will decay at some point in the distant future. Substellar objects such as planets will decay to hydrogen, releasing energy in the process. Eventually all matter will become leptons and gamma-ray photons through proton decay. All black holes will eventually evaporate, producing photons, gravitons, and electrons and positrons (which will instantly annihilate). Eventually, these will be the only particles in the universe. There may then be an entropy decrease (which violates the second law of thermodynamics) by the Poincaré recurrence theorem, or through thermal fluctuations. If the vacuum state decays into a lower energy state (only possible if it is currently a false vacuum), macrophysics will cease to exist and quantum physics will prevail. Eventually a "heat death" will follow.
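The age quoted above in seconds is simply the universe's age in years converted to seconds, which is easy to verify (a sketch; the rounding of the constants is my own):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year: about 3.156e7 seconds

age_in_years = 13.77e9                  # age of the universe used earlier in the article
age_in_seconds = age_in_years * SECONDS_PER_YEAR

print(f"{age_in_seconds:.3e}")          # about 4.345e+17 s, within 1% of the figure quoted
```

The small difference from the quoted 4.358 × 10^17 s comes from the precise age and year length one assumes.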
To avoid the heat death, the universe would have to undergo random quantum tunnelling and quantum fluctuations. This means that the possibility of another Big Bang is non-zero (but it would happen in approximately 10^(10^(10^56)) years). Other theories for the end of the universe include the Big Rip, Big Crunch, Big Bounce, and the False Vacuum Collapse Theory.
The Big Rip occurs if phantom dark energy exists (phantom dark energy has a negative equation of state: it has a more negative pressure than the cosmological constant [the energy density of space], and has negative kinetic energy), leading to a steady increase in the Hubble constant H0. This means that all matter will disintegrate into elementary particles and radiation (the phantom dark energy causes the rate of change of the acceleration – the jerk – of the expansion of the universe to increase), ripped apart by the phantom energy force. The False Vacuum Collapse Theory involves the Higgs field, which permeates the universe and varies in strength based on its potential. A true vacuum exists when the universe is in its lowest energy state. If a false vacuum exists, i.e. the universe is not in its lowest energy state, it could undergo vacuum decay (quantum tunnelling into a lower energy state). This would change the physical constants, including the universal gravitational constant G, the charge on an electron and the fine structure constant, affecting the foundations of energy, matter and even space-time. All structures in the universe would be obliterated instantly. Scarily, this change in the Higgs field could occur at any moment. However, the probability is close to nil, and if it is to happen, it probably will not be for a very long time. The universe has undergone the biggest and longest journey in all of history, but its journey is not yet complete. Cosmologists have come up with multiple theories that give us ideas about the start of the universe (and before) and its end (and possibly after), but we can never be 100% sure about anything. Therefore, the truth about the beginning and end of this journey shall remain undiscovered…
Bibliography
https://www.space.com/13219-photos-big-bang-early-universe-history.html
https://en.wikipedia.org/wiki/Chronology_of_the_universe
JOURNEY OF THE NUMBER ZERO Elena Gupta (WHS)

Zero means nothing – how can nothing mean something? We live in a world in which the position of a digit denotes its value. In this decimal system, the number zero has two main purposes: to represent an empty position (a placeholder) and to represent "nothing". Before the introduction of the number zero and a place value system, writing very large numbers was a problem. Long before the concept of zero was fully developed, the Babylonians (around 1770 BC) had a sexagesimal (base 60) system, written here with commas separating the positions. For them 1,25 would mean 1 × 60 + 25 = 85, and 1,,25 would mean 1 × 60² + 25 = 3625. Because they simply left a gap to indicate an empty position, this was not only quite confusing but also increased the chance of misunderstanding; most calculations needed context to be understood. However, by about the time of the conquest by Alexander the Great (331 BC), they had developed a special sign consisting of two small wedges placed obliquely, invented to serve as a placeholder where a digit was missing. This zero symbol did not end all confusion, though, as there is no evidence of the sign appearing at the end of a number, which implies that the Babylonians never achieved an absolute positional system. The Mayans (around 36 BC) were the first to fully utilise the principle of a place value system. They had a vigesimal (base 20) positional system, and used a half-quatrefoil shape to symbolise zero. By the 4th century, the system changed to base 10, and revolved around characters resembling a knotted cord; zero was represented by the absence of a knot. This custom did not spread beyond Mesoamerica. Around 130 AD, Ptolemy used the Babylonian sexagesimal system, and also 0 as an empty placeholder, in his work the Almagest. The symbol O for zero was taken from the Greek
word "ouden", meaning nothing. The book concentrated on astronomy, and displayed much of Hipparchus' work on the geocentric model of the solar system. This use of zero, however, was limited to showing fractions of time (minutes and seconds). The zero as we know it today was first developed in India. For the first time, "nothing" was treated as a number with a value of its own. A renowned mathematician by the name of Varahamihira used the place value system containing zero many times in his book Panchasiddhantika (written in 575 AD); this is believed to be the first use of zero as it is used today. It was then called "Sunya", meaning void or empty. It was the Indian mathematician Brahmagupta who, in his book Brahma-Sphuta-Siddhanta (written in 628 AD), defined zero, writing:

1. The sum of zero and a negative number is negative; the sum of a positive number and zero is positive; the sum of zero and zero is zero. 0 + (-a) = -a, a + 0 = a, 0 + 0 = 0
2. A negative minus zero is a negative. A positive minus zero is a positive. Zero minus zero is zero. A negative subtracted from zero is a positive. A positive subtracted from zero is a negative. (-a) - 0 = -a, a - 0 = a, 0 - 0 = 0, 0 - (-a) = a, 0 - (a) = -a
3. The product of zero multiplied by a negative or positive is zero. The product of zero multiplied by zero is zero. 0 × (±a) = 0, 0 × 0 = 0
4. Zero divided by zero is zero. 0/0 = 0

Additionally, Mahavira, an Indian mathematician, wrote about operations with zero in his Ganita Sara Samgraha (800 AD): "When any number is multiplied by zero, the result is zero; when zero is added to or subtracted from any number, the result is always the same." a + 0 = a, a - 0 = a, a × 0 = 0. The first record of the public use of zero can be found in an inscription on a stone tablet in the
town of Gwalior in India (written around 876 AD), where the numbers 270 and 50 are written much as we write them today; the only difference is that the 0 is smaller and slightly raised. Zero then made its way to the Arab empire under the name "sifr", meaning vacant, where it was used by Al-Khwarizmi in his book On the Calculation with Hindu Numerals (around 830 AD), one of the oldest surviving manuscripts using Hindu numerals. The terms algebra and algorithm were later derived from his work and name during translations of the Arabic texts into Latin. Al-Khwarizmi is often referred to as the "Father of Algebra" for his strong belief in the Indian numerical system, which completely revolutionised the Islamic and Western worlds through the use of the numbers 1 to 9. He is held responsible for popularising the number notation that we use today. Zero was finally introduced to Europe by Fibonacci, who started using it in his calculations after his travels to the East. In his book Liber Abaci he rendered the Arabic word for zero as "zephirum", which later became "zero". He stated: "The nine Indian figures are: 9 8 7 6 5 4 3 2 1. With these nine figures, and with the sign 0 ... any number may be written." His use of the word sign emphasises how the number zero was used for arithmetic, for operations like addition and multiplication. It came into general use in European mathematics only after the 12th century. This late incorporation of zero is largely due to belief in Aristotelian doctrine, which was used to prove the existence of God. The doctrine did not acknowledge the void or infinity, and so did not acknowledge the number zero. Christianity agreed with Aristotle's view, and questioning him meant questioning the existence of God. In the 1400s, zero was popularised throughout Europe by merchants using it illegally to help with their trade. When it arrived in England, it was given the name "cipher". This concept of emptiness became fundamental in the 1600s.
Sir Isaac Newton and Gottfried Wilhelm Leibniz developed calculus by working with quantities approaching zero. Without zero we would not have the basis of physics, engineering and many aspects of computing. Algebra, algorithms and calculus – three pillars of modern mathematics – are all the result of a notation for nothing.
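The two ideas this article traces – zero as a placeholder in a place-value system, and zero as a number with its own arithmetic – can both be illustrated in a few lines of Python (a sketch; the function name is my own):

```python
def digits_to_value(digits, base):
    """Interpret a list of digits, most significant first, in the given base."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

# The Babylonian sexagesimal examples from earlier in the article:
print(digits_to_value([1, 25], 60))     # 1*60 + 25 = 85
print(digits_to_value([1, 0, 25], 60))  # 1*60**2 + 0*60 + 25 = 3625
# Without a 0 digit to hold the empty middle position, the digit lists for
# 85 and 3625 would look identical - exactly the Babylonians' ambiguity.

# Brahmagupta's rules for zero in addition, subtraction and multiplication:
a = 7
assert a + 0 == a and 0 + (-a) == -a and 0 + 0 == 0
assert a - 0 == a and 0 - (-a) == a and 0 - a == -a
assert 0 * a == 0 and 0 * 0 == 0
```

Brahmagupta's rule for division by zero is deliberately left out, since division by zero remains undefined in modern arithmetic.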
Bibliography
Mallika Singh – "Zero's Journey from Nothing to Everything", Soulveda, Across Cultures
Amir Aczel (2014) – "The Origin of the Number Zero", Smithsonian.com
Luke Mastin (2010) – "Indian Mathematics", The Story of Mathematics
Evelyn Lamb (2014) – "Ancient Babylonian Number System Had No Zero", Roots of Unity, Scientific American
Njord Kane (2016) – "The Ancient Maya Understood the Value of Zero", Readicon.com
THE JOURNEY OF THE STEAM ENGINE THROUGH POETRY Laura Fletcher (WHS)

The steam engine was transformed in the 1770s by the inventor and mechanical engineer James Watt, whose improved designs arrived as the industrial revolution stimulated developments in engineering. In 1784, William Murdock developed a steam carriage powered by a high-pressure engine, and later shared his ideas with his neighbour Richard Trevithick, who went on to build locomotives. Swansea was home to the first fare-paying passenger railway service, opened in 1807 and known as the Swansea and Mumbles Railway, as steam replaced horse-drawn transport. Since then, poetry has documented the intrigue and anxieties of British people as their landscapes and traditional ways of life changed dramatically.

Not in vain the distance beacons. Forward, forward let us range,
Let the great world spin for ever down the ringing grooves of change.
Thro' the shadow of the globe we sweep into the younger day;
Better fifty years of Europe than a cycle of Cathay.

In Alfred Lord Tennyson's dramatic poem Locksley Hall (published 1842), as the protagonist muses on the beauty of civilisation and progress, we gain an insight into Tennyson's own view of the new railways of Britain. The poem contains 97 rhyming couplets in trochaic octameter (eight metrical feet, each consisting of one stressed syllable followed by an unstressed one), with the final unstressed syllable eliminated – a metre that seems to advance with the progress of modernisation and the movement of the trains. There is an urgency in the repeated 'forward' and the active verbs 'spin' and 'sweep'. 'Grooves', however, is an error in Tennyson's metaphor for life as a train journey. This is on account of Tennyson's
mistaken belief that trains ran in grooves rather than on rails; he later admitted, 'It was a black night and there was such a vast crowd round the train at the station that we could not see the wheels.' Still a recent invention, the railway held much mystery for the British public. Two years later, on 15th October 1844, the poet William Wordsworth wrote to William Gladstone, then President of the Board of Trade, to oppose the proposal for a Kendal and Windermere railway, and with his letter he enclosed this sonnet:

And is no nook of English ground secure
From rash assault? Schemes of retirement sown
In youth, and 'mid the busy world kept pure
As when their earliest flowers of hope were blown,
Must perish; - how can they this blight endure?
And must he too his old delights disown
Who scorns a false utilitarian lure
'Mid his paternal fields at random thrown?
Baffle the threat, bright scene, from Orrest head
Given to the pausing traveller's rapturous glance;
Plead for thy peace thou beautiful romance
Of nature; and, if human hearts be dead,
Speak, passing winds; ye torrents, with your strong
And constant voice, protest against the wrong!

Reacting to the 'railway mania' of the mid-1840s, characterised by frenzied investment in railways and the construction of approximately 9,000 miles of track, Wordsworth hoped to use his influence as the recently appointed Poet Laureate to protect the rural beauty of the Lake District. The critic Nicholas Dames recognised a nostalgia in 19th-century literature as writers were 'struggling to transform the chaos of personal recollection into what is useful, meaningful, able to be applied to the future', and the concern of what is 'utilitarian' is key in the poem. Wordsworth's letter argues that the railway serves no useful purpose, as bringing travellers into the district would destroy the beauty they had come to enjoy.
Wordsworth also draws on the patriotism of the public in his assertion of the land as 'English ground', and uses both idealised natural imagery and the form of the traditional Shakespearean sonnet to
embody the essential Englishness he means to protect. The satirical Punch magazine pointed to the practical dangers of early train travel in The Railway Nursery Rhymer, published in 1852.

Air – Hush-a-by Baby
Rock away, passenger, in the third class,
When your train shunts a faster will pass;
When your train's late your chances are small –
Crushed will be carriages, engine, and all.

Air – Dickory, Dickory, Dock
Smashery, mashery, crash!
Into the "Goods" we dash:
The "Express," we find,
Is just behind –
Smashery, mashery, crash!

These humorous parodies of children's nursery rhymes reveal the accidents and inefficiencies aboard early steam trains at a time when health and safety had not caught up with modern engineering. The former presents the dangers of third-class travel, as accidents were common when faster trains passed slower ones. The latter, a dramatic train crash, undermines the view of steam trains as efficient and commercially useful. Whilst many 19th-century poets conveyed the chaos and danger of British modernisation, Edward Thomas recorded a moment of tranquillity at a train station in his poem Adlestrop. Recalling an unscheduled stop at Adlestrop station in Gloucestershire six weeks before the outbreak of the Great War, Thomas writes:

…What I saw
Was Adlestrop – only the name
And willows, willow-herb, and grass,
And meadowsweet, and haycocks dry,

The listing of British flora conveys an almost Wordsworthian reverence for the British landscape, and yet the steam train in this poem provides opportunities for discovery of the countryside, perhaps suggesting that the coexistence of the old and new is possible. In contrast, Wilfred Owen's The Send-off (1918) presents a very different view of the railway, presenting the horrors of war as soldiers 'lined the train with faces grimly gay':
Then, unmoved, signals nodded, and a lamp winked to the guard.

Whilst at a training camp in Ripon, Owen wrote this poem, which shows trains and machinery as agents of war. The sinister personification of the signals and lamp, winking and nodding, suggests that men such as the guard are working in collusion with technology; yet the technology is 'unmoved' and emotionally detached from the brutality of war. As railways changed in their uses and public opinion of trains altered throughout the 19th and 20th centuries, the contemporary poetry changed too. Whilst modernisation brought anxieties for the British public, poetry helped people to process these rapid changes to their lives and landscapes.
Bibliography
Day, Aidan (ed.) (2007). Alfred Lord Tennyson: Selected Poems. London: Penguin Books Ltd
Curzons, Rachael (2014). "Remembering the Railway: Locating Nostalgia in Wordsworth's 'Suggested by the Proposed Kendal and Windermere Railway' and the Creation of the Musée d'Orsay". https://open.conted.ox.ac.uk/sites/open.conted.ox.ac.uk/files/resources/Create%20Document/VIDES%202014%20section%20028%20Rachael%20Curzons.pdf
Green, Mark (2010). "Punch and Victorian Railway Poetry". https://blog.railwaymuseum.org.uk/punch-and-victorian-railway-poetry/
Cooke, William (ed.) (1997). Edward Thomas: Everyman's Poetry
Walter, George (ed.) (2006). The Penguin Book of First World War Poetry. London: Penguin Books
Wilfred Owen, 'The Send-off'
THE INVENTION OF THE PRINTING PRESS Rosie Leeson (OHS)

Almost 600 years ago, Gutenberg created his version of the printing press – an invention that has been described as 'one of the most influential ... in the second millennium.'10 In a world where access to literature, newspapers and other forms of media has become an accepted normality, it can be difficult to imagine a time in history when the majority of the population could only experience these writings through word of mouth.11 The printing press made easy access to current thoughts and ideas available to a whole new audience, allowing ordinary people across the globe to make their own journeys of discovery in literature, religion, politics and more, for the very first time. It is important to understand that methods of printing had been around for hundreds of years prior to the introduction of Gutenberg's invention in the 1430s. East Asia had practised its own methods of printing since the Tang dynasty in the 7th and 8th centuries, and woodblock printing was firmly established in Europe by the 14th century.12 However, all of these methods were laborious, time-consuming and expensive, meaning that the books themselves were 'rare and very costly.'13

10 Wikipedia, "Printing Press", last modified 22nd December 2018, https://en.wikipedia.org/wiki/Printing_press
11 Lucien Febvre and Henri-Jean Martin, The Coming of the Book: The Impact of Printing 1450-1800 (London: Verso, 1997), p. 23
12 Wikipedia, "Printing Press", last modified 22nd December 2018
13 Roy Strong, The Story of Britain (London: Pimlico, 1998), p. 150
Until the late 14th century, animal skins were used instead of paper as the medium for writing, meaning the process of rearing the animals and then preparing their skins was required for every single book produced.14 Printing in Britain was particularly unproductive at the time, with almost every English text being ‘laboriously copied by hand’15 by ‘scribes in monasteries or other workshops,’16 who formed part of the minority of Britain’s literate population. The fact that the majority of the population was illiterate explains further why news and stories had to spread by word of mouth in the 14th and 15th centuries - people were unable to understand any text put before them. Whilst this issue could have been eased by introducing texts of simple language and vocabulary into circulation, the fact that only minimal numbers of texts could be produced each day, and that the majority were written in Latin, only exacerbated the situation further. Up to the mid-15th century, reading was firmly established as an occupation of the elite in society. Gutenberg’s printing press is thought to have been developed around 1439, and consisted of moveable components (such as letters and punctuation) being placed in a certain order and then pressed onto a sheet of paper.17 Although arranging the order of the moveable pieces was initially a long process, the arrangement could be kept and
14 Lucien Febvre and Henri-Jean Martin, The Coming of the Book: The Impact of Printing 1450-1800, p. 30
15 Heather Whipps, “How Gutenberg Changed the World”, Live Science, May 26th 2008, https://www.livescience.com/2569-gutenberg-changed-world.html
16 Roy Strong, The Story of Britain, p. 150
17 Wikipedia. 2018. “Printing Press.” Last modified 22nd December 2018.
used again, resulting in a system of mass printing.18 Naturally, this meant that the skill and time required to print a book drastically decreased, explaining why the machine was regarded as such a huge innovation at the time. The system was so effective that ‘by 1500, printing presses in operation throughout Western Europe had already produced more than twenty million volumes.’19 However, the significance of an invention cannot be based merely upon the extent to which it increases efficiency and lowers expense; it must also be seen in the impact that it has upon society as a whole.
Importantly, by the time that Gutenberg invented his printing press, there was a growing desire for easier access to books and texts. This can be seen in England, where much frustration was felt by ‘a growing, literate middle class, who had limited access to the written word.’20 This middle class had grown since the introduction of universities in the 13th century, and it was these institutions that required more manuscripts as they developed in size and influence.21 It was William Caxton who brought the art of printing to England in 1476, and the success of his new business can be seen in the fact that ‘as many as four or five hundred copies [of his first book, The History of Troy] were printed.’22 Similarly, back in Europe, ‘every copy [of Gutenberg’s first set of Latin Bibles] had been pre-sold before he’d even set the last page’.23 The invention of the printing press had therefore come at a time when society was ready to harness the power that increased knowledge and information would give it. Nevertheless, it is not for fulfilling the wishes of an emerging middle class that Gutenberg’s printing press went down in history. It is the impact that the printing press had on uneducated, ordinary people that really explains why it is an invention still remembered today.
The availability of cheaper books meant, as Roy Strong states, that ‘The knowledge books contained could...be spread far wider, reaching new audiences, as more people than ever before learned to read’.24 The ‘new audiences’ were people of rank and status below the ‘literate elite’, who had never previously had access to this sort of thinking.25 This meant that 15th-century Europe could boast a population more cultured and educated overall than any in hundreds of years. It was the types of book being printed which did the most to achieve this. When books had been regarded as rare and valuable objects, the monasteries had placed far more emphasis on the copying of religious texts, such as the Bible and certain prayers. However, the rapidity with which books could be made by the printing press meant that people had
18 Heather Whipps, “How Gutenberg Changed the World”, Live Science, May 26th 2008.
19 Wikipedia. 2018. “Printing Press.” Last modified 22nd December 2018.
20 Heather Whipps, “How Gutenberg Changed the World”, Live Science, May 26th 2008.
21 Lucien Febvre and Henri-Jean Martin, The Coming of the Book: The Impact of Printing 1450-1800, p. 29
22 Roy Strong, The Story of Britain, p. 154
23 Heather Whipps, “How Gutenberg Changed the World”, Live Science, May 26th 2008.
24 Roy Strong, The Story of Britain, p. 150
25 Wikipedia. 2018. “Printing Press.” Last modified 22nd December 2018.
the opportunity to expand the range of texts being printed. For example, William Caxton printed a variety of ‘romances, school textbooks, a phrase book for travellers, lives of the saints, a history of England and, the most famous of all, Chaucer’s Canterbury Tales.’26 This meant that people were able to experience a broader range of contemporary views and opinions about society, rather than just those focused on religion. This in turn ‘stimulated and spread new ideas quicker than ever.’27 The most vivid example of this is the ‘explosion of the Renaissance movement’ which occurred from 1300 - 1600; ‘a fervent period of European cultural, artistic, political and economic “rebirth” following the Middle Ages.’28 These ideas were only able to spread so widely through the literature and media being churned out by the printing press at the time. Similarly, the Protestant Reformation, inspired by Martin Luther’s Ninety-Five Theses, was only able to spread when he created ‘multiple copies [of his ideas] to hand out elsewhere.’29 It can easily be argued, therefore, that the printing press did not just play a role in making education available to all; it also made it possible for some of the most significant events of the early modern era to take place.
For this reason, Gutenberg’s printing press can be classed as one of the most significant inventions in modern history. Moreover, it is a creation that still affects us today. ‘Although calendars, maps, time-tables, dictionaries, catalogues, textbooks and newspapers are taken for granted at present...they continue to exert as great an influence on daily life as ever they did before,’ meaning that we owe the way in which we live now to an invention created in the 15th century.30 Printing is a powerful way of spreading thoughts and ideas, and will be the means of allowing people to make their own journeys of discovery for many years to come.

Bibliography
Wikipedia. (2018). Printing Press. https://en.wikipedia.org/wiki/Printing_press
Febvre, L. and Martin, H.-J. (1997). The Coming of the Book: The Impact of Printing 1450-1800. London: Verso
Strong, R. (1998). The Story of Britain. London: Pimlico
Whipps, H. (May 26th 2008). How Gutenberg Changed the World. Live Science. https://www.livescience.com/2569-gutenberg-changed-world.html
History.com Editors. (21st August 2018). Renaissance. HISTORY. https://www.history.com/topics/renaissance/renaissance
Eisenstein, E. (1979). The Printing Press as an Agent of Change. Cambridge: Cambridge University Press
26 Roy Strong, The Story of Britain, p. 154
27 Heather Whipps, “How Gutenberg Changed the World”, Live Science, May 26th 2008.
28 History.com Editors, “Renaissance”, HISTORY, Last modified 21st August 2018, https://www.history.com/topics/renaissance/renaissance
29 Heather Whipps, “How Gutenberg Changed the World”, Live Science, May 26th 2008.
30 Elizabeth Eisenstein, The Printing Press as an Agent of Change (Cambridge: Cambridge University Press, 1979), p. 17
THE JOURNEY OF THE NHS Maya Patel (WHS) The National Health Service was a plan brought about after World War 2 to provide free healthcare for everyone, regardless of wealth or status. Aneurin Bevan, the government’s health minister, proposed the idea of a good healthcare system that would be available to all despite the poverty that followed the war. The first hospital to offer free healthcare was the Park Hospital in Manchester, opened on the 5th July 1948. For the first time all areas of healthcare were free - from doctors to pharmacists and opticians - and they were all brought together under one big umbrella organisation that offered a service free at the point of delivery. The NHS was based on new principles, which included being wholly funded from taxation; a fundamental part of the idea was that taxation varied depending on income, with the wealthy paying more and the poorer paying less. Another principle was that any person, a resident of the UK or otherwise, was entitled to NHS care. This meant that the NHS was truly open to anyone, with no exclusions. While everyone wanted the NHS scheme to work when it began in 1948, there were still shortages of food and fuel, and an inflated economy. After the havoc wreaked on buildings by the bombings, there were not enough materials around to rebuild, and whatever there was went towards schools and houses. Despite there being only a few consultants spread across the counties, and open hospitals confined to the big cities, pharmacology was booming. With new post-war technology, ‘Antibiotics, better anaesthetic agents, cortisone, drugs for the treatment of
mental illnesses such as schizophrenia’ had become available to the public. [1] A few years earlier, in 1953, James Watson and Francis Crick had transformed the study of disease by discovering the structure of DNA and showing how the genetic information it carries encodes different structures, proteins and chemicals. This was one of the first steps that helped the NHS make the great medical advances that would, over time, combat many diseases and eradicate many pathogens. In 1968 the measles vaccine was introduced for children; before the vaccine came out, around 500,000 cases of measles were occurring every year. Surgery was also advancing within the NHS, and in the same year the first British heart transplant took place. Although the patient died only 46 days later, it was the first major organ transplant here, with only 6 more heart transplants carried out over the next 10 years. It wasn’t until better immunosuppressant medication was brought out - so that the new organ tissue wouldn’t be broken down because of its foreign markers - that transplant operations became more regular. [2] In 1980 the first keyhole surgery took place. The surgeon made minor incisions and then ‘used a telescopic rod with a fibreoptic cable to remove a gallbladder’ [1]. This was the first successful instance of keyhole surgery, a great step for the NHS because the procedure requires only minimal access to the body. Minimal access surgery means that smaller incisions are made on the body rather than larger ones. While the technique is difficult to learn, it offers the patient a shorter recovery time, because there is less damage to heal from, and it therefore frees up hospital beds [3]. Since the 2000s the NHS has made great clinical advances in genetic and stem cell research.
Robotics, having also made an entrance into the surgical theatre, has
increased the financial stress on the NHS. The Da Vinci robot was the first surgical platform that allowed surgeons to operate by remote control, which meant that surgery had now become more precise. The downside of the new machine was that it cost £2 million per machine, and since 2001 the NHS has had one in nearly every hospital. The exorbitant price of the robot is matched by the benefits it brings: a faster recovery rate, which then leads to a shorter hospital stay. There is a reduced risk of infection and a lower need for blood transfusion when using the machines, so less money is spent treating individual patients [4]. Moving forward, the NHS is partnering with clinicians to study the human genome and work on personalised medicine. The concept of personalised medicine is that by using diagnostics to study an individual’s DNA, rare diseases can be predicted before they occur and then treated in a way specific to the person they affect [5]. New technology means that this idea will come to fruition sooner rather than later. This process has already begun with CRISPR-Cas9, a technique that allows genome editing inside the body. Cystic fibrosis is a chronic disease caused by a single mutated gene that leads to the airways inside the lungs being blocked by mucus and trapped bacteria, causing chronic inflammation and making it hard to breathe. [6] An experimental trial for the treatment of cystic fibrosis used the technique to correct the mutated gene. Since WW2, the NHS has made many advances in all areas, from research to pharmaceuticals and medical procedures. The NHS has changed greatly from its beginnings to now, with new technology and insight allowing medicine to become ever more precise in its diagnosis and treatment.
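The idea of correcting a single mutated gene, as in the cystic fibrosis work described above, can be pictured with a toy model in Python. This is purely illustrative: real CRISPR-Cas9 uses a guide RNA to direct the Cas9 protein to cut DNA at a matching site, and cellular repair machinery then makes the change. The function name and sequences here are invented for the sketch.

```python
def correct_mutation(genome, target, healthy):
    """Toy model of targeted gene correction.

    Locate the region matching `target` (standing in for a
    guide-matched site) and replace it with the `healthy` allele.
    Real genome editing is biochemical, not string replacement.
    """
    i = genome.find(target)
    if i == -1:
        return genome  # target site not found; nothing is edited
    return genome[:i] + healthy + genome[i + len(target):]

# Example: replace a (made-up) mutated stretch with the healthy one.
edited = correct_mutation("ATGGTTCCA", "GTT", "GAT")
```

The point of the sketch is simply that the edit is targeted: only the region matched by the guide sequence changes, and the rest of the genome is left untouched.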
Bibliography
[1] Rivett, Geoffrey. National Health Service History
[2] Hunt, Sharon A. (2016). The changing face of heart transplantation
[3] Columbia University Irving Medical Center (2019). Center for Advanced Surgery
[4] Eddy, Ben. About the Da Vinci Robot
[5] NHS. Personalised medicine
JOURNEYS OF DISCOVERY - HOW ARE VACCINES DEVELOPED? Sara Lyden (OHS) One of the most important lines of defence for the population’s health is the vaccine. Vaccines play a vital role in modern life, but how are they developed and how can we ensure they are safe? The first step in developing a vaccine involves finding the antigen of the disease and understanding it; this exploratory stage often takes between two and four years to complete. The antigen is found by inducing an immune response, and in order to do this the pathogen is grown and harvested. Cell cultures are used to grow a viral sample, while bacterial samples are grown in a nutrient medium for optimum yield (while maintaining the antigen’s full characteristics and structure). Once the antigens are understood, the vaccine concept must be decided. The four most common vaccine concepts are: live-attenuated vaccines, inactivated vaccines, toxoid vaccines and subunit/recombinant/conjugate/polysaccharide (specific part of the pathogen) vaccines. In the future, it is hoped that DNA vaccines and recombinant vector vaccines can be used as well. The vaccine is then created. For the initial stages of vaccine development, viral and bacterial pathogens are approached slightly differently. Small quantities of a viral pathogen are grown in many different cell types, for example cell lines or chicken embryos, both of which have rapid cell reproduction. Bacterial vaccines, on the other hand, are grown in a bioreactor, a process very similar to the fermentation of yeast. The antigens are then manufactured within bacteria (or sometimes yeast). At this point, scientists have antigens within cells, and they need to be isolated. Scientists aim to obtain as many viral and bacterial antigens as possible, while excluding all of the growth medium from the composition of the cell. Once the antigen has been obtained, it must be purified.
Procedures such as chromatography and ultrafiltration can be used; this is particularly important if the
antigens were initially grown from recombinant proteins. This process sometimes inactivates the antigen. After that, an adjuvant must be added. An adjuvant is used to enhance the immune response to the antigen; its effect is non-specific. Sometimes stabilisers or preservatives are also added in order to safely use multiple-dose vials of the vaccine. The components of the vaccine are mixed together, inserted into syringe packages, sterilised and labelled. Sometimes vaccines can be freeze-dried and then rehydrated when needed. Before the vaccine is widely used, it is tested rigorously, and the manufacturing practice standards are refined. Before phase I of clinical testing, pre-clinical testing must take place. As part of the final pre-clinical stages, some vaccines are tested on animals so that the human cellular or immune response can be predicted. Once the vaccine has been developed in a pre-clinical setting, it is introduced into a clinical development phase. The clinical trials are divided into three phases: phase I, phase II and phase III. Phase I clinical trials are the earliest trials conducted on humans, with only a small handful of volunteers receiving the vaccine. In this phase, controlled trials (for example with placebo vaccines) are not commonly used, as they can compromise the evaluation of the safety of the vaccine, when the main purpose of phase I is to ensure the vaccine is safe and the correct dosage is calculated. Phase I is also less randomised, because the vaccine needs to be tested on many demographics, for example different age or population groups, as they may react differently to the dosage, the vaccine itself, or even the route of administration. It is common for researchers to take blood samples or check liver function once the vaccine has been administered, to record a baseline result for future trials and evaluation of the vaccine.
It is important to note that the initial recipients of the vaccine are very closely monitored, and the administration of the vaccine is staggered so that, if a problem occurs, not all of the volunteers are put at risk. The difference between phase I and phase II is that phase II is larger scale, more randomised and more controlled. Phase II aims to confirm the immunogenicity (ability to provoke an immune response) of the vaccine, as well as proving its overall safety. Phase II trials also aim to establish an accurate dose calculation, the correct route of vaccine administration (as shown in figure 1) and the interval between doses (if required). Before moving into phase III, the results from phase II are studied carefully; once the success of phase II has been confirmed, phase III may commence. In order for this to happen, phase II is often repeated multiple times with different demographics so that the vaccine’s characteristics can be completely understood.
Once the vaccine has been tested and has passed all of the clinical phases, it is licensed. After the vaccine has been licensed, it begins another journey through its role in defending the population’s health. It is important to note that, even once licensed, the vaccine is monitored periodically for its safety and effectiveness.
Bibliography
http://www.euvaccine.eu/vaccines-diseases/vaccines/stages-development , date accessed 13/02/2018, by the European Vaccine Initiative
https://www.historyofvaccines.org/content/how-vaccines-are-made , date accessed 13/02/2018, by the College of Physicians of Philadelphia
[Figure 1: routes of vaccine administration, by WHO] Intranasal vaccines are another route of vaccine administration, used for example in some flu vaccines. Like oral administration, no needle is required. Phase III is the final round of testing: a large-scale trial designed to collect data on the vaccine’s efficacy and safety. Phase III trials are generally conducted as randomised double-blind controlled trials, as this design controls for other variables affecting disease risk. Double blinding (where neither the patient nor the researchers or medics know who is being administered the real vaccine and who the placebo) is essential, as it removes bias; this maximises the chance that any difference in the outcomes of the two groups (the control group and the test group) is caused by the vaccine, making this an effective method of evaluation. The control group and test group are randomised for statistical analysis, as there may otherwise be bias; however, occasionally certain demographics are randomised within themselves, for example by age or geographical location, to obtain a more rounded overview of the vaccine. In some circumstances, other testing methods can be used in phase III, such as open studies, observational studies and case control studies.
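The randomised, double-blind allocation described above can be sketched in a few lines of Python. This is a simplified illustration, not a real trial protocol; the function name and kit-code scheme are invented. Participants are shuffled and split evenly between vaccine and placebo, and each receives an opaque kit code, so that only whoever holds the separate key can tell the two arms apart.

```python
import random

def blind_assign(participant_ids, seed=42):
    """Randomly split participants into vaccine/placebo arms behind opaque codes."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)  # random allocation removes selection bias
    codes = rng.sample(range(10**6), len(ids))  # unique, uninformative kit codes
    half = len(ids) // 2
    assignments, key = {}, {}
    for i, (pid, c) in enumerate(zip(ids, codes)):
        code = f"KIT-{c:06d}"
        key[code] = "vaccine" if i < half else "placebo"  # held only by the statistician
        assignments[pid] = code  # clinicians and participants see only the code
    return assignments, key
```

Because neither participants nor clinicians can map a kit code to a trial arm, both sides stay blinded until the key is unsealed for analysis, which is what removes the bias the text describes.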
https://www.vaccines.gov/basics/types/index.html , date accessed 13/02/2018, by the US Department of Health and Human Services
https://www.historyofvaccines.org/content/articles/vaccine-development-testing-and-regulation , date accessed 13/02/2018, by the College of Physicians of Philadelphia
https://www.who.int/biologicals/publications/trs/areas/vaccines/clinical_evaluation/035101.pdf , date accessed 13/02/2018, by WHO
https://www.vaccinestoday.eu/stories/how-are-new-vaccines-developed/ , date accessed 13/02/2018, by Gary Finnigan
https://vaccine-safety-training.org/route-of-administration.html , date accessed 13/02/2018, by WHO
http://vk.ovg.ox.ac.uk/vaccine-development , date accessed 13/02/2018, by the Oxford Vaccine Group
THE JOURNEY OF TREATING INFECTIONS Emily Kress (WHS) Antibiotics as we know them have been available for less than 100 years, and they make treating bacterial infections seem quite easy. That is rapidly changing as antibiotic resistance steadily increases across the globe, and whilst many researchers are busy looking for brand new ways to fight bacteria (such as genetically modified viruses called bacteriophages), others are looking back into the past to find new solutions using old knowledge. There are many ways of treating infection which stem from hundreds or even thousands of years ago - some of them widely used until relatively recently. These treatments are now being re-evaluated and researched as a way of combatting the ever-growing problem of antibiotic-resistant ‘superbugs’. In fact, from soon after his fortuitous discovery of penicillin in 1928, Fleming was warning us about the dangers of the abuse of antibiotics and how it would lead to multidrug resistance. Now that we are seeing these effects accelerate, doctors and researchers are looking for alternative treatments for infection. One avenue of research is into nutraceuticals. Nutraceuticals are essentially natural foods or nutrients that have a positive effect on health, such as Yakult, vitamin C, and Manuka honey. Throughout history, many serious diseases and infections could not be treated and would result in death. However, for less serious infections many cultures turned to natural remedies. These would generally be simple recipes added to food in the form of a spice mix, or taken as drinks and teas. The most commonly used ingredient in these ‘elixirs’ was honey. The first accounts of the medicinal use of honey are traced back 8000 years to Stone Age cave paintings. From 3000 years ago, there are accounts of honey being used in Ayurvedic medicine to treat indigestion, insomnia, and cataracts.
Possibly the most prevalent use of honey was in ancient Egypt,
when honey was utilised for nearly every malady in the book. Honey was essentially the ancient form of an antimicrobial sanitizer: healing wounds, treating burns, decreasing the severity of diarrhoea, and generally fighting nearly every infection it came into contact with. There are several physical and chemical properties of honey that make it one of the best natural treatments. One of the most useful is that it has a shelf-life of thousands of years. This is due to the presence of a variety of enzymes - most importantly glucose oxidase, which turns glucose into hydrogen peroxide and gluconic acid. The gluconic acid gives honey a naturally low pH of around 4 (acidic). This quality is one of the main reasons honey is so good at fighting infection - the acidic environment prevents any further bacterial, fungal, or viral reproduction. Honey contains about 80% sugar; therefore, the water potential of honey is very low - certainly lower than that of a bacterial or fungal cell. This means that when in contact with the pathogen, water will leave the cell by osmosis, travelling from a higher to a lower water potential. This draws most if not all of the water out of the cell, leaving it unable to perform any metabolic reactions and therefore unable to replicate. Modern experiments have shown that honey has the ability to kill wound bacteria at a rate that challenges antibiotics, but it has not been shown to be able to treat infections inside the body. Second only to honey, ancient cultures used bloodletting as a means of treating infection. Bloodletting showed some effectiveness in the early stages of infection, especially when a bacterium requires iron (supplied by red blood cells) for reproduction, and it remained in use until the early 20th century. However, the practice is believed to have contributed to the deaths of both Charles II and George Washington.
Bloodletting is still very controversial today and rarely used, and whilst leeches are being used more commonly in microsurgery, the practice has no place in the fight against infections. An older treatment that doctors are rethinking is using maggots to clean flesh wounds. During the Napoleonic Wars, a doctor noticed that soldiers with maggot-infested wounds were healing better than most others. He then developed this observation into maggot debridement therapy. The maggots that are used
in this therapy do not eat healthy human flesh but rather dead tissue and bacteria. The bacteria are digested, and the maggots secrete an enzyme which is a natural disinfectant and stimulates the wound to heal. This therapy was approved by the US regulators in 2004 and has been tested against MRSA (methicillin-resistant Staphylococcus aureus), proving quite effective: in around half of the cases, the need for amputation of a limb was eliminated. Another treatment which is being researched further is the use of silver for wound dressings, sutures, and even endotracheal tubes. In the 16th century, a Swiss doctor applied silver directly to wounds or gave it orally. Other doctors saw that this was an effective way of treating infection and started to follow suit. An obstetrician in Germany treated infants with silver-based eye drops to prevent ophthalmia neonatorum, a practice adopted by the US, where their use was required by law for the majority of the 20th century. Also in the 20th century, silver was used in surgical wound dressings as well as sutures. Silver is very effective at fighting infection, as it attacks microbial membranes and binds to the DNA, destroying it; it is therefore very difficult for bacteria to become resistant to it - even deliberately, in the lab. Silver cream is routinely used to cover burns to prevent infection of the tissue as it regrows. The use of silver is mostly safe; however, it too is not fit for treating infections inside the body. Serum therapy was used in the 1925 diphtheria epidemic. It took blood from animals, such as horses, which had developed antibodies to the bacterium through exposure. This serum would then be injected into humans in the hope that the antibodies would kill the bacteria and stop the infection.
This form of treatment was largely replaced by antibiotics but, during the Ebola epidemic in 2014, it was reconsidered as a way to control the outbreak, using serum from the blood of people who had survived the infection. It is clear that the discovery of antibiotics changed the face of fighting infections. However, with the number of multidrug-resistant bacteria growing, we need to research and improve the ways of treating infection that existed in abundance before antibiotics. And whilst we typically think of discoveries as always moving us into the future, many times the journey of
discovery makes us take a look back and rethink the wisdom and practices of past generations.

Bibliography
http://www.aldokkan.com/science/herbal_remedies.htm
https://www.medicalnewstoday.com/articles/264667.php
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3758027/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2702430/
https://www.healthychildren.org/English/health-issues/conditions/treatments/Pages/The-History-of-Antibiotics.aspx
https://health.howstuffworks.com/diseases-conditions/infectious/10-ways-that-doctors-treated-infections-before-antibiotics1.htm
https://theconversation.com/in-a-world-with-no-antibiotics-how-did-doctors-treat-infections-53376
https://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMeetingMaterials/MedicalDevices/MedicalDevicesAdvisoryCommittee/GeneralandPlasticSurgeryDevicesPanel/UCM522971.pdf
https://www.nature.com/news/modified-viruses-deliver-death-to-antibiotic-resistant-bacteria-1.22173
DISCOVERING P53 Claudia Preston (OHS) In the 1970s, oncogenes were merely a theory. The idea that a protein could be behind the aberrant growth and division of cells was fathomable to scientists, but the field of research was small and seen as fairly unimportant. Arnold Levine and Lionel Crawford published a paper showing that what little research had been done thus far had the wrong objective, sparking growth in the field. The journey to the discovery of p53, a key to the mechanism behind cancer, is still paramount to scientists today. In 1979, the discovery of p53 came about as a result of research into the immunology of cancer, stemming from work on viruses. Part of the discovery was fuelled by the identification of 6 characteristics shared by all cancer cells. The cells in tumours divide uncontrollably because of an internal force; their division cannot be stopped by the usual methods, and they cannot be annihilated by the mechanisms that typically remove mutated cells. Furthermore, studies suggested that cancerous cells can divide infinitely, that they can establish and support their own blood supply and, finally, that they are not stationary, possessing the ability to form metastases across several organs and tissues. p53 plays a part in all 6 traits listed. However, before scientists were able to recognise the importance of the p53 gene in the cell cycle, the first ‘onco-genes’ had to be uncovered. Chickens had been used for chemical trials in the early 1900s, and the virus that caused leukemia in this organism had been named in 1908. However, the discovery was of no great significance to cancer researchers of the time - except for Peyton Rous, who had studied medicine at Johns Hopkins in Baltimore and discovered a sarcoma (a cancer of connective tissue) in chickens. This sarcoma could be used to infect healthy chickens through the injection of a filtered extract from the tumour; once again, however, the relevant researchers dismissed
his findings, meaning it was 60 years before he won a Nobel Prize. Nonetheless, at the annual Gordon conference in 1970 a ‘bookish’ young scientist, Steve Martin, made an announcement. He had successfully isolated the gene responsible for turning the cells of Rous’s sarcoma experiment delinquent: the first oncogene, commonly known as Src (‘sark’), after the tumour it causes. Two more established scientists in San Francisco, Michael Bishop and Harold Varmus, earned themselves a Nobel Prize in 1989, after the revelation that unaffected, typical chicken cells contained an almost identical gene to the Src gene found in the virus. They found that this was true for other bird species too. Genes tantalisingly similar to the Src gene were detectable in everything from fruit flies to mammals, suggesting the gene had a long evolutionary history and clearly played a crucial role in explaining the unexplainable: cancer. Scientific discoveries were prolific throughout the 1970s and 1980s, and after the discovery of Bishop and Varmus a plethora of questions were generated regarding the so-called oncogenes. Primarily, was there a possibility that they could instigate cancer without the assistance of a virus? Evidence of this came promptly, from multiple labs: viruses that did not contain a known oncogene were still able to cause cancer, possibly because these viruses attack key regulatory aspects of DNA in the host cell, inhibiting control of cell division and growth. Another major scientist in this exploration was Bob Weinberg, who led a lab at MIT and carried out an experiment using mouse cells treated with chemicals to turn them cancerous. This mutated DNA was removed from the cells and injected into unmodified mouse cells in a petri dish, transforming them into cancer cells and proving that a virus was not needed for the transmission or instigation of cancer. Mutations were enough to turn would-be oncogenes harmful.
Following this groundbreaking discovery, the research community became infatuated with the idea that oncogenes disrupted the natural mechanism for controlling the division and growth of cells. The actual discovery of p53 was fairly prosaic: a lab full of microscope slides, data collections and stacks of scientific papers. It took place across three labs, in London, New Jersey and New York. Whilst the labs made the discovery at more or
less the same time, there are two names most commonly associated with the field: Arnie Levine, based at Princeton University, and David Lane at the Imperial Cancer Research Fund (ICRF) in London. Both departments were working on oncogenes; their discoveries were independent, but the foundation of both was a virus known as SV40. This virus had been the starting point of an array of previous biological investigations, offering the perfect platform from which to begin research into the complex structure and mechanisms of cells. SV40 infected specific monkey species but did not cause disease, only becoming of great scientific interest following discoveries of its presence in contaminated polio vaccines. Fundamentally, SV40 was able to transform ordinary cells into cancerous forms.

David Lane arrived at the ICRF at the point when the viral gene responsible for turning SV40 into an oncogene had been classified as 'Large T antigen'. Lane used his background in immunology, and the basic principle of how the immune system 'captures' non-self particles ready for phagocytosis, to extract feasible quantities of Large T antigen: the final piece of the puzzle. The antibody, thought to be specifically designed to capture the wanted protein, also extracted another protein, however, one weighing 53 kilodaltons. No number of repeats produced the antigen on its own; the two came as a pair. At first the finding was dismissed as contamination, but the simultaneous work of Arnie Levine suggested that p53 rewinds the 'evolutionary clock' to a stage when the cell's instinctive nature is to grow and divide continuously, as is seen in embryonic development. The theory was supported by the large quantities of p53 also present in embryonic cells. The rapid expansion of biological knowledge of the structure of DNA in the 1980s left scientists ready to delve deeper into the function of p53.
Through a process of elimination, three groups concluded that oncogenes override the controller of cell division. The oncogene Ras was discovered: further testament that within human cells there are genes with a specific job which, if mutated, can cause harm. Under some circumstances the Ras gene was permanently 'switched on', driving cells to divide without limit, completely disobeying the natural checkpoints during
division. Whilst advances were being made into the concept of p53 clones possessing the ability to turn cells immortal, Levine's team at Princeton were conducting experiments to prove the same hypothesis, except with no dramatic results. Their assumption: they had a faulty clone. This 'faulty' clone, however, when sequenced repeatedly, had one nucleotide different from all the other p53 clones, and this one nucleotide made it completely devoid of any oncogenic activity. Only clones with mutant p53 were producing the dramatic biological results; the 'wild-type' of the gene was doing the complete opposite. This put scientists on the home straight: p53 was not an oncogene; instead it was acting as the brakes when a mutation occurred.

A momentous paradigm shift was witnessed following Steve Friend's research into retinoblastoma and the finding that there is a constant competition between oncogenes and tumour suppressors in the process of tumour formation. It also proved that these proteins are not there solely to act as oncogenes or tumour suppressors; they have mundane roles to play in cells, but if corrupted they have the power to cause cell proliferation. p53 is not an oncogene: it performs the same role as the retinoblastoma gene, preventing tumour development, making it a tumour suppressor. Only when mutation occurs do tumours take form, as mutated cells lose the protein's ability to bind to DNA and direct the repair that prevents cancer. This is why scientists continue to study p53, trying to discover whether manipulation can reverse the devastating effect of a mutation in a single nucleotide of the p53 gene.

Bibliography
Jiang, Lijing (2012).
The Discovery of p53 Protein. Arizona State University.
Soussi (2010). The History of p53. Paris: Université Pierre et Marie Curie.
Todd Riley (2008). Transcriptional Control of Human p53-Regulated Genes.
Meek (2009). Tumour Suppression by p53: A Role for the DNA Damage Response?
Daniel Menendez (2009). The Expanding Universe of p53 Targets.
THE PSYCHOGEOGRAPHY OF THE CONTEMPORARY IMMIGRANT EXPERIENCE Kaitlin Wallace (WHS)

A brief exploration of the informal representation of the immigrant experience

'Chaotic disarray' descends upon the borders of Australia following the unprecedented arrival of 900 refugees and asylum seekers in October 2018. At its antipode, a white man in a white collar reads 'Europe's migration crisis: Could it finish the EU?',1 the headline of a particularly cynical BBC article published in June 2018. The Times elaborates: 'An increase in the number of people from Middle Eastern and African countries taking advantage of relatively calm seas and forgiving weather to cross the Channel… has caused political vapours.'2 There appear to be no notes of satire belying their blasé claims that, naturally, thousands of asylum seekers are habitually exploiting the seas to impose their design of havoc onto Britain; a portrait of the Spanish Armada. I particularly enjoyed the rather tragic Senecan take on the political impacts of the refugee crisis; it perfectly engenders the incursion of hysteria and disorder upon the British public.

However, this raises the question of why recipient countries are perpetually painted as victims of the geopolitical phenomenon of the refugee crisis. The paradigm of them bringing chaotic disarray and political upheaval onto us serves to set up damaging dichotomies of cultural identity that pervade the contemporary immigrant experience. They are categorised: refugees - of war, economic ruination, climate change - forced migrants, asylum seekers. This is their prescribed identity in the Western public eye, one which lies awkwardly at odds with the sense of national identity that stabilises one within the structures of a society or culture.

A refugee camp is a temporary settlement to house displaced individuals. The idea of displacement is a crucial one; it is a transitory space, an indeterminate gulf betwixt and
between the unyielding margins of state borders that geography demands. Upon reading a 2003 exposé in the Independent by the writer Will Self, entitled 'An Identity Crisis at US Immigration', it becomes blatantly clear that immigration and customs is a place that defies the rules of social order; a place that refuses to recognise the integrity of an individual. It is a place of surveillance and scrutiny, a place where people are merely collateral in the official legal proceedings of 'post-9/11 America'.3

Reading Salman Rushdie's The Satanic Verses, which provides a controversial and highly polemical parody of a 'neo-colonialist'4 immigration system, reveals the ambivalence of identity – it is "about migration, metamorphosis, divided selves, love, death, London and Bombay."5 The chaos that ensued as a result of this novel, the fatwa and the protests, is real-life testimony to the potentially hostile cultural divides which make up the fabric of humanity; the 'divided selves', the ethnic estrangement enshrined in the contrast of 'London' and 'Bombay'. Milan Kundera has called this phenomenon "the unbearable lightness of being"6 – the condition of being unanchored in any stable structures or attachments, a kind of permanent existential suspension.7 The liminal space of the immigration office or the refugee camp is poignantly symbolic of the threshold identity that comes with leaving a native and familiar cultural realm and needing to integrate into a new and foreign society, a plight served with a bitter concoction of detachment and nostalgia.

Far from intending to demonise recipient countries, there is no doubt a need to consider the social and political cruxes at the heart of the crisis. It is of course important to adjudicate the situation and regulate with authoritative power. An element of sympathy is buried beneath the magnitude of the administrative task, echoed in promises to alleviate the crisis by bringing in humanitarian aid.
Notwithstanding, one may question: does such a rejection of refugee groups, as evident in the drive to tighten EU and UK borders, proportionately reflect concerns for the economic health of the state, or is there a minute suspicion of nationalism rearing its ugly head? Nonetheless, the sheer scale of the crisis renders an interpersonal response extremely challenging to enact. This fact, however, is unfortunately enemy to French
novelist Patrick Chamoiseau's plea for human compassion in the immigration process. He poignantly writes that 'Aucune douleur n'a de frontières!'8 – Pain has no borders! Interestingly, Chamoiseau seeks to tear down the iron curtains:7 not just the physical barriers and distance between citizens and refugees and migrants, but the socio-psychological, cultural and ethnic boundaries that separate the 'them' and the 'us'. It is a universal call to arms to humanise the victims of economic, political or environmental displacement and allow them a place of refuge with an authentic identity.
1. Katya Adler, 'Europe's Migration Crisis: Could it Finish the EU?', BBC, 28 June 2018.
2. 'The Times view on the reaction to the migrant crisis: Missing the Boat', The Times, 31 December 2018, in which they expressed that 'Ministers should respond to the increase in migrant arrivals by sending people home when they have no claim to be here. Deploying the Navy will achieve nothing.'
3. Will Self, 'Psychogeography #4: Identity Crisis at US Immigration', 2003. Self expressed the prevalence of racial stereotyping in post-9/11 America, which gives insight into some of the political tensions and prejudices which play a role in the immigration system.
4. S. Sharma, 'The Ambivalence of Migrancy', 2001. Sharma refers to the stringency of (largely) Western forces in the refugee crisis as a form of 'neo-colonialism', echoing the sentiment of nationalism which I posited earlier in the essay.
5. Sanjay Subrahmanyam, 'The Angel and the Toady', The Guardian, 2009.
6. Milan Kundera, The Unbearable Lightness of Being, 2000, exploring the security of identity and contemporary 'being', juxtaposing experiences in Prague, Geneva, Thailand and the United States.
7. Eva Hoffman, 'Some thoughts on the psycho-geography of Europe's free movement', University College London European Institute, 2016.
8. Patrick Chamoiseau, Frères Migrants, 2017.
JOURNEY THROUGH THE HEALTHCARE INSURANCE SYSTEM IN GERMANY AND IN THE UK Leslie Lee (WHS)

Compared to any period in the past, people in the 21st century are extremely occupied with the idea of well-being. The rapid rise in the value placed on health has led to a dramatic increase in the consumption of health products, including multivitamins and proteins, in recent years. This naturally leads to a rising interest in the healthcare system: which country provides you with the best treatment if you are in need? We are living in an era in which the state of the healthcare system plays an influential role in politics, as it has come to the forefront of the population's attention. 'We send the EU £350 million a week, let's fund our NHS instead' was one of the major slogans convincing people to vote to leave the EU during the 2016 referendum. This throws a global spotlight on national healthcare systems: the UK's National Health Service is seen as a great achievement that many other countries envy, while the so-called hybrid system of US healthcare is increasingly criticised. Today, different countries have unique systems of managing healthcare for their people, and OECD countries (members of an organisation aiming to stimulate economic progress and world trade) invest on average 9% of their GDP in their healthcare systems.

GERMANY

Germany was the first country in modern history to establish a national social health insurance system. Otto von Bismarck, who dominated German and European affairs from the 1860s to 1890, introduced a 'Sickness Insurance Law' in 1883, insuring eligible workers against medical costs. The essence of it was to provide health care on a local basis and the cost was divided
between the employees (⅔) and employers (⅓). However, at the time of introduction, this policy was aimed at people doing manual work in labouring industries. Although the 'Sickness Insurance Law' covered less than 10% of the population at that time, it provided a solid base for later expansion, and it underpins Germany's social health insurance today, the GKV.

After the Second World War, Germany was divided into two countries which adopted separate healthcare systems. West Germany was served by both independent and public providers, although the majority of people (88%) were under the system founded by Bismarck. Health fund spending mostly came from compulsory and voluntary contributions to statutory health insurance and from general taxation, making up 81% of the total fund. After reunification, West Germany's system took over East Germany's, and a number of reforms have taken place since.

Today, Germany operates under a dual public-private system. This allows individuals to decide between the GKV (government health scheme) or PKV (private health insurance cover), or both (PKV to top up the GKV), depending on their income. An individual in the GKV pays around 8% of their income to a non-profit insurance company, a sickness fund. The total contribution is proportional to income (the statutory rate is 14.6% of gross income, shared between employee and employer); in the private sector, premiums depend instead on the individual's risks. The five structural principles which characterise social health insurance in Germany (GKV) are:

1. Solidarity
2. Benefits in kind; immediate treatment and no upfront payment
3. Financing from employers and employees
4. Self-administration
5. Plurality; patients have a choice among hospitals and private providers

Many countries praise Germany's universal healthcare system. It gives the benefits of providing diverse healthcare services and having no upfront payment.
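The contribution figures above can be turned into a small illustrative calculation. This is only a sketch built from the essay's 14.6% headline rate: the even employee/employer split is an assumption of the example, and fund-specific surcharges and contribution ceilings are ignored.

```python
def gkv_contribution(gross_annual_income: float, rate: float = 0.146) -> dict:
    """Estimate the statutory GKV health insurance contribution.

    `rate` is the 14.6% headline figure quoted in the text; the even
    split between employee and employer is an assumption of this sketch.
    """
    total = gross_annual_income * rate
    return {
        "total": round(total, 2),
        "employee_share": round(total / 2, 2),
        "employer_share": round(total / 2, 2),
    }

contribution = gkv_contribution(42_000)
# 14.6% of a €42,000 gross income is €6,132, i.e. €3,066 from each side.
```

On these assumptions, the employee's share works out at about 7.3% of gross income, close to the "around 8%" figure mentioned above.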
The health insurance has wide coverage: allocating a person support after surgery, for instance, is not seen as unusual. Moreover, it is seen as cost-effective. However, the government holds less control over health expenses, as these are managed by the individual insurance companies, which could be seen as a drawback.

UK

The NHS (National Health Service) is the universal healthcare system in the UK. The journey of the NHS started in 1948 under Aneurin Bevan (Minister for Health 1945–1951), much later than in Germany. Sir William Beveridge's proposal in 1942, that 'medical treatment covering all requirements will be provided for all citizens by a national health service', led to the launch of the NHS a few years later. The three core principles of the NHS at its launch were, essentially, that it:

1. Meets the needs of everyone
2. Is free at the point of delivery
3. Is based on clinical need, not ability to pay

The NHS has undergone many changes over time: in 1974, NHS England was reorganised, bringing together the services administered by hospitals and the local health authorities. The NHS celebrated its 70th birthday in 2018, for which Theresa May proposed an extra £20 billion a year by 2023.

Funding for the NHS comes from general taxation and national insurance contributions. Free healthcare treatment is offered to UK residents at the point of delivery, although small charges are made for some services. Today it employs around 1.5 million people and is a source of national pride. Yet it confronts an increasing number of economic, political and social challenges. It requires a large proportion of tax revenue (£114.6bn for 2018-19) for NHS England to run its day-to-day services. Brexit plays an influential role: the NHS relies heavily on drug imports from the EU, and hospitals, even individuals, feel the need to stockpile drugs. Moreover, a large proportion of NHS workers are from the EU, indicating potential staff shortages.
Social challenges facing the NHS include a growing elderly population, which puts the NHS under stress: on average, a 65-year-old costs 2.5 times more than a 30-year-old, and an 85-year-old more than five times as much.

Germany and the UK are considered top countries in terms of their healthcare systems. However, despite the praise they receive, both the NHS and Germany's healthcare system need improvement in many respects. In the UK, 10% of patients are forced to wait more than two weeks for a first appointment with a specialist. Sometimes patients are diagnosed with serious illnesses such as cancer, where catching the disease quickly is vital for a high chance of survival. A lack of hospital beds leads to delayed operations, increasing anxiety for patients and their families. Germany is no exception: long waiting lists are unavoidable under the GKV.

However, both the NHS and the GKV are on a journey to enhance their existing systems. Since 2013, people under Germany's health scheme can go straight to a specialist, with no need to be referred by a GP. This saves time, which is more than vital to many patients. Similarly, the NHS is attempting to reduce waiting times by guaranteeing GP access 7 days a week and appointments within 48 hours for people over 75. Small steps taken over the course of history have brought these systems to their position today. Over the course of 70 years the NHS has grown and is growing, as are numerous national healthcare systems across the globe, with many trying to adopt each other's schemes. This is an ongoing journey not only of doctors, nurses and patients, but also of society, economics and politics.

Bibliography
https://www.nhs.uk/using-the-nhs/about-the-nhs/the-nhs/
https://en.wikipedia.org/wiki/Healthcare_in_Germany
https://www.dr-hempel-network.com/health-policies-in-india/german-healthcare-system-special/
https://www.expatica.com/de/healthcare/healthcare-basics/the-german-healthcare-system-a-guide-to-healthcare-in-germany-103359/
http://www.oecd.org/health/health-systems/Health-Spending-Latest-Trends-Brief.pdf
JOURNEYS OF DISCOVERY: ORGAN TRANSPLANTATION Libby Westwood (OHS)

Organ transplantation is widely known as the best option for terminal, irreversible organ damage, but it wasn't until 1954 that the first organ transplant was carried out successfully. Somewhat surprisingly, organ transplantation is a relatively recent development, with the majority of milestones crossed in the past 40 years. Transplantation is a fine art, with exact blood matches and immunosuppressants, amongst many other factors, needed for the chance of a successful transplant.

The first widespread interest in organ transplantation followed the Austrian-German surgeon Erwin Payr's development of vascular sutures, which facilitated the transplantation of organs. The first kidney transplant came after experimental procedures had been carried out on animals. In 1902, a renal xenotransplant (a transplant across species) between a goat and a dog was unsuccessful, and many other failed xenotransplants caused the excitement about the possibility of human-to-human transplants to die down. Nevertheless, interest resumed in the 1930s, when the first human-to-human transplant was completed. Despite its failure, due to mismatches between donor and recipient blood which ultimately led to organ rejection, it prompted many other discoveries which would eventually lead to a successful renal transplant, and meant surgeons were in a place, by 1954, from which they could carry out the operation without any foreseeable obstacles.

The first successful human kidney donation was performed by Joseph E. Murray at the Peter Bent Brigham Hospital in Boston. In this operation a kidney was donated from one identical twin to the other; this early success relied on the fact that the twins had the same blood type. Following Murray's success, progress was swift in the UK. In July 1959 the first successful deceased donor renal transplant in the UK was performed by Peter Raper, a urologist in Leeds.
Thomas Starzl was an American physician who specialised in organ transplantation; he is often referred to as the 'Father of Modern Transplantation'. He is renowned for the world's first liver transplant in 1963, the first successful liver transplant in 1967 and the first simultaneous heart-liver transplant in 1984. Liver transplantation has progressed at a rate which could never have been imagined by Starzl: from the first liver transplant, which was unsuccessful (the paediatric patient died during the operation due to bleeding which could not be controlled), to today's successful liver transplants.

Living donor liver transplantation has emerged in recent decades as a treatment for end-stage liver disease, which can take the form of cirrhosis and hepatocellular carcinomas caused by prolonged alcohol misuse or hepatitis B and C infections. Living donor liver transplantation allows a portion of the donor's liver (55-70% in a typical adult) to be surgically removed and transplanted into the recipient, whose diseased liver has been removed in its entirety. The livers of both the donor and recipient grow back to full size approximately three months after the surgery, which is what makes living donation possible.

A significant development occurred in 1967, when the South African surgeon Christiaan Barnard performed the first human heart transplant, from a 25-year-old woman in a state of brain death following a car accident into a 55-year-old man dying from heart disease. The patient survived only 18 days after the transplant, but a further transplant Barnard performed just a month later saw the patient live for another two years. Barnard's clinical heart transplantation brought worldwide fame, and many surgeons attempted the procedure. However, because many patients were dying quickly following transplantation, the number of heart transplants dropped from 100 in 1968 to just 18 in 1970.
Surgeons recognised that the major issue with the heart transplant was the body's natural tendency to reject the new tissue. Before any further advances could be made, progress would have to be made with immunosuppressant drugs and tissue typing (the assessment of the immunological compatibility of tissue from separate sources
prior to organ transplantation). The greatest advancement is thought to be Jean Borel's discovery of cyclosporine, an immunosuppressive medication extracted from soil fungi. Thanks to this, heart transplant recipients now have an 85-90% chance of surviving a year following the operation. There are also many devices and treatments which can improve the chance of a transplant being successful and bridge the wait for a heart to become available, such as an LVAD (left ventricular assist device), a mechanical pump that pushes blood to the rest of the body.
Across the world, the most commonly transplanted organs are kidneys, followed by the liver and then the heart. Corneae and musculoskeletal grafts are the most commonly transplanted tissues; these outnumber organ transplants more than tenfold. There is one organ notably missing from the list of organs that can be donated: the brain. Neither the brain, nor the head as a whole, has yet been transplanted into a human, but experiments have been carried out on animals. Robert J. White, an American neurosurgeon, is best known for the head transplants he performed on dogs and monkeys from 1965, with mixed results: the animals lived between 6 hours and 3 days following the operation. These experiments were heavily criticised as barbaric by animal rights activists, and as unethical by many other commentators. White became a target for protestors as a result of his head transplantation experiments; one protestor interrupted a banquet in his honour by offering him a bloody replica of a human head, and others called his house asking for "Dr. Butcher". This leads to the question: even if physicians and researchers find a way to overcome the difficulties associated with a brain or head transplant (notably the inability of scarred tissue to transmit electrical nerve impulses), should they be performed, or is there a limit to what we should be transplanting?

There are currently 6,000 people on the UK transplant waiting list, but the harsh reality of transplantation is that not everyone will receive their much-needed organ. In 2016, 400 people in the UK died waiting for an organ to save their life. There are strict rules and grading systems in the NHS which regulate who is most eligible for a certain available organ; in the US, given their healthcare system and the absence of an NHS, the United Network for Organ Sharing (UNOS) uses similar criteria to decide who will receive a transplant. Whilst progress has been made in transplantation over the last century, in the future physicians will need to strive to perform fail-safe organ transplants in which survival rates continue to rise and there are better postoperative outcomes, such as shorter recovery times and less scarring.

Bibliography
NHS Choices, NHS, www.organdonation.nhs.uk/.
'50 Years of Heart Transplant - Timeline', British Heart Foundation, www.bhf.org.uk/informationsupport/heart-matters-magazine/medical/50-years-of-heart-transplant/heart-transplant-timeline.
'Dr. Robert J. White (1926–2010)', Resuscitation, vol. 83, no. 1, 2012, pp. 18–19, doi:10.1016/j.resuscitation.2011.08.011.
Fricker, Janet, 'Thomas Starzl', BMJ, 2017, doi:10.1136/bmj.j1806.
'History', UNOS, 6 Apr. 2018, unos.org/transplantation/history/.
'History of Renal Transplant', Renal Medicine, www.renalmed.co.uk/history-of/renal-transplant.
'Tissue Typing', Academic Press, www.sciencedirect.com/topics/immunology-and-microbiology/tissue-typing.
'UNOS', UNOS, unos.org/.
THE DISCOVERY OF DNA Jess Lee (WHS)

'Everyone is different.' It's a phrase that everyone has heard before. Each individual is different in height, hair colour, eye colour, likes and dislikes; no two people are exactly alike. Now, here's a fact you might not have heard before: every human is 99.9% genetically similar to the next. The two statements seem almost contradictory, but both are true. We have such a huge amount of DNA that the 0.1% difference is immensely significant. Although there are other major factors, the variations in our genomes play a big part in making us individuals.

DNA, or deoxyribonucleic acid, is a large molecule that contains our unique genetic code. It was first brought to light in the 1870s by Johann Friedrich Miescher, a Swiss physician and biologist. During his research into the composition of white blood cells (lymphoid cells), Miescher was able to isolate nucleic acid. Lymphoid cells were found in great quantities in the pus of infections, so he collected pus-coated bandages from a nearby medical clinic. He carried out experiments with salt solutions varying in pH, and he discovered an unfamiliar substance that behaved differently from the proteins he was acquainted with. When an acid was added, the substance separated from the cell solution, but it dissolved again once an alkaline solution was added. He named the substance 'nuclein', as it came from the nucleus. Miescher's experiments showed him that nuclein was made up of hydrogen, oxygen, nitrogen and phosphorus, and that there was a distinctive ratio of nitrogen to phosphorus. Miescher himself believed that nuclein was a molecule of heredity, but he had trouble promoting and communicating his discoveries to the wider scientific community, and it was years before his work was truly appreciated.

The next discovery was of the nitrogenous bases that are key to nucleic acid structure: adenine, thymine, cytosine, guanine and uracil.
Albrecht Kossel was a German biochemist and a pioneer
in the study of genetics. In 1881, Kossel identified nuclein as a nucleic acid and gave it its chemical name, deoxyribonucleic acid. From 1885 to 1901 he and his students used hydrolysis to chemically analyse nucleic acids, and they managed to discover and isolate the five different organic compounds present in DNA and RNA. Albrecht Kossel received the Nobel Prize for Physiology or Medicine in 1910.

In 1944, Oswald Avery, an American bacteriologist, identified DNA as the molecule responsible for heredity. Avery had long experimented with pneumococcus, the bacterium that causes pneumonia. He continued the research started by Frederick Griffith, who discovered that if a live, harmless form of pneumococcus was mixed with an inert, harmful form, the harmless bacteria would become deadly. Avery, along with Maclyn McCarty and Colin MacLeod, soon found that the substance responsible for this transformation was DNA. The wider scientific community had believed that proteins were the substances of heredity, since there are 20 different amino acids that can build a protein molecule, but only 4 nucleotide bases in DNA; they did not think it possible for such a simple molecule to be responsible for such complex information. Therefore the result that Avery and his colleagues published was received with scepticism, and many people chose to believe that there must be a series of proteins present in the DNA molecule.

There was one person, however, who instantly embraced Avery's theories. In 1950, Erwin Chargaff, an Austro-Hungarian biochemist, discovered that in any species the amount of thymine is equal to the amount of adenine, and the amount of cytosine is equal to the amount of guanine. It follows that the number of purines (A + G) is equal to the number of pyrimidines (C + T). These came to be known as Chargaff's rules.

In 1951, Rosalind Franklin was researching at King's College London, using X-ray crystallography.
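Chargaff's pairing rules, described above, can be illustrated with a short sketch. The sequence used here is an arbitrary toy example, not real genomic data: a made-up strand joined to its complementary strand, mimicking double-stranded DNA.

```python
from collections import Counter

def chargaff_check(sequence: str) -> dict:
    """Count each base and the purine/pyrimidine totals.

    In double-stranded DNA, Chargaff's rules predict A == T and C == G,
    and hence purines (A + G) == pyrimidines (C + T).
    """
    counts = Counter(sequence.upper())
    return {
        "A": counts["A"], "T": counts["T"],
        "C": counts["C"], "G": counts["G"],
        "purines": counts["A"] + counts["G"],
        "pyrimidines": counts["C"] + counts["T"],
    }

# A toy double-stranded fragment: one strand plus its base-paired complement.
strand = "ATGCCGTA"
complement = strand.translate(str.maketrans("ATCG", "TAGC"))
result = chargaff_check(strand + complement)
assert result["A"] == result["T"]
assert result["C"] == result["G"]
assert result["purines"] == result["pyrimidines"]
```

Because every A on one strand pairs with a T on the other (and every C with a G), the equalities hold automatically for any double-stranded sequence, which is exactly the regularity Chargaff measured.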
She was able to produce two images of DNA, one of which showed a definite helical structure with two clearly visible strands. Franklin was close to solving the structure of DNA; however, Maurice Wilkins, with whom Franklin had a poor working relationship, showed the X-ray image to James Watson and Francis Crick. They finally solved
the puzzle and beat Franklin to publication. They announced their discovery in the journal Nature in 1953, and Wilkins, Watson and Crick were awarded the Nobel Prize for Physiology or Medicine in 1962. Rosalind Franklin's critical work was not appreciated or honoured until much later.

Since then, there have been many more discoveries that have developed our understanding of DNA, such as DNA sequencing and the genetics of disease. In 1996, Dolly the sheep was cloned from an adult cell, demonstrating that even when DNA has specialised it can still be used to create an entirely new organism. In recent years there has been more and more research into genome editing, a type of genetic engineering that allows scientists to change the DNA of an organism. One approach is called CRISPR-Cas9, a method adapted from a naturally occurring genome-editing system in bacteria. Researchers create a small piece of RNA that acts as a guide sequence, and this attaches to the specific sequence of DNA that needs editing. The same RNA also binds to the Cas9 enzyme, which then cuts the DNA at the target location. Scientists then use the cell's own repair mechanisms to add or remove genetic material. We are yet to see whether this technology is safe and effective in humans, but genome editing could possibly become a way to prevent or treat genetic diseases such as cystic fibrosis, sickle cell disease and Huntington's disease.

In less than 150 years, our knowledge and understanding of DNA and how it works has increased immensely. It has been argued that the development of our appreciation of the structure and functioning of DNA has been the most important development in science of the last century. Thanks to the discovery of DNA, our ability to diagnose diseases early on has improved significantly, and it has also been extremely important in the field of forensic science.
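The guide-and-cut mechanism described above can be caricatured as simple string matching. This is purely a toy illustration: real Cas9 recognition also requires an adjacent PAM motif and cuts a few bases inside the target, details omitted here.

```python
def cas9_cut(dna: str, guide: str):
    """Toy model of a CRISPR-Cas9 cut.

    Locates the guide's target sequence within the DNA string and
    splits the strand at the end of the matched site. Returns None
    if the guide finds no match (no cut is made).
    """
    site = dna.find(guide)
    if site == -1:
        return None  # guide does not match anywhere: no cut
    cut_point = site + len(guide)
    return dna[:cut_point], dna[cut_point:]

left, right = cas9_cut("AAATTTGGGCCC", "TTTGGG")
# left == "AAATTTGGG", right == "CCC"
```

The two returned fragments stand in for the double-strand break; in the cell, the repair machinery rejoining such a break is what lets researchers add or remove genetic material at that spot.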
DNA will continue to revolutionise the fields of medicine, forensics and paternity testing, and with the discovery of CRISPR-Cas9, the future of genomics could not be more exciting.
Bibliography
https://www.britannica.com/biography/Albrecht-Kossel
https://www.khanacademy.org/science/high-school-biology/hs-classical-genetics/hs-introduction-to-heredity/a/mendel-and-his-peas
https://www.nature.com/scitable/topicpage/discovery-of-dna-structure-and-function-watson-397
https://www.businessinsider.com/comparing-genetic-similarity-between-humans-and-other-things-2016-5?r=US&IR=T
https://www.sdsc.edu/ScienceWomen/franklin.html
https://ghr.nlm.nih.gov/primer/genomicresearch/genomeediting
http://www.sbs.utexas.edu/herrin/bio344/lectures/lecturespdf/Background/hSection7.pdf
https://compbio.imascientist.org.uk/question/why-is-everyones-dna-different/
https://www.revolvy.com/page/Albrecht-Kossel
https://www.famousscientists.org/erwin-chargaff/
http://www.macroevolution.net/erwin-chargaff.html
A MISSION TO MOSCOW Miranda Dorkins (OHS) Mission to Moscow is a 1943 film directed by Michael Curtiz and produced by Warner Brothers, based on the 1941 memoirs of Joseph Davies, the US ambassador to the USSR from 1936 to 1938. Since its release it has, in the words of Ronald Radosh in A Great Historic Mistake: The Making of Mission to Moscow, “accurately gained the reputation for being unadulterated Stalinist Propaganda.” The film depicts Davies’ journey through Europe to the USSR, where he records his impressions of Soviet life, politics, foreign policy and important events such as the Moscow show trials. The film was made in 1943, which is important in understanding its very pro-Soviet stance: its purpose was to familiarise the American public with their new war ally and to justify some of the USSR’s more scandalous actions, like the show trials. The movie, made in a faux-documentary style that underlines its nature as propaganda, focuses on the journey of Davies and his family. It depicts their physical journey from the USA to the USSR but, more importantly, their journey from sceptics of communism to converts. Davies’ book, on which the film was based, was met with great success when it was published in 1941, selling 700,000 copies and being translated into 13 languages, and it was as much an example of pro-Soviet propaganda as the film. It came out only three weeks after Pearl Harbor, so to confused American citizens now being told to fight alongside their previous enemy, the book appeared to give Davies’ inside story and the truth. For the American people, according to a Gallup poll, the most important insight provided by Davies in the book was his judgement on the three purge trials that took place in Moscow between 1936 and 1938, and the film exaggerates this importance even further. The Moscow Trials were a series of show trials held, from 1936 to 1938, following the
installation of Stalin as the Premier of the USSR. The stated purpose of the trials was to rid the USSR of Trotskyists and of any opposition to the Communist Party from the Right. They were part of Stalin’s wider campaign to rid Russia of his opponents, known as the Great Purge, in which 600,000-1,200,000 people were killed. The defendants were Old Bolshevik party leaders and high-ranking officials in the Soviet secret police. They were accused of conspiring with foreign, Western powers to assassinate Stalin and other government officials, in order to destroy the communist infrastructure in the Soviet Union and reinstate capitalism. Most shocking and confusing to observers was that all of the defendants confessed to the crimes of which they were accused and pleaded guilty, despite the prosecution being unable to produce any actual evidence against them. So it is not surprising that there was much doubt and mystery surrounding the veracity of the trials (we now know that the trials were completely manufactured by Stalin and that the confessions came as a result of the drugging and torture of the defendants). At first Davies thought, correctly, that the defendants’ testimonies were “untrue”: that they had been forced to confess to some sort of political fiction, not treason. However, four years later, when re-reading his diary entries of the time, Davies changed his mind. He believed that he had been mistaken in looking at the “dramatic struggle for power between Stalin and Trotsky” and the political unrest in the USSR rather than the guilt of the defendants. This led him to insert a lengthy passage into his discussion of the 1938 trials, in which he justifies Stalin’s actions, claiming that the trials were necessary in order to destroy the Nazi fifth columnists who had infiltrated the USSR at the time.
In other words, Davies was now spreading the idea that the trials were not an attempt to consolidate Stalin’s political power (which they were) but a successful campaign to “protect itself (USSR) not only from revolution within but attack from without.” The film amplified this presentation of the show trials even further. It centred on Davies’ perception of the trials; for the purposes of the film, the three trials that took place over three years were condensed into one giant and very dramatic trial. This, alongside the faux-documentary style, had the effect of making the conspiracies and rumours
feel real and adding credibility to Stalin’s phony charges. The film took Davies’ opinion that the trials served to rid the USSR of Nazi opposition and presented it as true to a massive audience. Thus Hollywood was presenting the trials as just and the defendants as guilty. With the film making such significant claims, it is important to look at how involved the government was in the making of such an overtly political film. After the war, in the McCarthy-era hearings, the House Un-American Activities Committee (HUAC) would use the film to show how communists in Hollywood had influenced it, and would try to conclude that FDR himself had ordered its production. There were certain connections between the book, the film and the Roosevelt administration. FDR was pleased that Davies, his friend from the Woodrow Wilson administration, was doing something to encourage the war effort in writing his book, and Sumner Welles, the Under Secretary of State, allowed him to use previously confidential state documents to do so. Furthermore, when the President received his own copy of the book, it is claimed that he wrote, “This book will last.” There were also repeated claims that the film was commissioned by the Roosevelt administration with the direct purpose of warming Americans to the idea of allying with the USSR. These claims were made by many members of the production team, including the Warner Brothers themselves, who claimed that at a dinner at the White House Roosevelt had told them he would fund the film. However, they directly denied having made this claim in the McCarthy-era hearings, and in spite of Roosevelt having wanted the public to support the wartime alliance, there is little other evidence of his involvement in the making of the film. Unlike the book, the film was not met with popular success.
It made a loss at the box office and, on the whole, its factual inaccuracy and false portrayal of Soviet leaders and events meant that it was received very badly. Furthermore, the filmmakers’ dishonesty and attempted indoctrination of the public did not go unnoticed. HUAC would use Mission to Moscow as one of three examples of pro-Soviet films made by Hollywood (alongside The North Star and Song of Russia). Some of the production team were prosecuted in the McCarthy trials, and the film will be remembered as one of the most “blatant pieces
of pro-Stalinist propaganda ever offered by American mass media.”
HOW COMING-OF-AGE LITERATURE INSPIRES SELF-DISCOVERY Claire Laurence (WHS) One of the key functions of literature is its ability to reflect reality. Acting as an artistic depiction of the world around us, literature not only informs our interpretation of our surroundings, but also sharpens and enhances our understanding of ourselves. Literature is, as Virginia Woolf states in her collection of essays ‘A Room of One’s Own’, ‘attached ever so lightly perhaps, but still attached to life at all four corners’31. Within the pages of a novel, those who feel lost in their lives or misunderstood by others can find an accurate representation of their plight, and in vicariously living out the scenario through the characters and situations of fiction, the individual in question can come to terms with their own problem and perhaps find a way to resolve it. Beyond the simple power of reading literature, there is an even stronger force at work when one pens one’s own fiction. This is one of the truest forms of self-discovery: in distancing oneself from the problems one is facing, one can gain further insight into potential solutions; it is the classic technique of gaining some perspective on one’s own life. And as life around us grows more complex, as we become increasingly aware of various social issues and gain the momentum and ability to speak out against them, more and more literature is written about such social issues in a bid to find solutions. Enter: the coming-of-age novel.
31 Woolf, V. (1929). A Room of One’s Own. London: Hogarth Press
Defined by Suzanne Hader as ‘the story of a single individual's growth and development within the context of a defined social order’32, the coming-of-age novel or, as it is known more academically, the ‘bildungsroman’, charts the journey into adulthood of one or more characters, complete with all the trials and tribulations of adolescence. The birth of the genre is widely considered to be Goethe’s 1795-1796 novel ‘Wilhelm Meister’s Apprenticeship’, a book which greatly influenced and inspired first Europe and then the entire literary world. Since then, a considerable number of very famous bildungsromans have been published, read and hailed as great works of literature. What is interesting, however, is the spike in popularity which the bildungsroman saw in the 20th century. There are certainly earlier examples: one of the most famous 19th-century bildungsromans is Charlotte Brontë’s ‘Jane Eyre’, whose central character ‘clearly struggles to define herself by her own terms’33 and take control of her own narrative against those who would seek to dominate it. Voltaire’s ‘Candide’ and Dickens’ ‘David Copperfield’ also stand out as works of literature by authors who dominate the canon. But one need only glance at the Wikipedia article on bildungsromans to discover that from 1901 to 2000 the genre saw an enormous boom in popularity, one which shows no sign of stopping as we move into the 21st century. From Joyce’s ‘A Portrait of the Artist’ to J.D. Salinger’s ‘The Catcher in the Rye’, and more contemporary works such as ‘The Perks of Being a Wallflower’ (Stephen Chbosky) and John Green’s ‘Looking for Alaska’ (equally famous for one of the best Manic
32 Hader, S. (1996). The Bildungsroman Genre: Great Expectations, Aurora Leigh, and Waterland.
33 Lollar, C. (1997). Jane Eyre: A Bildungsroman
Pixie Dream Girls in all of literature), the bildungsroman has been transformed into a massively successful industry. But what renders such stories so compelling to readers? In part, the success of the coming-of-age story can be attributed to the fact that its themes are so universal. Everyone, after all, must mature; everyone must face the horrifying journey from childhood into adolescence, and everyone must emerge on the other side, battered and bruised no doubt, but all the better for the experience. We love to read literature because it ‘hold(s)… the mirror up to nature’34 and shows us our own lives, perhaps enhanced or improved by the presence of some deus ex machina to swoop in and save the day. The themes present in coming-of-age novels – friendship, love, fitting in and standing out, individuality, bullies, oppression and hardship – are present in everyday life; we recognise them as part of our own cultural experiences. Virginia Zimmerman writes that ‘To come of age is perhaps the most common ground there could be among readers’35, as most people old enough to read novels about sex and smoking are old enough to have experienced such things themselves, or certainly to be aware of them. Bildungsromans have an inherent appeal because they have universal themes and sympathetic protagonists with whom we can identify and for whom we can feel sympathy. More than that, however, the bildungsroman has a remarkable ability to solve problems and prompt self-discovery. Precisely because the themes are so universal, coming-of-age novels enable readers ‘to encounter feelings, challenges, and relationships they
34 Shakespeare, W. (1906). Shakespeare Complete Works. London: Oxford University Press
35 Kitchener, C. (2017). Why So Many Adults Love Young Adult Literature. Retrieved from www.theatlantic.com
recognize from their own lives’36, and in doing so, experience such feelings, confront such challenges and engage in such relationships without any of the risk of doing so in real life. When one experiences life through the eyes of another individual, it not only casts perspective on one’s own life, but also provides some idea of how to respond to situations. It may sound basic or infantile, but a character ‘getting something wrong’ in a novel can inform readers that a similar response is likely not appropriate in a real-life setting. Furthermore, readers can accompany characters on their journeys, and once the book is finished, they can contemplate their own experiences more deeply and consider their next steps. Bildungsromans prompt a degree of self-discovery not available from literature which is more distanced from us; without the ability to relate to a character, readers cannot necessarily gain insight into their own lives. We like literature because it reminds us of ourselves – that much is true no matter the genre. There is something universal to be found in every book, even the highest of fantasies or the most obscure of historical novels. But there is no genre quite as directly universal, or which prompts such complete self-discovery, as the coming-of-age novel. It grants the opportunity to meditate deeply on one’s own issues and experiences and come to some conclusion about how to act. Its rising popularity is evidence both of how pleasant it is to read and of the effect it has on our lives; without it, we would perhaps have less empathy for the experiences of others and of ourselves.
36 Kitchener, C. (2017). Why So Many Adults Love Young Adult Literature. Retrieved from www.theatlantic.com
DOES THE DISCOVERY OF MICROPLASTICS IN THE DEEP OCEAN MEAN THE PLASTIC SEA WASTE PROBLEM IS MORE SERIOUS THAN WE REALISE? Charlotte Furness (OHS) In 1982, JAMSTEC, the Japan Agency for Marine-Earth Science and Technology, found its first piece of debris on the ocean floor in Sagami Bay, just south of Tokyo. The first confirmed piece of plastic debris found on a mission was in 1983 in Suruga Bay, and since then, 35% of the debris recorded on JAMSTEC missions has been identified as plastic37. This represents only some of the problems concerning plastics within our oceans. The first synthetic plastic was invented in 1907 by Leo Baekeland, and the plastic industry was particularly driven forwards by the Second World War, when substitutes for other, less
37 JAMSTEC Deep-sea Debris Database, http://www.godac.jamstec.go.jp/catalog/dsdebris/metadataList?lang=en
38 Laurence Knight, A brief history of plastics, natural and synthetic, BBC News, https://www.bbc.co.uk/news/magazine-27442625
39 Sarah Gibbens, Microplastics found to permeate the ocean’s deepest points, National Geographic, https://www.nationalgeographic.com/environment/2018/12/microplastic-pollution-is-found-in-deep-sea/
40 Josh Gabbatiss, Microplastics ‘pose major threat’ to whales and sharks, scientists warn, Independent, https://www.independent.co.uk/environment/microplastics-ocean-pollution-whales-sharks-threat-plastic-coffee-cups-microbeads-a8194131.html
41 Adam Vaughn, UK government to ban microbeads from cosmetics by end of 2017, The Guardian, https://www.theguardian.com/environment/2016/sep/02/uk-government-to-ban-microbeads-from-cosmetics-by-end-of-2017
available materials were needed38. Microplastics are small pieces of plastic measuring less than 5mm39. They may occur as a result of larger pieces of plastic breaking down, or be manufactured for products such as cosmetics. Plastics are broken down into smaller pieces more readily once they enter the sea40. Recently, the UK Government banned the use of microbeads in such products41, and while this is a step forwards, it does not solve the looming hazard these microscopic pieces of plastic pose. The Great Pacific Garbage Patch, located between California and Hawaii in the Pacific Ocean, is three times the size of France and is almost entirely made up of plastics, yet only 6% of these are larger than 5mm42; in other words, 94% of the plastic that makes up this giant floating mass of debris is microplastic. The plastic is transported there by circulating ocean gyres, which help to contain the plastic once it arrives43. Plastic from Asia takes one year to join this ever-growing mass, while plastic from North America takes six years44. An estimated 70% of marine debris sinks to the bottom of the oceans45, which implies that this large collection of floating debris barely scratches the surface of the true amount of plastic within our oceans.
42 Surfer Today, Great Pacific Garbage Patch: Everything you need to know about the giant plastic island, https://www.surfertoday.com/environment/14283-great-pacific-garbage-patch-everything-you-need-to-know-about-the-plastic-island
43 Oliver Milman, ‘Great Pacific garbage patch’ sprawling with far more debris than thought, The Guardian, https://www.theguardian.com/environment/2018/mar/22/great-pacific-garbage-patch-sprawling-with-far-more-debris-than-thought
44 Greg Wiszniewski, Clean Our Oceans: The Impact of the Great Pacific Garbage Patch, BB Cleaning, https://www.bbcleaningservice.com/great-pacific-garbage-patch.html
45 Mark McCormick, How the Great Pacific Garbage Patch is Destroying the Oceans and the Future for Marine Life, One Green Planet, http://www.onegreenplanet.org/environment/great-pacific-garbage-patch-is-destroying-the-oceans/
The Mariana Trench in the Pacific Ocean contains the deepest known points on Earth, at up to 11,000m46. A study of microplastics in the Mariana Trench found up to 13.51 pieces of microplastic per litre of seawater, and within the sediment this increased to a maximum of 2,200 pieces per litre47. Another study estimated that there were up to 236,000 tonnes of microplastics, made up of 51 trillion particles, within the oceans, yet this represented only 1% of all the plastic that entered the ocean in 201048. If you consider the 8 million tonnes of plastic waste that is added to our oceans annually49, then it is easy to see how this problem could get further and further out of hand, causing more and more damage to the environment and to ourselves. In 2016, the Okeanos Explorer investigated the Mariana Trench and found that 17% of the plastic it encountered there had had some contact with marine life50. Plastics are a threat to marine life on a number of different levels. They are ingested directly by wildlife that mistakes them for food; this leaves less space in the stomach for actual food, leading to unintentional starvation as the plastics make the animals feel full. Plastics can also
wreak havoc inside animals if they have sharp edges that can pierce the gut and internal organs51. Both of these can lead to death, which in turn will affect the whole food chain, with unknown consequences. It is estimated that 99% of all seabirds will have ingested plastic by 205052, which could have unprecedented consequences for the seabird population and their food sources lower down the food chain. Filter feeders such as whales ingest large amounts of microplastic when they filter seawater for food, both from the water itself and from the stomachs of their prey53. Plastic can also be ingested indirectly, which could lead to a build-up of plastics within the stomachs of top predators. Microplastics pose a threat to human health as well. In 2017, 83% of global tap water samples contained plastic54. One study found that microplastics were particularly common in oysters and mussels, with an average European diet exposing the consumer to 11,000 microplastics annually55, and another found that all mussels from UK coastlines and supermarkets contained at least some microplastic56. We are more likely to ingest microplastics from mussels, as they are filter
46 Becky Oskin, Mariana Trench: The Deepest Depths, Live Science, https://www.livescience.com/23387-mariana-trench.html
47 Peng, X., Chen, M., Chen, S., Dasgupta, S., Xu, H., Ta, K., Du, M., Li, J., Guo, Z., Bai, S. (2018). Microplastics contaminate the deepest part of the world’s ocean. Geochem. Persp. Let. 9, 1-5. https://www.geochemicalperspectivesletters.org/article1829#Fig3
48 Eric Van Sebille et al., A global inventory of small floating plastic debris, IOP Science, https://iopscience.iop.org/article/10.1088/1748-9326/10/12/124006
49 Earth Day Network, Fact Sheet: Plastics in the Ocean
50 Sarah Gibbens, Plastic Bag Found at the Bottom of World’s Deepest Ocean Trench, National Geographic, https://news.nationalgeographic.com/2018/05/plastic-bag-mariana-trench-pollution-science-spd/
51 Hannah Ritchie and Max Roser, Plastic Pollution, Our World in Data, https://ourworldindata.org/plastic-pollution
52 Chris Wilcox, Erik Van Sebille and Britta Denise Hardesty, Threat of plastic pollution to seabirds is global, pervasive and increasing, PNAS, https://www.pnas.org/content/112/38/11899.abstract
53 Josh Gabbatiss, Microplastics ‘pose major threat’ to whales and sharks, scientists warn
54 Matthew Taylor, Plastics found in stomachs of deepest sea creatures, The Guardian, https://www.theguardian.com/environment/2017/nov/15/plastics-found-in-stomachs-of-deepest-sea-creatures
55 Lisbeth Van Cauwenberghe and Colin R. Janssen, Microplastics in bivalves cultured for human consumption, Science Direct, https://www.sciencedirect.com/science/article/pii/S0269749114002425
56 Josh Gabbatiss, All UK mussels contain plastic and other contaminants, study finds, Independent, https://www.independent.co.uk/environment/mussels-plastic-microplastic-pollution-shellfish-seafood-oceans-uk-a8388486.html
feeders and we eat them whole, unlike fish, which are gutted prior to eating. However, even our fish sources contain plastics: a study in Californian markets found that a quarter of the fish contained plastic in their guts57. The effects of the ingestion of microplastics on our own health are unknown, and whether the plastic remains within us or simply passes through is still unclear58. Plastic sea waste also poses a social and economic problem. Damage by marine debris to the marine industries of the 21 countries in the Asia-Pacific is estimated to cost around $1.26 billion annually59. The debris that enters our oceans also washes back up on our beaches and shorelines, which can be devastating to tourism-based economies: one study estimates that 10 pieces of debris per metre of beach will deter 40% of tourists, and that New York state lost up to $2 billion of tourist revenue as a result of unclean beaches60. With increased plastic in the oceans, it is more likely that more and more will end up on our shorelines, damaging the economies of seaside towns. Sea graves and historical shipwrecks, some of which still contain bodies that are yet to be recovered, are now being contaminated by plastic waste61. The issue of plastics is thus becoming an increasingly complex and moral one: the graves of those who died at sea are being polluted and desecrated by our own hand. The fact that microplastics have permeated to the deepest depths of our oceans, that we know where only 1% of the plastics within our oceans are62, and that the effects of microplastic on human health are unknown63 all suggest that the problems caused by plastic waste within the ocean are much more serious than we realise. By adding millions of tonnes of plastic to our oceans each year without knowing the full
extent of the consequences of our actions, or the implications for our own health or that of the environment, we may be contributing to what will become a serious health crisis. With every piece of plastic ever made still in existence64 and more being created, it seems that we are heading towards a crisis point not only for our own health, but for the health of the entire planet.
57 Centre for Biological Diversity, A Global Tragedy for Our Oceans and Sea Life, https://www.biologicaldiversity.org/campaigns/ocean_plastics/
58 Josh Gabbatiss, Microplastics ‘pose major threat’ to whales and sharks, scientists warn
59 Alistair McIlgorn, Harry F. Campbell and Michael J. Rule, The economic cost and control of marine debris damage in the Asia-Pacific region, Elsevier, https://pdfs.semanticscholar.org/9931/a1ed30b0618ed8cf353d3c56bf5f99afd126.pdf
60 Debris Free Oceans, Marine Debris Hurts Human Health, Marine Life, and South Florida’s Economy, http://debrisfreeoceans.org/marine-debris
61 Sarah Knapton, Ocean plastic now polluting shipwrecks and desecrating war graves of servicemen lost at sea, warn divers, The Telegraph, https://www.telegraph.co.uk/science/2018/10/17/ocean-plastic-now-polluting-shipwrecks-desecrating-war-graves/
62 University of Warwick, ‘Lost’ 99% of ocean microplastics to be identified with dye?, https://warwick.ac.uk/newsandevents/pressreleases/lost_99_of/
63 Josh Gabbatiss, Microplastics ‘pose major threat’ to whales and sharks, scientists warn
64 Centre for Biological Diversity, A Global Tragedy for Our Oceans and Sea Life
Bibliography
JAMSTEC Deep-Sea Debris Database. http://www.godac.jamstec.go.jp/catalog/dsdebris/metadataList?lang=en
Knight, L. (2014). A brief history of plastics, natural and synthetic. BBC News. https://www.bbc.co.uk/news/magazine-27442625
Gibbens, S. (2018). Microplastics found to permeate the ocean’s deepest parts. National Geographic. https://www.nationalgeographic.com/environment/2018/12/microplastic-pollution-is-found-in-deep-sea/
Gabbatiss, J. (2018). Microplastics ‘pose major threat’ to whales and sharks, scientists warn. Independent. https://www.independent.co.uk/environment/microplastics-ocean-pollution-whales-sharks-threat-plastic-coffee-cups-microbeads-a8194131.html
Vaughn, A. (2016). UK government to ban microbeads from cosmetics by end of 2017. The Guardian. https://www.theguardian.com/environment/2016/sep/02/uk-government-to-ban-microbeads-from-cosmetics-by-end-of-2017
Surfer Today. Great Pacific Garbage Patch: Everything you need to know about the giant plastic island. https://www.surfertoday.com/environment/14283-great-pacific-garbage-patch-everything-you-need-to-know-about-the-plastic-island
Milman, O. (2018). ‘Great Pacific garbage patch’ sprawling with far more debris than thought. The Guardian. https://www.theguardian.com/environment/2018/mar/22/great-pacific-garbage-patch-sprawling-with-far-more-debris-than-thought
Wiszniewski, G. Clean Our Oceans: The Impact of the Great Pacific Garbage Patch. BB Cleaning. https://www.bbcleaningservice.com/great-pacific-garbage-patch.html
McCormick, M. (2015). How the Great Pacific Garbage Patch is Destroying the Oceans and the Future for Marine Life. One Green Planet. http://www.onegreenplanet.org/environment/great-pacific-garbage-patch-is-destroying-the-oceans/
Oskin, B. (2017). Mariana Trench: The Deepest Depths. Live Science. https://www.livescience.com/23387-mariana-trench.html
Peng, X., Chen, M., Chen, S., Dasgupta, S., Xu, H., Ta, K., Du, M., Li, J., Guo, Z., Bai, S. (2018). Microplastics contaminate the deepest part of the world’s ocean. Geochem. Persp. Let. 9, 1-5. https://www.geochemicalperspectivesletters.org/article1829#Fig3
Van Sebille, E., Wilcox, C., Lebreton, L., Maximenko, N., Hardesty, B.D., van Franeker, J.A., Eriksen, M., Siegel, D., Galgani, F., Law, K.L. (2015). A global inventory of small floating plastic debris. IOP Science. https://iopscience.iop.org/article/10.1088/1748-9326/10/12/124006
Earth Day Network. (2018). Fact Sheet: Plastics in the Ocean.
Gibbens, S. (2018). Plastic Bag Found at the Bottom of World’s Deepest Ocean Trench. National Geographic. https://news.nationalgeographic.com/2018/05/plastic-bag-mariana-trench-pollution-science-spd/
Ritchie, H., Roser, M. (2018). Plastic Pollution. Our World in Data. https://ourworldindata.org/plastic-pollution
Wilcox, C., Van Sebille, E., Hardesty, B.D. (2015). Threat of plastic pollution to seabirds is global, pervasive and increasing. PNAS. https://www.pnas.org/content/112/38/11899.abstract
Taylor, M. (2017). Plastics found in stomachs of deepest sea creatures. The Guardian. https://www.theguardian.com/environment/2017/nov/15/plastics-found-in-stomachs-of-deepest-sea-creatures
Van Cauwenberghe, L., Janssen, C.R. (2014). Microplastics in bivalves cultured for human consumption. Science Direct. https://www.sciencedirect.com/science/article/pii/S0269749114002425
Gabbatiss, J. (2018). All UK mussels contain plastic and other contaminants, study finds. Independent. https://www.independent.co.uk/environment/mussels-plastic-microplastic-pollution-shellfish-seafood-oceans-uk-a8388486.html
Centre for Biological Diversity. A Global Tragedy for Our Oceans and Sea Life. https://www.biologicaldiversity.org/campaigns/ocean_plastics/
McIlgorn, A., Campbell, H.F., Rule, M.J. (2011). The economic cost and control of marine debris damage in the Asia-Pacific region. Elsevier. https://pdfs.semanticscholar.org/9931/a1ed30b0618ed8cf353d3c56bf5f99afd126.pdf
Debris Free Oceans. Marine Debris Hurts Human Health, Marine Life, and South Florida’s Economy. http://debrisfreeoceans.org/marine-debris
Knapton, S. (2018). Ocean plastic now polluting shipwrecks and desecrating war graves of servicemen lost at sea, warn divers. The Telegraph. https://www.telegraph.co.uk/science/2018/10/17/ocean-plastic-now-polluting-shipwrecks-desecrating-war-graves/
University of Warwick. (2017). ‘Lost’ 99% of ocean microplastics to be identified with dye? https://warwick.ac.uk/newsandevents/pressreleases/lost_99_of/
DISCOVERING DEMOCRACY: COLONIAL INVOLVEMENT IN THE POLITICAL JOURNEY OF THE DEMOCRATIC REPUBLIC OF THE CONGO. Ella Desmond (WHS) The Congo now consists of two parts: the Democratic Republic of the Congo (DRC, or Congo-Kinshasa) and the Republic of the Congo (Congo-Brazzaville). It was once, however, the Kingdom of Kongo, a free state with no involvement with European explorers or colonists. Since the 15th century there has been constant intervention by European countries, and the Congo has experienced the spread of Christianity, slavery, three wars with Portugal, Dutch invasion, battles, the Kongo civil war and colonisation, so it is no surprise that this area, and in particular the DRC, the second poorest country in the world65, is widely viewed as a place of corruption, war and suffering.
In the late 19th century, Belgian exploration into the Congo took place under the sponsorship of King Leopold II of Belgium. He gained the land at the Conference of Berlin in 1885, and so the exploitation of the Congo Free State, as he named it, began. The indigenous people were forced into slavery and rubber production, up to half the population were killed by disease and mistreatment, and King Leopold II used an army, the Force Publique, to enforce labour, dismembering those who did not comply66. This dictatorial regime did not go unchecked for long: British pressure forced the Belgian Parliament into annexing the Congo Free State as a Belgian colony, the Belgian Congo, in 1908. The Belgian Congo now came under the legislative authority of the Belgian Parliament, and the colonial manipulation continued as administrators exploited the land for natural resources that could profit the Belgian government, using the Congolese to do so. At the height of global political unrest the Force Publique also fought in the East African Campaigns of both world wars, defeating a German colonial army in German East Africa in WW1.67
In 1960 the Mouvement National Congolais (MNC) party won parliamentary elections, Lumumba became Prime Minister and parliament established Kasavubu as the first democratically elected President. In June that year the Belgian Congo gained independence; amid the mutiny of the Force Publique, leadership faltered, Europeans fled and the Congolese regained control of their country.68 This was not, however, the end of Belgian involvement in what was then referred to as Congo-Léopoldville: Belgium, with the help of the USA, removed and replaced the first Prime Minister, Lumumba, although some official recognition of this brutal involvement came when a 2001 investigation by the Belgian Parliament declared Belgium "morally responsible" for his death.69 There were subsequently many brief and unsuccessful governments, and even a coup supported by the USA as an African anti-communist government70, in the renamed Zaire region. The new president, Mobutu, became
65 J. Gregson (2017). Poorest Countries in the World. Global Finance.
66 J.D. Fage (1982). The Cambridge History of Africa: From the Earliest Times to c. 500 BC. Cambridge University Press. p. 748.
67 P. Brousmiche (2010). Bortaï: journal de campagne: Abyssinie 1941, offensive belgo-congolaise, Faradje, Asosa, Gambela, Saio (in French). Harmattan.
68 C. N. Trueman (2015). The United Nations and the Congo. Historylearningsite.co.uk, The History Learning Site.
69 A. Van De Velde (2010). Belgians accused of war crimes in killing of Congo leader Lumumba. Brussels: The Independent.
70 Unknown (2019). Democratic Republic of the Congo. [online] En.wikipedia.org. Available at: https://en.wikipedia.org/wiki/Democratic_Republic_of_the_Congo#cite_note-43
head of state and the leader of a single-party system of government; although elections were held, it was effectively a ruthless kleptocracy masked in superficial democratic practice. During the Cold War the Republic of Zaire had close political links with the USA, which also had a hand in the ending of Mobutu's regime in the 1990s71. Mobutu had stripped the country of its resources, including its infamous blood diamonds, which were smuggled out of the country72. Mobutu died in 1997, yet the DRC did not achieve democratic political development, as his regime had plunged the country into debt and corruption, with civilians affected most harshly. Kabila became president after launching the First Congo War that removed Mobutu; one year later, in 1998, the Second Congo War broke out, fuelled by multiple rebel armies all seeking control, during which the president was assassinated and succeeded by his son, Joseph Kabila73. A peace accord was made, then an election, a riot, and another multi-party democratic election, until finally, in 2006, Kabila was sworn in as president and a constitution devised74.
Unfortunately, even though a successful democratic election had taken place, conflicts claiming 5.4 million lives as of April 200775, rebellions including the M23 military rebellion of 2012-13, protests and more death followed, as leaders from countless rebel groups, including those from other countries, grappled for power. Moreover, despite his election, Kabila perpetuated the corruption and abuse of power: Human Rights Watch reported that an M23 fighter told their reporter, "Many M23 were deployed to wage a war against those who wanted to threaten Kabila's hold on power"76. Even though 2018 saw another general election in the DRC, there is scepticism concerning the validity of Tshisekedi's victory as an opposition challenger, owing to the Catholic Church's statement that the result is not concordant with that indicated by its election monitors77. It is clear that the ruthless colonial and imperial ruling of the DRC has left an incalculable impact on the political systems of the country today. The barbaric avarice introduced by European colonists, who exploited the land and people alike for personal gain, can certainly be seen in the corrupt political leaders of the present, willing to hire rebels and mercenaries to quash political or governmental opposition among the people, much as King Leopold II used the Force Publique to control the native Congolese. The DRC's political journey was hijacked by European settlers who enforced dictatorial regimes, and so the discovery of fair governance and democracy has been a slow and largely unsuccessful one with a high death toll. Multi-party elections have been introduced, yet with corruption and violence instilled from its past, the Democratic Republic of the Congo struggles to become what its name would suggest: a democracy.
71 A. M. Mangu (2006). The 2006 Constitution of the Democratic Republic of Congo. College of Law, University of South Africa. pp. 2-3.
72 (1997). The last days of Mobutu. The Economist.
73 (2016). ICC Convicts Bemba of War Crimes and Crimes against Humanity. International Justice Resource Center.
74 Unknown (2019). Wikipedia.
75 N. Kristof (2019). Opinion | Orphaned, Raped and Ignored. [online] New York Times, Nytimes.com.
76 J. Emerson (2017). DR Congo: Rebels Were Recruited to Crush Protests. [online] Human Rights Watch.
77 Unknown (2019). Outcry after DR Congo winner revealed. [online] BBC News.
Bibliography
Brousmiche, P. (2010). Bortaï: journal de campagne: Abyssinie 1941, offensive belgo-congolaise, Faradje, Asosa, Gambela, Saio (in French). Harmattan.
Economist, The (1997). The last days of Mobutu.
Emerson, J. (2017). DR Congo: Rebels Were Recruited to Crush Protests. [online] Human Rights Watch.
Fage, J.D. (1982). The Cambridge History of Africa: From the Earliest Times to c. 500 BC. Cambridge University Press. p. 748.
Gregson, J. (2017). Poorest Countries in the World. Global Finance.
International Justice Resource Center (2016). ICC Convicts Bemba of War Crimes and Crimes against Humanity.
Kristof, N. (2019). Opinion | Orphaned, Raped and Ignored. [online] New York Times, Nytimes.com.
Luning, P. (2014). A Tale of Two Congos Beyond the Headlines. [online blog post] Beyond the Headlines.
Mangu, A. M. (2006). The 2006 Constitution of the Democratic Republic of Congo. College of Law, University of South Africa. pp. 2-3.
Starr, F. (1911). The Congo Free State and Congo Belge. The Journal of Race Development, 1(4), 383-399.
Trueman, C. N. (2015). The United Nations and the Congo. Historylearningsite.co.uk, The History Learning Site.
Unknown (2019). Outcry after DR Congo winner revealed. [online] BBC News.
Van De Velde, A. (2010). Belgians accused of war crimes in killing of Congo leader Lumumba. Brussels: The Independent.
Wikipedia, Unknown (2019). Democratic Republic of the Congo. [online] En.wikipedia.org. Available at: https://en.wikipedia.org/wiki/Democratic_Republic_of_the_Congo#cite_note-43
FROM VICIOUS BEASTS TO MAN'S BEST FRIEND: A STORY OF EVOLUTION. Cara McMillan (OHS) We often try to dissociate our loveable furry friends from the 'dangerous and wild' creatures we call wolves. But in reality, wolves and domestic dogs are very closely related species, so much so that, along with coyotes, they can interbreed and produce viable, fertile offspring. The wolf is an established ancestor of the dog, so when did this transition from unapproachable beast to trusted companion take place? The exact time at which dogs began to be domesticated is under debate. It is agreed that grey wolves and dogs diverged from an extinct wolf species some 15,000 to 40,000 years ago, but the point at which they became man's best friend is unclear. Some theorists believe their domestication began when tribes would steal and raise young wolf pups to serve as guard dogs and hunters. However, the more commonly held theory is that the wolves themselves approached us. When food became scarce, wolves took to prowling around human camps in the hope of some reward. But only gentler, more obedient wolves would profit; aggressive ones would be killed. Hence the kinder ones survived. Human faeces were also an attraction for the wolves, which increased their exposure to us further. Once we realised that the wolves we had viewed as evil for so long could actually be safe to be around, humans introduced selective breeding, effectively altering evolution. This hypothesis is widely recognised as the most probable. For the past 7,000 years, dogs have been pretty much everywhere, moving with humans, predominantly joining their hunts for food and scavenging for leftovers. Unlike now, they wouldn't have had specific owners, but they would remain loyal to the travellers they had attached themselves to, as trust was important for both parties. You could think of them as the 'village dogs' of today.
Our journey to discovering that we could live harmoniously with these once-frowned-upon creatures is one I greatly enjoy learning about. The domestication of wolves gave us loyal companions and gave them a change of purpose. Dogs grew increasingly different from their more aggressive relatives, boasting fuller coats, floppy ears and wagging tails. Not only did their physical appearance change, but their mannerisms also adapted so they could cohabit alongside humans. They learned (and responded to) our gestures; they even took to actively wanting to please us. Traits like these are otherwise seen only in our own infants; even chimpanzees and bonobos (our closest relatives) cannot read our gestures as readily as dogs can. Considering this, it is unsurprising that 81% of Americans consider their dogs to be true family members, equal in status to children. With the transition from 'owners' to 'parents', humans and dogs have evolved a connection unlike any other. In addition to the obvious benefits of forming this special bond, dogs have accounted for numerous improvements in mental health. Studies have shown not only that keeping active is more likely if you own a dog, but also that we are subconsciously prompted to walk for longer depending on the strength of our attachment to our pup. This is in keeping with the idea that having a pet combats loneliness. A dog can act as a distraction from other stresses and serves as a great way of socialising with other dog-walkers and owners. Currently, it seems as if the advantages are endless; but what happens when our love for these animals runs dry, and they no longer sit as our top priority? As incomprehensible as it sounds to most of us, a huge number of dogs are abandoned and mistreated every year. But what is the reason given for 3.6 million of them entering shelters worldwide? When I researched this question, the most common excuse was 'behavioural problems'.
To me this seems unjustified: would the same proportion of people abandon a family member based on their behaviour? However, I continued to read with as open a mind as possible to try to discover the truth behind it all. A common theme I came across was owners not fully realising the
commitment looking after another being requires. This, accompanied by minimal effort, often prevents a bond from forming between human and dog, and the initial appeal of having one is forgotten. The sad reality for many of these pets is neglect and suffering, which I consider just as harmful as abandonment. Forming these relationships with dogs has shaped evolution, and our joint progression has produced companionships that play fundamental roles in our lives. I came across one story which I think highlights the importance of this. Julie Barton, a former dog owner, was suffering from severe depression. After struggling for too long, she decided to seek help. 'One of the ways to recover from a harrowing depressive episode is to truly feel connection again. You need to feel safe and loved,' she says. Julie remembers times in her childhood when she would lock herself in her room with the family dog, to ensure she had an environment where she wasn't judged, insulted or hurt. The solution to her pain seemed simple; she acquired a dog. Now Julie says, 'There is no question that his companionship changed my life.' Her dog, Bunker, has 'given her a reason to live', which I consider to be the greatest gift of all.
Bibliography
Barton, J. (2016). How my dog saved me from depression. The Telegraph.
Coren, S. (2011). Do We Treat Dogs the Same Way as Children in Our Families? Psychology Today.
Handwerk, B. (2017). How Accurate Is Alpha's Theory of Dog Domestication? Smithsonian Magazine.
Hare, B. (2013). Opinion: We Didn't Domesticate Dogs. They Domesticated Us. National Geographic.
PPP Healthcare (2017). 8 physical and mental health benefits of having a pet. Axa PPP Healthcare.
Towell, L. (2018). Why people abandon animals. PETA.