Serpentes Issue 4


SERPENTES: THE ACADEMIC JOURNAL

ISSUE 4, MICHAELMAS TERM 2019



Welcome back to Serpentes, the Radley College Academic Journal. Before you start reading, let us remind you of our aims. We aspire to change the way academic study is perceived at Radley. Due to the relatively constricted content of GCSEs and A-levels, academic work becomes more about ticking a box than thinking beyond it. We would like to change this misconception, because there is so much more to academic study than what is written in your textbooks. Do you have an interest you would like to write about? Have you read an especially good book recently? Did you solve or create a challenging maths puzzle? Send it to us and it could be published in the next issue. Whether you are a Sixth Former or a Shell, this journal belongs to you.

Editors

Alex Senior, Jake Elliot, Jake Hubbard, Jack Dhillon, Albi Tufnell, Muez Khan, Tobias Southwell

Contributors

Thomas Uglow, Peter Denton, Albi Tufnell, George Dring, Alex Senior, Dan PB, Timothy Bracken, Russell Kwok, Tobias Southwell, Yikai Zhou

Don-In-Charge

DLC


Contents

The Fermi Paradox - Thomas Uglow
The banjo is a far superior instrument to the ukulele - Dan Pleydell-Bouverie
The ukulele is a far superior instrument to the banjo - Alex Senior
Should the Federal Reserve be abolished? - Russell Kwok
A dark day for democracy - Peter Denton
Nobel Prize for Peace - Albi Tufnell
Nobel Prize for Chemistry - Timothy Bracken
Nobel Prize for Physics - Tobias Southwell
Nobel Prize for Literature - George Dring




The Fermi Paradox - Thomas Uglow

A Dyson Sphere

The diameter of the observable universe is about 90 billion light-years, meaning light would take 90 billion years to travel from one end to the other. Within this space there are, according to recent estimates, some 2 trillion galaxies, and about 100 billion planets in our galaxy (the Milky Way) alone. Within our galaxy, there are estimated to be about 40 billion Earth-like planets. It would seem foolish, and perhaps arrogant, to assume that life, even intelligent life, has not arisen on at least one other planet in the galaxy, let alone in the entire universe. With this information, one would expect to see hundreds of spaceships zooming across the galaxy on interstellar motorways. But we don't. For years scientists have tried to explain why we have not encountered even one other alien civilisation, the most prominent being the physicist Enrico Fermi, after whom the paradox is named. The paradox has never been answered, but scientists have made some decent attempts, which one can only call "best guesses", as there is absolutely no concrete evidence to base them on. It is worth introducing the Kardashev Scale, which groups intelligent civilisations into three categories based upon the amount of energy they have harnessed. Briefly, a type 1 civilisation can use all the energy on its planet. Humans have not even reached type 1 yet; according to a formula created by Carl Sagan, we are at about 0.7. Type 2 civilisations harness all the energy of their home star. We do not yet know how this could be accomplished, but a super-construction such as a Dyson Sphere might work (Figure 1). Finally, a type 3 civilisation has access to all the energy in its galaxy. This extraordinary feat is beyond imagination, but we know it would take millions of years, provided the civilisation had already mastered near light-speed travel, and, interestingly, it would probably require about as much energy as the civilisation could hope to gain from colonising the galaxy, making the task seemingly impossible.
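For the curious, Sagan's interpolating formula is simple enough to state. A worked sketch, assuming humanity's total power consumption is on the order of 10^13 W (an assumed round figure, close to the present global total; P is power in watts):

\[ K = \frac{\log_{10} P - 6}{10} = \frac{13 - 6}{10} = 0.7 \]

which reproduces the 0.7 quoted above.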



To return to the original paradox, one explanation is that there simply are no other type 2 or 3 (or high-level type 1) civilisations. This explanation rejects the idea that all the higher civilisations that exist have refused to contact us for whatever reason, as that is mathematically implausible. This cynical answer is simple and makes sense, but at the same time it is extremely unlikely, to the point of virtual impossibility, that there are no other civilisations on our level. And so this calls for another explanation.

There is a concept called "The Great Filter", which states that at some point between the origin of life and type 3 civilisation status there is a barrier, or filter, that intelligent species must pass to progress further. The question is at what point on a species' timeline the Great Filter occurs. There are a few options, on which the future of the human species depends.

The first, and most favourable, option is that we have already passed the Great Filter, and that we are one of the only species ever to have done so. This makes the human race extremely rare, and it is the scenario we would prefer to be in, as it means that in the grand scheme of things the rest of our future is plain sailing until we reach type 3 civilisation status, in the sense that we have no more evolutionary barriers to break. One obvious question is when exactly the human race passed this Great Filter, and there are a few possibilities. One is that the filter lies at the very beginning: it is highly unlikely that life should begin at all, and everything else usually falls into place afterwards. It may be that the jump from simple prokaryotic cells to complex eukaryotic cells is extremely difficult, and that most of the universe is teeming with simple unicellular organisms. Or the problem may lie in the conditions of planets: perhaps the particular conditions of our solar system and Earth are what allow intelligent life to form, which means the margin for error is infinitesimally small, even considering how many Earth-like planets there are. This is known as the Rare Earth Hypothesis.

It is also possible that we are one of the first of many intelligent civilisations, and that we are essentially universal pioneers on our way to super-intelligence. It may be that only recently has the universe become a place suitable for intelligent life to develop, and that we are one of the first, if not the first, to have developed since the Big Bang. This scenario means we are currently en route to becoming a type 3 civilisation, and it also nicely explains why we have not yet been visited by an advanced super-race of aliens: they would not exist yet.

If neither of these scenarios is the one in which we find ourselves, we can assume that the Great Filter is in fact ahead of us. This would mean that intelligent alien civilisations have existed in the past but were not able to make it through the Great Filter, which is why we have never encountered aliens in history. We could also conclude that we are on our way towards the Great Filter, and that the human race will in the future undergo the ultimate test, which, as probability suggests, we will not pass. This scenario does, however, explain why we do not see hundreds of spaceships flying in the sky at night: because we are, sadly, utterly alone. This future begs the question, "what would the Great Filter be for us?"
It is postulated that all intelligent civilisations, having reached a certain level of technological aptitude, end up destroying themselves with their own technology. One might imagine that humans would end up annihilating each other using nuclear weapons, or with a new weapon which we are yet to create.



It is also possible that global warming will become man's demise, in the sense that we will end up rendering the Earth uninhabitable. It is worth bearing in mind that all attempts to explain the Fermi Paradox are only theoretical, and that so far scientists have not been able to explain conclusively why we have never been visited by or encountered an intelligent alien species. There are a vast number of possibilities, and one would have extreme difficulty picking out the most plausible scenario. We can only wait to become the visited, or perhaps even to become the visitors.
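As a closing sanity check on the earlier claim that a total absence of other civilisations is mathematically implausible: take the article's own figure of roughly 40 billion Earth-like planets in the Milky Way and suppose each independently produces a detectable civilisation with probability p. Observing none at all then requires the expected count to be below about one (illustrative arithmetic only, not a rigorous argument):

\[ Np \lesssim 1 \;\Rightarrow\; p \lesssim \frac{1}{4 \times 10^{10}} = 2.5 \times 10^{-11} \]

That is, the per-planet odds of intelligence arising would have to be astonishingly small, which is precisely the intuition the Great Filter hypothesis tries to formalise.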

Enrico Fermi



The banjo is a far superior instrument to the ukulele - Dan Pleydell-Bouverie

The Byrds' masterful use of the banjo was bold and innovative. Image source: Financial Times

The banjo is more than a source of musical excellence; it has been integral to the development of some of the most consequential music of the 20th century.

The banjo has had a profound effect on popular culture and on some of the greatest musicians of current and previous generations. The ukulele, in comparison, is a classless and tacky souvenir. The modern-day ukulele has been culturally castrated from its Hawaiian roots. Its disgusting popularity has been its downfall: anyone with excess self-importance and £20 has access to it. In comparison to the banjo, the ukulele is an earache. I would go as far as saying it is musically inept and redundant. It has far superior counterparts that produce a better quality of sound, for example the mandolin. Failing that, a guitar will do the same job and produce a far higher calibre of sound. The primary reason for the ukulele's remaining popularity is that it is an incredibly cheap and affordable alternative to the guitar. The banjo, on the other hand, is a specialist, intellectual and more expensive piece of kit. The banjo has had a more profound effect on the other side of the Atlantic, where it found itself at the forefront of the highly intellectual "bluegrass" musical movement at the end of World War II. Combining predominantly black blues and jazz with white country music, it was a radical new genre. Just over ten years after the banjo's bluegrass heyday it was a clear influence on the biggest musical movement of the 20th century: rock and roll.



The banjo features heavily in some of the most quintessential rock and roll albums, such as the Eagles' highly underrated Desperado and Neil Young's rather overrated number one album, Harvest (the banjo does have more minor features in Young's better albums, like On the Beach). Not only has the banjo featured in some of the greatest songs of the last century, it has also had a profound influence on the playing techniques of leading members of this movement: for me, most notably, Roger McGuinn (lead member of The Byrds) and San Francisco cult icon Jerry Garcia of the Grateful Dead. Both musicians started on a banjo, and McGuinn's signature fingerstyle is a direct result of his banjo-playing roots. Both honoured the banjo by featuring the instrument on tracks from some of their most iconic albums: the Grateful Dead's "Cumberland Blues", from their 1970 triumph Workingman's Dead, and The Byrds' "I Am a Pilgrim", on their revolutionary Sweetheart of the Rodeo. This is not to say that the banjo has become irrelevant. Quite the contrary: many modern-day pop stars have attempted to feature it in their songs, with the most valiant attempt going to Lily Allen in her controversial track "Not Fair". It is clear that in a present-day popularity competition the ukulele would come out triumphant. Despite being invented well over a century ago, it clearly speaks for present-day popular music tastes: its tunes are lazy, repetitive, and tinny. The greatest feature I could find for the ukulele in popular culture is on "Ram On", from Paul McCartney's RAM. The track lasts less than a minute and perfectly encapsulates McCartney's attempt to hang on to the last strains of his ridiculous Beatles fame. Despite this, the ukulele does duet nicely with his rather reedy and pathetic voice. Taylor Swift, Train and Vance Joy are just a few who have abused the ukulele's joyful tone and reaped the benefits. The banjo is superior in every aspect to the ukulele, except current popularity. History's favour, however, always lies with the talented.

A Banjo



The ukulele is a far superior instrument to the banjo - Alex Senior

Dwayne 'The Rock' Johnson playing the ukulele

Music is the art form that unifies the seemingly impenetrable ivory tower of high art and the public. It has been proven countless times that humans have an innate sense of rhythm embedded in their psyche: thus, accessibility is the essence of music. It makes no sense, therefore, to value a niche instrument limited to the Southern swamps of the US and a couple of pretentious Billy Ray Cyrus wannabes (such as Andy from the US version of The Office) over the inclusive accessibility of the ukulele. Perhaps, in certain contexts, the banjo provides a variability that its four-stringed counterpart cannot match, but in its service to music the ukulele is unmatched. Make no mistake, the ukulele's contributions to music are not limited, like the banjo's, to the confines of the music it has created. Its service to music is found in the artists whom it has nurtured to success. Many artists find that their first instrument was a ukulele: hardly surprising, given that one would struggle to find a ukulele in excess of £150, whereas a decent banjo starts at £400. The ukulele breeds a love for the art of music that moulds the careers of many famous and successful musicians. In terms of user experience, the banjo provides little contest for the ukulele. Not only can the fundamentals be learnt within an hour, but its compact nature allows for unrivalled portability that imposes few geographical limitations on the user. It is hard to imagine a banjo being played whilst paragliding, yet this is commonplace for the ukulele. For the experienced user, it provides an adaptability that renders genre subsidiary. There are few songs that cannot be performed on the ukulele. Truly great music is the result of years of labour and mastery. I would argue that it requires a superior musician to exploit an instrument that many see as simple and juvenile to create beautiful, complex music, than to use one whose intricacies are readily apparent due to its complex nature.



Should the Federal Reserve be abolished? - Russell Kwok

The Federal Reserve - Washington D.C.

The purpose of this article is to investigate the fundamental purposes of the Federal Reserve and to weigh its advantages and disadvantages. The period of greatest economic peace and prosperity in the United States is said to have been 1841-1856, which coincided with the absence of any central bank. (Davis & Weidenmier, 2017) This statistic, and many others like it, raises the question of whether a central bank is necessary at all, or whether it should be diagnosed as a case of governmental over-intervention creating unnecessary red tape. We will therefore examine the legality, constitutionality and utility of the Federal Reserve to investigate such claims.

Legality

The Federal Reserve is, in essence, a public-private partnership with the largest banks in the United States; that is to say, a quango. It differs in that it is not held accountable to the rule of law applicable to every other entity in the United States: it is allowed to counterfeit and alter money at will (with that money becoming legal tender) and to manipulate markets. Such favours and benefits are condemned not just by modern-day citizens but also in ancient times; it is stated in the Book of Proverbs 20:23 that 'The Lord detests differing weights, and dishonest scales do not please him.' The Fed's control of the money supply and of liquidity also gives it complete discretion over the economy; in other words, economic planning, which has never been proved to succeed in any economy. This also makes it a monopoly, liable to dissolution under the Sherman Antitrust Act of 1890. This is evident in the passage describing the crime: '. . . or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce. . . shall be deemed guilty of a felony. . . ' (Sherman, 1890)



With the Federal Reserve being controlled by nationally chartered commercial banks that can elect its members, it looks more like a conspiracy of banks than a benevolent government institution. Furthermore, the Federal Reserve is tasked with deciding interest rates. It does so through the large-scale buying or selling of fixed-income securities issued by the Treasury from private banks; such manipulation would result in large fines, and sometimes imprisonment, if it occurred in the private sector. Moreover, these actions imply that the twelve members of the committee are better than the entire market at deciding the optimal lending rate, which is quite an authoritarian and bold position given that, as history has shown, the free market is commonly the best pricing mechanism. A large number of arbitrageurs from the private sector could do much better than the Fed, since they bear capital risk and would be compelled to take advantage of market inefficiencies for profit. One could also question the impartiality of the Federal Reserve, which is unsurprising given that its shareholders are some of the biggest and most influential banks in the United States. During the 2008 financial crisis and its aftermath, the Federal Reserve authorised 16.115 trillion USD worth of loans to such institutions. (Webster, 2011) This occurred even after the 700 billion USD bailout under the Troubled Asset Relief Programme (TARP) was passed by Congress. Such behaviour is highly suspect, and many members of society would decry it as an example of crony capitalism were it more transparent. The necessity of these bailouts and loans is of course highly debatable, but the fact of such massive loans remains.
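A minimal illustration of the mechanism behind these open-market operations, using made-up numbers rather than anything from the article: for a one-year zero-coupon Treasury bill with face value 100, the price P and the yield r are tied together by

\[ P = \frac{100}{1+r} \]

so large-scale buying that bids the price up from 95.24 to 97.09 mechanically pushes the implied yield down from about 5% to about 3%. This inverse price-yield relation is the lever by which bond purchases lower interest rates.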

Constitutionality

The Federal Reserve was quite vehemently opposed by many of the founding fathers, notably Thomas Jefferson, but supported by others such as Alexander Hamilton. The former, for example, is reported to have said: "If the American people ever allow private banks to control the issue of money. . . [they] will deprive the people of their property." The latter part of this remark implies a violation of the Constitution: '[It is] the right of the people to be secure in their persons. . . and effects, against unreasonable searches or seizure, shall not be violated. . . ' according to the Fourth Amendment. The observant may ask how the Federal Reserve accomplishes this; it does so simply by the manipulation of interest rates, which affect inflation and deflation, which is really just control of the purchasing power of the people. Over the past 105 years, the US dollar has lost a little over 96% of its original purchasing power. (Smith, 2012) This reflects the recklessness of the Federal Open Market Committee (FOMC) and how it breaches the supreme law of the land in the United States.

Moreover, we have to classify the hierarchy of members of the FOMC to investigate another allegation. It is stated in Article II, Section 2, Clause 2 of the US Constitution that '[The President] shall nominate, and by and with the Advice and Consent of the Senate. . . all other officers of the United States. . . but the Congress may by Law vest the appointment of such inferior officers. . . ' The question is whether FOMC members are inferior officers, as opposed to 'normal' officers. In its ruling on a federal case concerning the Independent Counsel Act, the US Supreme Court identified a distinctive feature of inferior officers:



removability by a higher official in the Executive branch other than the Presidency itself. (Morrison v. Olson, 1988) If we follow this train of thought, all members of the FOMC would be 'normal' officers. This is because the members of the Board of Governors serve fourteen-year terms unless they are removed by the President for cause, as per Section 10.2 of the Federal Reserve Act. (Federal Reserve, 1913) The 'for cause' provision is highly vague and likely a source of frequent vexation for observers. Returning to the main point, the five other members of the FOMC, who are not from the Board of Governors, are similarly 'normal' officers, even though they are elected by their regional Federal Reserve Bank directors. As seen from the constitutional extract above, such officers must be nominated by the President with the advice and consent of the Senate. Members of the Board of Governors are therefore constitutional under that clause, but the other members of the FOMC are not. Hence, we can conclude that the appointments of the other five members are unconstitutional.
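To put the 96% figure quoted above in perspective, a quick calculation, assuming the loss of purchasing power was spread evenly across the 105 years: if prices rise by a factor f each year, then

\[ f^{105} = \frac{1}{0.04} = 25 \;\Rightarrow\; f = 25^{1/105} \approx 1.031 \]

so the headline figure corresponds to average annual inflation of only about 3.1%; it is the compounding over a century, not any single dramatic episode, that does the damage.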

Utility

The Federal Reserve has been said to be a great saviour of the economy, bailing out large failing banks and bringing the economy back from the brink of depression. However, this may not be the case if we examine its record in combating recessions. Firstly, let us investigate the Great Depression. Its causes are still being studied and debated, but a few main factors have been highlighted by the major economic schools of thought. The unwise monetary policy of the Federal Reserve during the so-called 'Roaring Twenties' was a key factor, since rates were lowered from approximately 5% down to slightly above 0%. Rate rises only started in 1928, by which time it was too late and the economy had already overheated. This 'easy money' policy increased borrowing, pushed total debt-to-GDP levels to new highs and created unnecessary risk. (Vague, 2014) This pattern of low interest rates to fend off a recession, only for a bigger one to follow, has recurred since 1987, when Alan Greenspan became chairman of the Federal Reserve. His successors, like Bernanke, have likewise 'succumbed' to this low-interest-rate policy, allowing a multi-decade bull market in fixed-income securities. Stock market crashes like the dotcom bubble and the great recession of 2008 can be attributed to such policies. (Kaul, 2015)

Furthermore, the Federal Reserve has often underestimated events, and its statements are often incorrect. Chairman Bernanke stated during 2008 that Fannie Mae and Freddie Mac, the government-backed mortgage companies, were 'in no danger of failing'. (CBS News, 2008) With the benefit of hindsight, we now know that statement was wholly fallacious, since both failed shortly afterwards and had to be rescued by the government. Central bank enthusiasts may say that this was a three-sigma event, in other words a rarity, but the Fed has a history of bad policies and calls that may influence less experienced central bankers in the emerging markets. According to an article on CNBC, 'The world's central banks are making the same mistake the Fed made in the years leading up to 2007. . . the longer it goes on, the harder it is to withdraw monetary support for asset prices. . . ' (Choudhry, 2013) Committing a mistake yourself is bad, but influencing others to do the same is contemptible.



Judgement

To many, the Federal Reserve is a good entity, providing government intervention when the economy warrants it. However, from our investigation we can see that matters are not so simple once we examine its legitimacy and how well it performs its functions more deeply. It has proven to be an outstanding example of profligate bureaucracy. At one extreme, it is an illegal quasi-governmental agency that is not competent at its job; on the other hand, it does function as the lender of last resort, keeping liquidity in check and ensuring credit crunches can be resolved. Nevertheless, even if the power of the Federal Reserve were curtailed and regulated more stringently, the supreme law of the United States does not permit the existence of such a central bank, especially as read by judges of the originalist school, making the case for its abolition extremely strong.

Jerome Powell - Current Chairman of the Federal Reserve



A dark day for democracy - Peter Denton

On Tuesday 24th September it was decided that the government's proroguing of Parliament was unlawful. Judges in the Supreme Court said this was because the suspension "had the effect of frustrating or preventing the ability of Parliament to carry out its constitutional functions without reasonable justification". In voiding the prorogation on the basis of it being unlawful, the ruling made the executive's use of prorogation de facto illegal, something Corbyn acknowledged in criticising Johnson for "acting illegally". While many celebrate this decision, seeing it as preventing a rogue government from excessively exerting executive authority, the principle invoked to do so has grave ramifications. This marks a stark entry of the UK courts into the political arena, applying interpretations of our uncodified constitution to political decisions which are, by convention, left to the discretion of the respective political decision-makers. Devolving to an unelected judiciary the power to decide which conventions should be followed by legal compulsion has never been part of the English legal tradition, and for good reason.

The UK functions on the basis of parliamentary sovereignty. Parliamentary behaviour, including that of collectives (the Lords, the Commons, the government, the opposition) and of individuals (MPs, Lords, the Speaker) within Parliament, is informally governed by uncodified constitutional conventions. Bercow tells us of the purely advisory nature of these conventions: "I am not in the business of invoking precedent, nor am I under any obligation to do so. . . if we were guided only by precedent nothing would ever change". Laws made by Parliament are interpreted and enforced by the judiciary, and these laws apply both to Parliament as an institution and to its parliamentarians. Prime Ministers may only appoint justices to the Supreme Court on recommendation from an unelected special selection committee, and there is no mechanism to remove justices from the court.

This devolution, from Parliament to the courts, of the power to decide what behaviours are de facto legal and illegal allows unelected judges to curtail parliamentary sovereignty; at the collective level, applying to the government in this instance, but the legal precedent is now set for attacks on other parliamentary behaviour. As a result, parliamentary sovereignty is now subordinate to an enlightened few who decide, entirely of their own accord and not from existing law, whether our elected officials have acted properly, and consequently whether our elected officials' actions are to be annulled or not.



For the same reason that we do not have any unaccountable legislative bodies (Parliament being accountable given the supremacy of the Commons), the courts should not be given sovereign power: if they do not come down on your side, there is absolutely nothing you as a citizen can do about it, for the courts are sovereign in this new ability to curtail parliamentary sovereignty. Take the 1637 ruling suggesting the King may tax without parliamentary consent. Of course, this is different in the sense that it was a ruling in favour of non-interventionism, and so did not expand court power into the political realm; nevertheless, the potential inadequacy of rule by enlightened judges is demonstrated by this case. There are several further examples in recent political history where a similar ruling, on this new precedent, could be made by the courts, stifling the sovereignty of both collective and individual components of Parliament.

To start recently, Bercow allowed the MP Dominic Grieve to table an amendment to a government procedural motion this January. This is not permitted by constitutional conventions. It would be reasonable to conclude that his justification for reducing executive authority was political, given that this deviation from convention had the effect of weakening the government's Brexit negotiating position. On the basis of this new ruling, the courts could suggest that Bercow frustrated Parliament's ability to carry out its constitutional function of properly scrutinising government behaviour, by improperly allowing excessive scrutiny that stifled the government's ability to govern, without reasonable justification, since the Speaker is, by convention, impartial. Hence, by a ruling of the judiciary rather than a law of Parliament, Bercow's decision to allow the amendment could be ruled unlawful and reversed.

A second example is Corbyn's successful opposition to the government, leading to its inability to govern with the support of the Commons, coupled with his concurrent failure either to table a vote of no confidence or to vote for a general election. He has thus acted against constitutional convention, and in doing so has stymied Parliament from carrying out its constitutional executive functions, which require a government able to command the support of the House. Some may argue he is justified in doing so; it seems more likely he is not. To avoid Johnson setting the election after the 31st October, he could table a bill, setting a specific date for the election, before the 31st. Rather, it seems he has cynically stymied Parliament from carrying out its constitutional functions to gain a political advantage in any potential election, hoping the failure of Britain to exit the EU on the 31st October will split the Brexit vote between the Conservatives and the Brexit Party. This would not amount to justifiable grounds under the new precedent, for this reasoning is similar to the suspected reason Johnson stymied Parliament: to gain a political advantage to promote his agenda, only Corbyn's agenda certainly does not have the backing of a national referendum. Thus, a court could rule Corbyn's actions unlawful and either reverse his backing of legislation that has crippled the executive, or (albeit more controversially, and so less likely) force an election.

John Major prorogued Parliament for three weeks in 1997. This unusually long prorogation meant the release of a report into Conservatives taking bribes was delayed until after the 1997 election.



Major contravened the constitutional convention of short prorogation, in order to form a Queen's Speech, and in doing so stymied Parliament's constitutional function of scrutinising MPs' behaviour and providing information useful to the public. Under the new precedent set on the 24th September this year, a court could have ruled this prorogation unlawful, forcing Major to recall Parliament.

Wilson's calling of the first UK-wide referendum in 1975, on membership of the European Community, could be seen as violating parliamentary sovereignty and the constitutional norm of representative democracy by invoking direct democracy. This time the courts would be deeming a broader collective within Parliament to have acted unlawfully: the House of Commons, in passing the referendum bill. While the Supreme Court is limited in its powers of judicial review from overturning primary legislation (laws passed by Parliament as a whole), there is no limit preventing the court from deeming the behaviour of sub-components of Parliament, such as the individual Houses (the Commons and the Lords), unlawful, and so overturning their decisions. On the basis of the 24th September ruling that the courts can overrule the sovereignty of sub-components of Parliament, the courts could void the Commons' vote on the grounds that it undermined parliamentary sovereignty. Similarly, under this new precedent, judges could rule that the House of Lords' rejection of the People's Budget in 1909 was unlawful, breaching the constitutional convention that the Lords accepts all government Budgets, and so preventing Parliament from carrying out its constitutional function of passing fiscal legislation.

By now you probably get the point: there are many instances in which this ruling could allow courts to overrule, and so violate, parliamentary sovereignty; and I am sure, given my youth and accompanying dearth of knowledge of parliamentary history, there are countless other instances where the court could do so. You may find this parliamentary behaviour right or wrong. The point is, whichever side you fall on in each case, it can surely be agreed that an unelected body should not be able to overrule Parliament's components, unless it is applying laws previously passed by Parliament. Justice will not always come down on your side of the issue, as you might find in the 1637 case. If this principle is not upheld, parliamentary sovereignty is lost, and our country is stuck ad infinitum with uncodified constitutional conventions, selected and interpreted by unelected judges, dictating our politics.

There is a solution to the constitutional problems posed by Johnson's prorogation (and by the other cases cited here). As stated in the introduction, Parliament can simply pass laws enacting constitutional conventions if it deems them significant enough that they should be followed. This means the decision-making power is retained by a legislative body accountable to the electorate. Following John Hampden's failed court case in 1637, Parliament passed a law in 1641 making the collection of Ship Money (what Hampden was challenging) illegal. It was never collected again. Following the abuse of prorogation under the Stuart monarchs, the Triennial Act was passed in 1694, meaning Parliament had to meet at least once every three years. Following the Lords' rejection of the 1909 Budget, the 1911 Parliament Act was passed, withdrawing the Lords' veto on Budget bills.



By all means, a bill could have been rapidly passed by Parliament at the beginning of September, prior to the prorogation (as the Benn Bill was), making prorogation for more than a specific period illegal. It could have made exceptions for wartime, or could have been made a temporary law with an expiry date if so desired. The failure of MPs opposed to Johnson's prorogation to take the necessary political steps to avoid it should not be used to justify a significant shift in the role of the judiciary, with unknowably significant and damaging consequences for parliamentary sovereignty. Britain has rightly never subscribed to the Platonic ideal of Philosopher Kings. Let us not start now because of a political mishap.

Bercow



Nobel Prize for Peace - Albi Tufnell

The 2019 Nobel Peace Prize was awarded to the Ethiopian prime minister, Abiy Ahmed, "for his efforts to achieve peace and international cooperation, and in particular for his decisive initiative to resolve the border conflict with neighbouring Eritrea". The war between the two countries began on 6th May 1998 and was sparked by a battle for control of the border town of Badme. This initial war, often described as 'two bald men fighting over a comb', was purely over which country the town belonged to. However, because Eritrea had only been declared independent from Ethiopia in 1993, after a 30-year guerrilla war, the conflict divided the populations of both countries. Abiy Ahmed had a peace deal signed within three months of becoming prime minister, ending a nearly 20-year military stalemate. He has not only made peace for his own country but has also engaged with peace deals between Eritrea and Djibouti, between Kenya and Somalia, and between the Sudanese military and their opposition. The Norwegian Nobel Committee wants to encourage and recognise his efforts at promoting reconciliation and solidarity.

The prize has been seen as a very positive step forward for Africa as a whole. Raila Odinga, the former Kenyan prime minister, explained that the prize is "an honour to our continent which has long been held back by wars". Likewise Ahmed, having received the award, declared: "It is a prize given to Africa, given to Ethiopia, and I can imagine how the rest of Africa's leaders will take it positively to work on the peace-building process in our continent". António Guterres, the Secretary-General of the United Nations, also said that "This milestone has opened up new opportunities for the region to enjoy security and stability, and Prime Minister Ahmed's leadership has set a wonderful example for others in and beyond Africa looking to overcome resistance from the past and put people first".

However, whilst Ahmed clearly deserves the prize and recognition for his efforts, there are many who believe the decision overlooked a key young activist, Greta Thunberg. Thunberg was the favourite to win the prize, at 2/5 with William Hill on the Thursday night. She requires little introduction, given the amount of work she has done in promoting climate change awareness and the accountability she has placed upon so many of the world's key political figures. Some argue that she missed out because the shortlist was drawn up in January, and the majority of Thunberg's most impressive moments came later in the year. The Nobel committee also wanted to reaffirm the wishes stated in the will of Alfred Nobel, that the recipient should have advanced the 'abolition or reduction of standing armies'. The question therefore arises of whether there is a direct or linear relationship between climate change and armed conflict. Henrik Urdal, head of the Peace Research Institute Oslo, denies that there is such a relationship and thus omitted Thunberg from his shortlist. This approach has received criticism, as even the Pentagon deems climate change a 'threat multiplier'. But of course Thunberg has publicly declared that she does not protest for awards or prizes.



Alongside the peace negotiations, Abiy Ahmed has also initiated key reforms in Ethiopia that have given citizens hope for a brighter future and a better life. His many reforms include discontinuing media censorship, legalising outlawed opposition groups and increasing the influence of women in political and community life. His very position as the country's first leader from its largest ethnic community, the Oromo, who have long complained of political, economic and cultural marginalisation, shows the progressive nature of his election. He is also fluent in three of the country's main languages and comes from a mixed Christian and Muslim background, furthering his representation of the people of Ethiopia. The Norwegian Nobel Committee hopes that this award will strengthen Ahmed in his important continuing work for peace and reconciliation. As for those who feel that some candidates have been wrongly overlooked, the nominations and investigations relating to the award of the prize will only be released in 50 years' time.

Abiy Ahmed



Nobel Prize for Chemistry - Timothy Bracken

The 2019 winners - John B. Goodenough, M. Stanley Whittingham, Akira Yoshino (left to right)

This year's Nobel Prize in Chemistry was awarded to John Goodenough (The University of Texas at Austin, USA), Michael Stanley Whittingham (Binghamton University, State University of New York, USA), and Akira Yoshino (Asahi Kasei Corporation, Tokyo, Japan, and Meijo University, Nagoya, Japan) for their contribution to the development of lithium-ion batteries. Lithium-ion batteries have changed our world; they are fundamental to the functioning of today's society. Smartphones, laptops, cameras and electric cars are just a few of the devices that use them. But what exactly is a lithium-ion battery, and how does it work? To answer this question, we need to look at what a battery is. The basic working of a battery is quite simple. A cell (a battery is one or more cells) is made up of two electrodes connected in an electric circuit, with an electrolyte that can accommodate charged species. The electrodes are separated by a physical barrier to prevent a short circuit from occurring.

Basic Battery



When the battery is powering the circuit, it is in discharge mode. In discharge mode an oxidation reaction occurs at the negative electrode (the anode), with a corresponding reduction reaction at the positive electrode (the cathode), where the electrons are replenished from the circuit. The result is a constant flow of electrons from the anode, through the circuit, to the cathode.

The first electrochemical battery was created by Alessandro Volta, an Italian-born physicist, around 1800; he is deemed the father of the modern battery. He created the Voltaic Pile, composed of stacks of alternating discs of copper and zinc separated by pieces of cardboard soaked in brine. The zinc acted as the anode, releasing electrons into the circuit in the oxidation process, while the copper was the cathode: in air the copper became partially oxidised to copper oxide (CuO), which was then reduced back to copper. The voltage of the cell was between 0.8 and 1.1 V, depending on conditions. The Voltaic Pile can be described as a primary battery, meaning that it is non-rechargeable.

In the mid-1900s a need for better batteries arose, creating demand for better configurations. Lithium is the lightest metal, with a density of 0.53 g/cm³, and it also has the lowest standard electrode potential, at −3.05 V. This makes lithium ideal for a high-density, high-voltage battery. Nevertheless, lithium is very reactive and needs to be shielded from water and air, which means a non-aqueous electrolyte had to be used. Thus the idea of 'taming' lithium was born. In 1958 William Sidney Harris wrote a thesis on the electroplating of different metals in cyclic ester solvents. One of the solvents used was propylene carbonate, which was shown to support the electrochemistry of alkali metal salts, for example lithium halides. Around the same time Y. Yao and J. T. Kummer showed that sodium ions can move as fast in a solid as in a molten salt, and in 1967 John Newman developed a theory of ion transfer in electrochemical cells building on these ideas.
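Returning to Volta's pile for a moment, the discharge chemistry described above can be written as half-equations. A plausible sketch following the article's CuO account (the exact cathode chemistry of the original pile is more complicated and still debated):

\[ \text{Anode (oxidation):} \quad \mathrm{Zn \longrightarrow Zn^{2+} + 2e^{-}} \]
\[ \text{Cathode (reduction):} \quad \mathrm{CuO + 2H^{+} + 2e^{-} \longrightarrow Cu + H_{2}O} \]

The electrons released at the zinc travel through the external circuit to the copper: exactly the anode-to-cathode flow described above.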

Whittingham's Battery

In 1972, at Belgirate in Italy, a conference arranged by Brian C. H. Steele brought together the leading battery scientists of the time, and approaches to taming lithium were discussed.



At that time it was assumed that metallic lithium would be the anode, and there was therefore an interest in finding complementary cathode materials. Following the work of Y. Yao and J. T. Kummer and others, materials with a high electrode potential that could accommodate lithium ions at a fast ion-transfer rate became a particular focus. Lithium-accommodating structures were therefore studied, and the properties of these materials, concerning how they intercalated alkali metals, were tested under reductive conditions. A list of desirable factors was developed. The material should:

1. Have an accessible electronic band structure enabling a large, constant intercalation free-energy change over the entire stoichiometry range
2. Be able to accommodate the guest ion over a wide stoichiometric range with minimal structural change (topotactic intercalation)
3. Display high diffusivity of the alkali ion within the structure
4. Allow the intercalation reaction to proceed reversibly
5. Display good electronic conductivity
6. Be insoluble in the electrolyte, and display no co-intercalation of electrolyte components
7. Be able to operate close to ambient conditions

A group of materials that became very important were the metal chalcogenides of the form MX2, where M is a transition metal (Mo, W, Ti, etc.) and X a chalcogen atom from group 6 (S, Se, etc.). In 1965 Walter Rüdorff showed that titanium disulphide (TiS2) could host lithium ions, owing to the fact that the TiS2 structure is lamellar, arranged in layers between which lithium ions can become intercalated. M. Stanley Whittingham, Fred Gamble, Jean Rouxel and co-workers further demonstrated the intercalation of lithium in the LixTiS2 material over the whole stoichiometric range (0 < x ≤ 1). Following on from these ideas, Whittingham researched the use of these materials, and in 1976, with the Exxon Research and Engineering Company, a working rechargeable battery was developed. TiS2 acted as the cathode, lithium as the anode, and LiPF6 in propylene carbonate as the electrolyte. The cell had an electromotive force of 2.5 V. This was the foundation of the commercial batteries later developed at Exxon, which were initially composed of a lithium anode and a TiS2 cathode, with lithium perchlorate (LiClO4) in dioxolane as the electrolyte. However, this posed a problem: the lithium was not completely tamed and would form dendrites on the anode, which would break through the barrier and reach the opposite electrode, causing a short circuit and a potential fire hazard. The problem seemed too hard to solve, leading to the abandonment of this type of battery.

Therefore a new angle was tried, based on the idea of an ion-transfer cell configuration. This had been demonstrated in 1938 by Rüdorff, where hydrogen sulphate ions were electrochemically shuttled between two graphite electrodes; in the lithium analogue, both electrodes could accommodate lithium ions. Nonetheless, using graphite to intercalate lithium ions led to co-intercalation of the electrolyte components, causing exfoliation of the electrode and its destruction.
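The electrode chemistry of Whittingham's cell can be summarised in two half-reactions; a sketch based on the cell just described, with x running over 0 < x ≤ 1:

\[ \text{Anode:} \quad \mathrm{Li \longrightarrow Li^{+} + e^{-}} \]
\[ \text{Cathode:} \quad \mathrm{TiS_{2} + x\,Li^{+} + x\,e^{-} \longrightarrow Li_{x}TiS_{2}} \]

giving the overall discharge reaction TiS2 + xLi → LixTiS2 at an electromotive force of about 2.5 V. On charging, the reactions run in reverse and the lithium ions de-intercalate from the layered TiS2.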



There was a big development in 1979-1980, when John B. Goodenough and his co-workers at Oxford University, UK, discovered that LixCoO2 (a type of metal chalcogenide MX2) could be used as the cathode material. The structure was almost identical to that of LixTiS2, with gaps between the cobalt dioxide (CoO2) layers. Goodenough reasoned that if the X in MX2 were a small, highly electronegative element, the uptake of a positive ion would produce a large negative free-energy change and hence a high cell voltage. When the X was oxygen this indeed led to a high cell voltage, and the lithium ions were mobile enough that they could be packed into a dense oxygen array. Lithium was the anode and cobalt dioxide the cathode, with an electrolyte of LiBF4 in propylene carbonate. This finding opened the way to anode materials with higher potentials than lithium metal. In 1985 Akira Yoshino and his group at the Asahi Kasei Corporation identified that petroleum coke could be used as the anode. Yoshino could thus create an efficient working lithium battery based on Rüdorff's ion-transfer cell configuration, using heat-treated petroleum coke as the anode and Goodenough's LixCoO2 as the cathode, with an electrolyte of LiClO4 in propylene carbonate and a barrier of polyethylene or polypropylene.

All these discoveries and developments eventually culminated in the first commercial lithium-ion battery, released in 1991 by Sony and the Asahi Kasei Corporation. It was based on Yoshino's battery, with a different, water-free electrolyte of LiPF6 in propylene carbonate. This ushered in a new era of technology, starting the revolution in mobile phones and similar devices (tablets and laptops), allowing electric cars to become viable and making the storage of renewable energy possible. Work on lithium-ion batteries did not stop here; many advancements have since been made on these principles, helping to allow a more sustainable future for our planet.
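For reference, the overall reaction of the commercial cell can be sketched as follows. This is the now-standard textbook form for a graphite-anode cell; Yoshino's heat-treated coke anode behaves analogously, hosting lithium between carbon layers:

\[ \mathrm{LiCoO_{2} + C_{6} \;\rightleftharpoons\; Li_{1-x}CoO_{2} + Li_{x}C_{6}} \]

(charge to the right, discharge to the left). Both electrodes merely host lithium ions, which rock back and forth between them; no lithium metal is ever plated, which is precisely how the dendrite problem of the earlier cells was avoided.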



Nobel Prize for Physics - Tobias Southwell

The Nobel Prize in Physics this year was awarded to James Peebles, Michel Mayor and Didier Queloz. The prize was awarded for contributions to our understanding of the evolution of the universe and Earth's place in the cosmos, but that is rather vague, so I want to talk more about what they did and why it was worthy of a Nobel Prize.

James Peebles

Peebles was awarded the prize for his contributions to the theoretical framework of 'modern' cosmology, of which some regard him as the father. One of his key contributions was to the prediction and analysis of the Cosmic Microwave Background (CMB) radiation. In its early stages the universe was so hot that atoms could not form, the electrons being too energetic to fall into a stable orbit around a nucleus. The universe was a dense, hot and opaque cloud of subatomic particles for its first 400,000 years. During this period photons could not travel far, as they were constantly interacting with, and being scattered by, the free electrons. Only once the universe had cooled enough for atoms to form could photons move unimpeded, since this scattering occurs only with charged particles and the atoms formed were neutral. The photons from this period can still be observed as background radiation, which can be used to test predictions and develop our understanding of the early universe.

In 1948 Ralph Alpher and Robert Herman predicted the existence of the CMB, but their work was not widely known by the 1960s. In 1964 Peebles, alongside Robert Dicke, again predicted the CMB, completely independently of Alpher and Herman. Furthermore, they provided an explanation for why the universe was initially at such a high temperature, from which the CMB would form. The actual discovery of the CMB, however, was an accident. Two radio astronomers, Penzias and Wilson, had built a microwave radiometer for communicating with satellites but were constantly detecting noise from every direction. Looking for an explanation for this noise, they turned to the researchers and found the CMB explanation.

Peebles realised that predictions could be made from the CMB observations for the amount of baryonic matter in the universe, which highlighted the discrepancy between the calculated density of the universe and its baryonic matter, indicating the presence of other matter. Observations and theoretical predictions show that only 5% of the universe is ordinary matter. Peebles proposed a cold dark matter candidate, the 'cold' referring to its non-relativistic properties, as it moves much slower than c.
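The cooling that freed the CMB photons can be captured in a single relation. A standard textbook sketch, with conventional values rather than figures from the article: the CMB temperature scales with redshift z, so photons released at recombination, when the universe was at roughly 3000 K and z ≈ 1100, reach us today at

\[ T_{0} = \frac{T(z)}{1+z} \approx \frac{3000\ \mathrm{K}}{1100} \approx 2.7\ \mathrm{K} \]

which is exactly the microwave-region temperature Penzias and Wilson stumbled upon.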



Many have heard of Einstein's biggest blunder: the Cosmological Constant (Λ). This constant was introduced into his general theory of relativity in order to make the theory agree with the scientific consensus at the time, which was that the universe was static. However, Alexander Friedmann proposed that the universe was not in fact static and argued for the removal of the cosmological constant. Its removal led Friedmann to the conclusion that the universe was expanding, and observations by Edwin Hubble at the time showed him to be correct. Following this, Einstein is reported to have referred to his introduction of Λ as his biggest blunder. However, in 1984 Peebles reintroduced Λ, with a different value, to account for the 'missing' density of our universe, which is now referred to as Dark Energy. The existence of Dark Energy was confirmed in 1998, when the expansion of the universe was shown to be accelerating, something that could not occur without dark energy according to our current theories of cosmology.
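The role played by Λ can be seen in the Friedmann equation, quoted here in its standard form as context (it does not appear in the original article):

\[ H^{2} = \frac{8\pi G}{3}\rho - \frac{kc^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3} \]

where H is the expansion rate, ρ the matter and radiation density, a the scale factor and k the spatial curvature. With Λ = 0, as Friedmann argued, a matter-filled universe must expand or contract; reinstating Λ > 0, as Peebles did, supplies the 'missing' density and, at late times, drives accelerating expansion.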

Didier Queloz and Michel Mayor

Queloz and Mayor were awarded the prize this year for their work on exoplanets, particularly their discovery of the first exoplanet.¹

¹ An exoplanet is any planet that exists outside our solar system.



The method they used to find the first exoplanet, 51 Pegasi b, is known as the radial velocity method. As a planet orbits a star, we know from Newton's third law that there will be an equal and opposite force acting upon the star. Fortunately for the star, it is much more massive, and so the same force has a smaller effect. For example, the Earth makes the Sun wobble at only 0.09 m/s, which is extremely hard to detect. As the star wobbles around the centre of mass it shares with the exoplanet, we can observe it moving either towards us or away from us depending on its current orbital position. By examining the Doppler shift of the incoming light from the star, we know how fast it is moving and can work out the planet's orbital period.

The planet that Queloz and Mayor detected behaved quite unexpectedly. Astronomers had assumed that other solar systems would behave similarly to ours, and expected that the arrangement of the planets in this system would be similar. However, Queloz and Mayor realised that this was not necessarily the case and looked for planets in other regions around the star. What they found was quite surprising: a gas giant, similar to Jupiter, was orbiting the star, but orbiting it extremely closely, at about 0.05 times the distance between the Sun and the Earth.

Following the discovery there was a sudden surge of interest in exoplanets: now that people knew they existed, they wanted to find and examine them. A new method for finding them was created: the transit photometry method. This method relies on observing the change in the intensity of the light coming from a star as the planet crosses in front of it. It allows the same information to be obtained as the radial velocity method, but additionally allows the atmospheric composition of the planet to be inferred through the absorption of certain frequencies of light. Since the initial discovery, over 4,000 exoplanets have been found through ground-based observations and satellites such as the Kepler satellite and the more recent Transiting Exoplanet Survey Satellite (TESS).

Some people question the point of looking for these exoplanets. They are missing the point of scientific research. However, if you are so inclined as to answer such a foolish question, there are some more practical uses for this search. The first is looking for a candidate planet to move to. Our planet is not in great shape at the moment and the future is not looking good for our species, so migration to another planet may one day be necessary. Unfortunately, planetary migration is absolutely not feasible, at least not for a long time, so we should be worried. Furthermore, these planets can help us to better our understanding of planetary and solar-system formation, as looking only at our own solar system can lead to false conclusions about the universe being drawn.

All three of these physicists have made significant contributions to our understanding of the universe and are undoubtedly deserving of a Nobel Prize.
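To put numbers on the radial velocity method described above: the fractional wavelength shift equals the line-of-sight speed divided by the speed of light,

\[ \frac{\Delta\lambda}{\lambda} = \frac{v}{c} \]

For the Sun's Earth-induced wobble of 0.09 m/s this gives Δλ/λ ≈ 0.09 / (3 × 10^8) ≈ 3 × 10^-10, an almost hopelessly small shift. The reflex velocity induced by 51 Pegasi b was of order 50 m/s (a standard figure, not quoted in the article), a shift of roughly 1.7 × 10^-7, which is why a close-in gas giant was detectable in 1995 while an Earth twin still is not.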



Nobel Prize for Literature - George Dring

The Nobel Prize for Literature has always been subject to criticism. Those perched in the heights of the soi-disant literary canon, such as Leo Tolstoy, Jorge Luis Borges, Vladimir Nabokov, Henrik Ibsen and James Joyce, were all passed over for the accolade, and the awarding body has been beset by criticism over a number of issues. The Swedish Academy has been accused of bias towards home-grown writers, of letting Sweden's antipathy towards Russia affect its choices and, recently, over the unforeseen choice of the lyricist Bob Dylan as recipient. Considering all this, and considering also the prize's historic preference not to choose writers from combatant countries, the choice of Peter Handke seems rather curious.

Handke has been fiercely criticised for his support of the Serbian leader Slobodan Milošević, who was tried for war crimes in relation to the atrocities carried out in Kosovo in 1999 and in Croatia in 1991 and 1992. Milošević was on trial for two counts of crimes against humanity right up until his death in 2006. Handke had extremely close ties with the Serbian leader, speaking at Milošević's funeral in 2006. Handke is accused, having held a Yugoslav passport, of sympathies and loyalties towards Milošević, and, in two extremely controversial books, 'A Journey to the Rivers' and 'Summer Addendum to a Winter's Journey', of casting doubt on whether Muslims were killed by Serbs as reported. In this conflict, more than 8,000 Muslim men are said to have been killed. In reference to Milošević and his alleged link, Handke called the former leader 'not a hero but a tragic figure', which is perhaps an attempt to illuminate the leader's peripeteia and deny his irredeemable cruelty. He also called himself a 'writer and not a judge', which, although interesting on this specific issue, invites conclusions about his perception of writing as a whole: the importance of not resolving into an overall didactic message.

In his 1970 novel Die Angst des Tormanns beim Elfmeter (The Goalie's Anxiety at the Penalty Kick), he rejects any presumption that literature should convey rational motivation or moral certainty. In rather Kafka-esque fashion, the protagonist wakes up with an inability to join his experiences with the world and commits a murder on impulse. What follows is a sensory, fragmented third-person perspective on the world, in which the protagonist conflictingly demonstrates a heightened perception of the world through the impulsive and the primal. Handke presents a significant allegory of the disintegration of man in the modern world. In his first novel, Die Hornissen (The Hornets), Handke similarly plays with the idea of narratological fragmentation. He presents the drowning of one of two brothers through the perspective of a blind narrator, who later relinquishes the narrative responsibility for the novelist to take over. The narrator reconstructs and questions his life, querying the truth of the event and his own purpose. The novelist skilfully highlights the non-linear nature of reality, drawing on the morphing of truth through the medium of time and hindsight.



In these two novels and various other pieces, the laureate provides conceptual originality and re-explores the role of the narrative as a medium for truth. When we look at these two novels in relation to the critics wishing to take the prize away from him, the potency of his work intensifies. Handke, in the purpose of his art and in calling himself a 'writer not a judge', foregrounds the necessity of inquiry, illumination and contemplation, not diagnosis. So, when journalists chisel onto his Wikipedia page such phrases as 'genocide apologist' and 'ethically blind' in acts of sentimento-rhetorical journalism, they are in fact perhaps crediting him with an even more profound insight into modern truth. What this win also does, on a more functional level, is to promote those artists from nations comparatively little read in the Western canon. The real bias is our bias: the inability of the common reader to step outside the partition of English-language writers. The Nobel Prize has given Handke recognition in his lifetime and has accelerated the reshuffling of the literary canon which is so vital to its artistic progression.

Peter Handke


Thank you for reading this issue of the Serpentes. If you would like to publish an article, book review or an intellectually challenging puzzle in the next issue, please email muez.khan2015@radley.org.uk

