Enigma - Issue 1


ENIGMA: Computer Science, Physics & Engineering Magazine 2021/22 – Issue 1


Message from the Editor Yash (L6R2)

Dear Reader,

I founded Habs Turing Society in October 2021 to allow students from Habs Boys and Girls to develop their passion for Computing and Technology. 2021 was a year full of breakthroughs and innovations, and it felt right to have a place to discuss these advances and revolutions, which will most definitely impact our lives in the future. Habs Turing Society is delighted to present the first edition of the schools’ Computing and Technology Magazine: ENIGMA! This edition covers a plethora of topics, from cryptocurrency to cryptography! The world is continually changing with technology, and whichever profession you are passionate about, I would highly encourage you to appreciate and engage with computer science trends.

ENIGMA would not have been possible without the fabulous Turing Society committee team members: Robin, Kai, Anish, Anuva, Kayaan, and Amy. Thank you for your time and continuous flow of ideas. Additionally, thank you to Mr Franks, who supported me while bringing Turing Society to Habs. And finally, thank you to each and every contributor to ENIGMA for the commitment and passion you put into these articles – they are all of an incredible standard, and it is encouraging to see your enthusiasm for Computing and Technology.

For anyone hoping to apply for a Computer Science or Engineering related degree, I am sure you will be able to find a passion within the range of articles we have in this edition. The best way to start your journey is by reading interesting journals such as the MIT Technology Review, completing hands-on projects and taking online courses such as Harvard University’s CS50 program. Feel free to ask me if you have any questions, and I would be more than happy to help. Come along to Turing Society on Thursday lunchtimes in B12.

I hope that you enjoy reading the first edition of ENIGMA: the Computing and Technology Magazine of the Haberdashers Schools.

Yours sincerely,
Yash (L6R2)


ENIGMA

Computer Science, Physics & Engineering Magazine

Contents

4 Introduction to Turing and the Enigma Machine
6 Will Bitcoin be the Currency of Tomorrow? By Jai Shah (10C2)
8 P versus NP: Implications of P = NP By Devarshi (10J2)
10 An Introduction to Parallel Computing: How Computers Multitask By Rajarshi (10J2)
12 Smart Cities – a Smart Choice? By Niccolo (10S1)
16 Can we create something better than us? By Aarav Rajput (8H)
18 How will our Jobs and Businesses be influenced by the “supremacy” of AI? By Faraz (11R2)
22 Cryptology: the ones and zeros that protect our privacy By Asher (11R2)
26 Will artificial intelligence ever be a threat to humankind? By Yash (L6R2)
28 We’re getting ‘touchy-feely’ By Eleni (L4/Year 7)
31 Quick Laughs
32 Aske Level spotlight By students in Lower Sixth (Year 12)
34 References


Alan Turing

Introduction to Turing and the Enigma Machine

Who was Alan Turing?
Alan Turing was an English mathematician, computer scientist, logician, cryptanalyst and philosopher. Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. He is also widely considered to be the father of artificial intelligence. Turing is best known for his work during World War II, when he worked


for the Government Code and Cypher School at Bletchley Park, Britain’s codebreaking centre. For a time he led Hut 8, the section responsible for German naval cryptanalysis. Here, he devised several techniques for speeding up the breaking of German ciphers, including improvements to the pre-war bombe method, an electromechanical machine that could find settings for the Enigma machine. Turing played a crucial role in cracking intercepted coded messages, enabling the Allies to defeat the Axis powers in many crucial engagements, including the Battle of the Atlantic.



The Enigma machine

What is the Enigma machine?
The Enigma machine is a cipher device developed and used in the early to mid-20th century to protect commercial, diplomatic and military communication. It was employed extensively by Nazi Germany during World War II. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages. The Enigma has an electromechanical rotor mechanism that scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma’s keyboard and another person writes down which of the 26 lights above the keyboard

illuminated at each key press. If plain text is entered, the illuminated letters are the ciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress. The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to successfully decrypt a message.
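The behaviour described above can be sketched in a few lines of Python. This is a deliberately simplified, single-rotor model for illustration only, not a faithful Enigma simulator: there is no plugboard, ring setting or multi-rotor stepping, and the stepping rule is simplified. The rotor and reflector strings are the commonly published wirings of rotor I and reflector B; everything else is an illustrative assumption. Because each signal passes through the rotor, bounces off the reflector and returns, the same function with the same starting position both enciphers and deciphers, just as a receiving Enigma on identical settings recovered the plaintext.

```python
import string

ALPHABET = string.ascii_uppercase

ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"       # commonly published wiring of rotor I
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"   # commonly published wiring of reflector B

def encipher(text, start_position=0):
    """Pass each letter through a stepping rotor, a reflector and back again."""
    position = start_position
    out = []
    for ch in text.upper():
        if ch not in ALPHABET:
            continue                               # the machine handled letters only
        position = (position + 1) % 26             # the rotor steps before each key press
        i = (ALPHABET.index(ch) + position) % 26   # into the rotor
        c = REFLECTOR[ALPHABET.index(ROTOR[i])]    # through the rotor, then the reflector
        i = (ROTOR.index(c) - position) % 26       # back through the rotor
        out.append(ALPHABET[i])
    return "".join(out)

cipher = encipher("WEATHER REPORT", start_position=5)
print(cipher)
print(encipher(cipher, start_position=5))          # prints WEATHERREPORT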



Will Bitcoin be the currency of tomorrow? By Jai (10C2)

The main reason why Bitcoin could be the currency of tomorrow is that it is easier to validate and secure transactions with Bitcoin than with fiat currencies. This is because the blockchain records all transactions using a decentralised and distributed ledger that is very hard to tamper with. As a result, Bitcoin is a secure method of transferring funds between multiple parties. Unlike fiat transactions, Bitcoin transactions gain additional security from cryptography, which uses computational algorithms such as SHA-256 to generate a hash (a string of numbers and letters that verifies the information’s validity). Secondly, Bitcoin may be seen as a “hedge against inflation as its supply is capped at 21 million (source: The Daily Mail 31/12/20, page 77),” a cap which will be reached after 2140, making its value stable.

This is unlike fiat currencies, where there is no limit to the amount of currency that can be printed. This means that after 2140, Bitcoin will not have any (hyper)inflation or (hyper)deflation, unlike fiat currencies, where governments are continuously changing the money supply. Already, people have started to look away from fiat currencies for investment purposes; due to money printing in the year 2020, “the public will have lost faith not just in the currency, but in the government establishment’s monetary and economic policies as well (source: Goldmoney, 2020).” No banking privileges are needed to access your Bitcoin wallet (only an internet connection, a public key and a private key). This is beneficial for poorer citizens, as Bitcoin wallets do not cost money to set up, unlike bank accounts in many countries.



“31.5% of the world’s population live without bank accounts (source: World Bank, 2018)” and of those about a “quarter of unbanked adults [over 18] live in the poorest 20 percent of households within their economy (source: Global Findex database).” Like all cryptocurrencies, Bitcoin is not geographically based, which removes exchange rate losses when transferring money across national borders. Also, transferring funds with Bitcoin takes minutes rather than days, as with many fiat currency transactions, and there are no extensive transaction costs like those incurred when multi-currency fiat transactions go through multiple banks. The previous points have discussed why fiat currencies may be replaced with Bitcoin, but we also need to consider whether Bitcoin could itself be replaced by other cryptocurrencies. We need to compare the popularity of Bitcoin against its closest competitors. As of 4th January 2022, Bitcoin’s market cap (total value) is approximately US$883Bn, whilst Ethereum’s is US$453Bn and Solana’s is US$52.8Bn (source: Coinmarketcap, 2022). From these statistics alone, we can see that it would be very hard to overtake Bitcoin’s market cap unless there is a spike in the value of these alternative cryptocurrencies. The main reason why Bitcoin may not be the currency of tomorrow is its volatility, which is currently far greater than that of fiat currencies. Some of its most famous crashes include: June to November 2011, when it lost 93% of its value; December 2017 to 2018, when it lost 83% of its value, dipping from its previous all-time high of US$20k to US$3.5k due to market manipulation; and 12th March 2020, when Bitcoin lost around 50% of its value in one day because, the day before, the WHO announced that the “global COVID-19 outbreak can now be described as a pandemic.” Volatile currencies are harder to value in the long term than more stable currencies such as fiat currencies, making them less useful as payment for goods and services, which is one of Bitcoin’s main purposes.
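As an aside, the hashing mentioned earlier can be illustrated in a couple of lines of Python. This is a minimal sketch, not part of the original article; the transaction text is made up, and it is only meant to show why any tampering with recorded data is easy to detect.

```python
# SHA-256 turns any input into a fixed-length string of numbers and letters.
import hashlib

tx = "Alice pays Bob 0.5 BTC"
print(hashlib.sha256(tx.encode()).hexdigest())
# Changing even one character produces a completely different hash.
print(hashlib.sha256("Alice pays Bob 0.6 BTC".encode()).hexdigest())
```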

Another important reason why Bitcoin may not be the currency of tomorrow is the limited number of transactions it can process compared to centralised systems such as credit card companies. Bitcoin can only process 3 to 7 transactions per second, while the credit card company Visa can process “over 65,000 transactions per second (source: Visa’s website, 2017).” If Bitcoin were to become a more widely used currency, the number of transactions would have to increase, but this is not possible as Bitcoin’s code has already been written and cannot be easily changed. An alternative would be a new cryptocurrency that is able to process more transactions but is based on Bitcoin’s open-source code. Already there are cryptocurrencies based on Bitcoin’s source code whose main purpose is faster transactions, such as Bitcoin Cash, which can process over 100 transactions per second. However, this would not count as “Bitcoin being the currency of tomorrow,” since Bitcoin Cash is not Bitcoin. If Bitcoin’s use increases, governments are likely to add restrictions on Bitcoin transactions (even though it is a decentralised currency) by passing laws that restrict the movement of Bitcoin. This was done with gold in the US: “from 1933 to 1974, it was illegal to own gold bullion without a license (source: First National Bullion, 2020),” and even today gold can be “confiscated by the federal government in times of national crisis (source: CMI Gold & Silver, 2020).” Although these exact measures cannot be applied to Bitcoin, it shows what governments may do when they start losing the power to control their citizens’ wealth. Already, the FCA have said that “crypto derivatives (financial products based on the price of bitcoin and other cryptocurrencies) would be banned from sale to retail consumers from [November 2020] (source: The Daily Mail 31/12/20).” People may not choose to store wealth in Bitcoin, as they would not want to risk losing their wealth or having restrictions put on what they can do with it. To conclude, from the evidence presented, I believe that Bitcoin in its current form will not be the currency of tomorrow. The most likely scenario is that another, improved variant of Bitcoin (with all of the original’s benefits) may be used as the “currency of tomorrow.” Bitcoin has both benefits and flaws, as explained, but its difficulties cannot easily be dealt with, as the software is not available for easy modification.



P VERSUS NP: Implications of P = NP By Devarshi (10J2)

The problem of P versus NP is one of the most famous problems in theoretical computer science, asking whether the two classes, P and NP, are equivalent, with P being the class of all problems that can be solved “quickly,” while NP is the class of all problems whose solutions can be verified “quickly.” When the time taken for a given algorithm to execute is calculated, it is measured not in seconds but relative to the size of the input, n,


which the algorithm receives. For example, say you have a list of unsorted numbers, and you are tasked with writing an algorithm which is able to sort this list in ascending order. This algorithm compares each number with the one immediately after it, starting from the beginning, swapping the pair if the first number is larger than the one to its right, or leaving it in place if it is smaller, and repeating this process until no



changes are made. The time it takes to execute this program is proportional to n², meaning it can be executed in polynomial time. By “quickly,” what is actually meant is polynomial time, making P the class of problems that can be solved in polynomial time (or less). P (standing for polynomial time) is therefore a subset of NP, because if a problem can be solved in polynomial time, we can also verify a solution in polynomial time. On the other hand, some problems seem to require time proportional to 2ⁿ, that is, exponential time, to solve. Such problems make up the rest of NP (standing for nondeterministic polynomial time), whose defining property is that a proposed solution can be verified in polynomial time, even though every problem in NP can be solved in at most exponential time. Another way of phrasing the P versus NP problem is: “if a problem can be verified in polynomial time, can it be solved in polynomial time?” Within NP is another special subset of problems, referred to as NP-complete problems, none of which are currently known to be in P. These problems are such that if any one of them were proved to be in P, it would prove that P = NP, because every other problem in NP can be reduced to them, so a fast solution to one yields fast solutions to all. Considering that some problems previously thought to lie outside P, but within NP, were subsequently proved to be in P, the idea of an NP-complete problem being in P isn’t outlandish, despite it seeming at first glance that P is definitely not equivalent to NP. If these two classes were equivalent, it would have significant implications for modern life. For example, on one hand, security and passwords would be rendered useless, as these operate on the basis that they can’t be cracked with a polynomial time algorithm. Many encryption schemes in cryptography hinge on integer factorisation, for which no polynomial time algorithm is known. Such methods would serve no purpose if a polynomial time algorithm for solving them were discovered.

On the other hand, protein folding is an NP problem; if it were shown to be in P, there would be a “quick” algorithm for predicting protein structures, which could, for example, speed the search for cures to diseases such as cancer. More realistically, optimisation problems make up a large part of NP, and if a polynomial time algorithm were found for solving them, daily life would become much more efficient than it currently is. One example is the travelling salesman problem, which is stated as follows: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?” This translates to problems such as finding the most efficient route for a delivery vehicle. Not only would packages arrive much more quickly, but carbon emissions would also be significantly reduced. Another such problem is the job shop problem, which involves a given number of jobs running on a given number of machines, and attempts to minimise the total time taken to complete all jobs. A polynomial time solution to this problem would vastly improve the way jobs are scheduled. Transport routing is another NP problem, and finding a perfectly efficient solution to it would significantly reduce time spent travelling in general. Overall, the economy would be substantially improved if such solutions could be found, remarkable progress would be seen in the field of machine learning, and artificial intelligence would make notable headway. Quality of life would be greatly enhanced: our lives would become much more efficient, and we would find ourselves with much more time on our hands and less of it wasted, provided that P really is equivalent to NP.
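To make the “verify quickly versus solve quickly” distinction concrete, here is a minimal Python sketch, not taken from the article, using the travelling salesman problem mentioned above. Checking the length of a proposed tour takes time roughly proportional to the number of cities, whereas the brute-force search below tries every one of the (n-1)! orderings, which grows faster than any polynomial. The city names and distances are made up for illustration.

```python
from itertools import permutations

distances = {
    ("A", "B"): 4, ("A", "C"): 2, ("A", "D"): 7,
    ("B", "C"): 5, ("B", "D"): 3, ("C", "D"): 6,
}

def dist(x, y):
    """Distance between two cities, in either order."""
    return distances.get((x, y)) or distances[(y, x)]

def tour_length(tour):
    """Verification: summing the legs of a proposed tour is quick (linear)."""
    legs = zip(tour, tour[1:] + tour[:1])   # return to the starting city
    return sum(dist(x, y) for x, y in legs)

def brute_force_shortest(cities):
    """Solving: trying every ordering of the remaining cities takes (n-1)! steps."""
    first, rest = cities[0], cities[1:]
    return min(((first,) + p for p in permutations(rest)), key=tour_length)

cities = ("A", "B", "C", "D")
best = brute_force_shortest(cities)
print(best, tour_length(best))
```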



An Introduction to Parallel Computing: How computers multitask By Rajarshi (10J2)

Parallel computing, without a doubt, is one of the most important innovations in the field of computer science and engineering. For example, without parallel computing, the invisible background tasks that we take for granted on our computers would induce incredible slowdown and would make our digital experience rather dull and perhaps even frustrating. So what actually is parallel computing? In this short article, we explore this question as well as some key concepts of parallel computing. Parallel computing is defined as a “type of


computation in which many calculations or processes are carried out simultaneously.” Essentially, computationally heavy tasks or instructions, which involve lots of very similar operations carried out very many times, for example, some sort of mathematical processing or operations on some set of data like a very large array of numbers, may be divided up between many computational units, be it computers on a network, or a single CPU with many processor cores, and then be executed simultaneously with the results of the parallel computations then collected at the



end. This differs from serial computation, where instructions are executed one at a time, one after the other. Serial computation, historically, was rather inadequate for serious scientific computation and simulation. This was particularly inconvenient in fields where simulations must meet a deadline, for example weather forecast simulations in meteorology - the weather must be forecast before a certain day, otherwise it is not a forecast! Another crucial use is predicting the landing conditions and sites of NASA rovers on other planets and moons; as weather conditions constantly change, these simulations are only valid for a certain period and so must be executed within a certain time frame. Furthermore, parallel computing was born out of a need to overcome the physical limitations of “frequency scaling” in order to improve computer performance. Early in computing history, frequency scaling was the primary way in which the runtime of a program was decreased. Frequency scaling refers to increasing the clock frequency of a processor while keeping everything else constant, thus decreasing the average time taken to execute each instruction and thereby decreasing runtime. However, as frequency is increased, the power consumption of the chip increases in turn. Eventually, when power consumption is too high, the chip will overheat and thermally throttle, which actually decreases performance and increases runtime. Therefore, with frequency scaling shown to be an inefficient way to improve performance, chip manufacturers designed processors with multiple computational units, known as cores, leading to “multi-core” processors. These were more power efficient, as the frequency of each core could be relatively low while the number of cores was high, and runtime would still drastically decrease. A typical computer nowadays will have 4, 6 or even 8 cores - high-end workstations even have 10 or 12 cores, and servers may have over 50. Moore’s Law - the observation that the number of transistors in a processor doubles about every 2 years - may be applied to processor cores, meaning the number of cores in a processor should double about every 2 years, which has somewhat been true in recent years.
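A minimal sketch in Python (the language mentioned at the end of this article) of the idea just described: a computationally heavy task is divided between cores and the results are collected at the end. The work function and numbers are made up purely for illustration.

```python
from multiprocessing import Pool

def heavy_work(n):
    """Stand-in for an expensive, independent calculation."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8            # eight independent chunks of work
    with Pool() as pool:              # one worker process per CPU core by default
        results = pool.map(heavy_work, jobs)   # chunks are executed simultaneously
    print(sum(results))               # results of the parallel computations collected at the end
```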

In order to take advantage of parallel computing, serial software applications must be restructured and parallelised by the programmer. We now discuss key concepts of parallel software design. The subtasks of a program are often referred to as “threads.” Programs often involve the use of some resource such as a file, and many threads within a program may require access to the same shared file. If access is not synchronised correctly, many bugs can occur, and even corruption of files and data. For example, take a program with 2 threads, A and B. Suppose both A and B have the instructions:

1. Read File X
2. Append “Hello” to the contents of File X
3. Save File X to disk

In a serial program without threads, the task would simply be carried out twice, and one would expect File X to end up with “HelloHello” appended to its original contents. However, if access to the resource is not synchronised, thread B may read the file before thread A has saved it to disk, meaning that when thread B saves its changes, the changes made by thread A will be lost. This is known as a “race condition”. There is a solution to this: locks. Threads lock and unlock a lock which prevents access to a file while it is locked, causing one thread to pause until the other has finished with the resource and released the lock - this is known as “mutual exclusion”. The instructions of A and B would become:

1. Lock File X
2. Read File X
3. Append “Hello” to the contents of File X
4. Save File X to disk
5. Unlock File X

While thread A is making changes, thread B will wait, ensuring synchronised access to the resource. Excessive use of locks, and of threads in general, causes overhead which may actually increase runtime in some cases. Most programming languages have dedicated parallelism libraries, for example the threading and multiprocessing modules in Python 3.
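The race condition and lock described above can be sketched with Python’s threading module. This is an illustrative sketch rather than production code; the file name is made up.

```python
import threading

lock = threading.Lock()

def append_hello(path):
    with lock:                                            # Lock File X ... Unlock File X
        with open(path, "r", encoding="utf-8") as f:      # Read File X
            contents = f.read()
        contents += "Hello"                               # Append "Hello" to the contents
        with open(path, "w", encoding="utf-8") as f:      # Save File X to disk
            f.write(contents)

if __name__ == "__main__":
    with open("file_x.txt", "w", encoding="utf-8") as f:
        f.write("")                                       # start with an empty File X
    threads = [threading.Thread(target=append_hello, args=("file_x.txt",))
               for _ in range(2)]                         # threads A and B
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(open("file_x.txt", encoding="utf-8").read())    # "HelloHello", as expected
```

Removing the `with lock:` line reintroduces the race condition: with unlucky timing one thread’s changes overwrite the other’s.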



Smart Cities – a Smart Choice? By Niccolo (10S1)

Smart cities are urban areas containing and applying modern and advanced technology. These technological advancements are used to improve ease of living, life and working environments and are embedded into government systems such as power grids and transport. However, the Smart City definition is becoming “more vague and more imprecise; the temptation of instilling technology into every aspect of our cities… is complex and remains indistinct”. This begs the question: What really is a smart city?


One potential definition is that a smart city uses information and communication technologies to:
• Improve the efficiency of physical infrastructure via Artificial Intelligence and data analytics.
• Efficiently and effectively allow authorities to improve the collective intelligence of the city, in a process known as e-governance.
• Improve the overall intelligence of the city, therefore increasing innovation throughout the whole city.



One possible technology to be used by smart cities is the Internet of Things, a global network which collects and shares information about people and cities to optimise features. Today, it is most commonly used in manufacturing, transportation and utility organisations, but it has the potential to be used in many more areas. One important goal of smart cities is helping to reduce climate change. This can be achieved using renewable energy sources such as wind or water power. Over 100 cities (smart or otherwise) worldwide now report that over 70% of their power is from hydropower, geothermal, solar and wind power. This shows that building blocks already exist for a more sustainable and less damaging future and that helping reduce/stop climate change via smart cities is a future possibility. A smart city could further help by generating its own energy. It is theoretically possible to create a completely self-sustaining city, one which has smart agriculture and manufacturing which helps build the city using only its own resources, such as produce grown in vertical farms or collected rainwater. Such a city would be extremely useful for many reasons, such

as cutting down on carbon emissions by reducing imports and exports. If many cities around the world were self-sustaining, this would have a significant impact on overall sustainability. We must therefore aim to create self-sustaining smart cities, as the technology to do so already exists. While it may seem like smart cities belong only in the future, many examples already exist. Many major world cities such as Madrid, London, Milan, New York and Amsterdam are experimenting with these technologies. For example, the Madrid Intelligence Project (MiNT) was designed to create a city centred more around the people, with initiatives such as using smart grid technology to analyse data on traffic congestion and the timing of street lights, and using technology to allow all residents to voice their opinions on issues. Other cities are committed to converting themselves into smart cities, such as Milton Keynes, 50 miles from London and founded as a new town in 1967, which created the MK:Smart initiative to expand and improve the city and its standard of living whilst also meeting environmental regulations. It now has the second highest concentration of digital and tech small and medium enterprises of any UK city outside London.

Madrid is developing into a smart city via the Madrid Intelligence Project (MiNT)



This being said, smart cities are still in the early stages and much of the work to achieve these goals lies in the future. According to TWI Global, one of the world’s foremost independent research and technology organisations,

“A smart city should provide an urban environment that delivers a high quality of life to residents while also generating economic growth. This means delivering a suite of joined-up services to citizens with reduced infrastructure costs. This becomes increasingly important in the light of future population growth in urban areas, where more efficient use of infrastructure and assets will be required. Smart city services and applications will allow for these improvements, which will lead to a higher quality of life for citizens. Smart city improvements also provide new value from existing infrastructure while creating new revenue streams and operational efficiencies to help save money for governments and citizens alike.”

Whether it is using energy more efficiently or improving accessibility and the way we live, it is becoming increasingly clear that the concept of smart cities must be part of the future of human civilisation. However, smart cities are not necessarily without potential downsides:
• Citizens could be unhappy with not being able to shape their community and living area.
• Implementing the best technology of the time could lead to short-sightedness, as it could be thought that there is no need to continue improving.
• Policies surrounding the use of personal data could damage or erode personal privacy.
• Smart cities may increase the gap between classes, as the upper classes live in technologically advanced cities while the lower classes are left behind.
• There is a risk that a smart city design might be more focused on optimising the technology than on the citizens’ lives.
With the majority of these potential issues, the problems stem from the design and running of the cities. Therefore, keeping people in mind needs to be at the forefront of the design and implementation of smart cities. In conclusion, the concept of smart cities is clearly a very ambitious idea and one which, if implemented correctly, will be able to greatly benefit the human population. It will be necessary for the people who design and run the world’s smart cities to understand the potential downsides and think of ways to overcome or minimise them. If this is done, then we could truly see the future of industrialisation and technological advancement done in the best way possible.




Smart cities will monitor all kinds of data such as traffic flows



Can we create something better than us? By Aarav (8H)

AI has been an instrumental part of human technological development. Over time we have been able to develop this technology by increasing the intelligence levels of AI machines, and if we continue to improve them at this rate, will they surpass our intelligence? Will they have absolute control over us? Many people working in the AI industry predict that this will inevitably happen in the future, and that it will be our downfall if we are unable to control it. This point is known as the technological singularity.

Is it possible? We have already developed AI which can hold a conversation, read emotions and try to engage in one type of work or another. These robots have human features such as arms and legs which they can control. Ray Kurzweil, Google’s director of


engineering and a highly regarded futurist, has predicted that the technological singularity will happen in the 2040s. This prediction is backed up by SoftBank CEO Masayoshi Son, another famous futurist, who is certain that the singularity will happen this century, possibly by 2047. For them it is not a question of if, but of when. Others, such as Tesla and SpaceX CEO and founder Elon Musk and the late physicist Stephen Hawking, have accepted that the technological singularity will come sometime in the near future, but Musk has warned that it will be a ‘doomsday’. Musk’s proposed solution is Neuralink, a device implanted in the human skull to create an interface with the brain. He says that this will enable humans to keep up with AI development and prevent AI from gaining total control over humans.



What happens after the technological singularity?
The University of Oxford philosopher Nick Bostrom says that superintelligent machines will be the last invention that humanity will ever need to make. This is because, once we have created a superintelligence, the machines will be far better at inventing than we are, and will do so at a rapid rate. In a TED talk, he refers to ‘cures for aging, space colonisation, self-replicating nanobots or uploading of minds into computers’. He says that these are ‘consistent with the laws of physics’. These are problems that the world’s leading scientists have been trying to solve for years, and a superintelligent AI could make such progress in a very short period. A superintelligence has an intellectual power that is unfathomable for humans and can therefore achieve whatever purpose and goal it is given. Instead of humans controlling the future, it would be shaped by AI, a very different prospect for humanity. He uses a powerful example of humans giving an AI the goal of making people smile. While the AI is not yet a superintelligence, its method of doing this is to perform actions that cause humans to smile. Once the technological singularity has been reached, the AI comes up with a much more ‘efficient’ method: taking control of the world and sticking

electrodes into the facial muscles of humans to cause a permanent smile. If the AI’s goal were to solve a difficult maths problem, at superintelligence the AI would once again take control of the world and turn it into a supercomputer, allowing an increase in its thinking capacity. Bostrom goes on to say that this is how an AI could cause us harm, even though we would not approve of its methods. One of his solutions is that, if we give an AI a certain objective, the definition of that objective should incorporate our values, removing the AI’s incentive to take total control over us. Humans can see threats and try to eliminate them; a superintelligence would also be able to do this, but at a much more successful rate than us. Nick Bostrom believes that we will need to create superintelligent AI in such a way that, even if or when it escapes our control, it remains on our side because it shares our values.

Conclusion
It may well be possible to create a superintelligence and bring about the technological singularity. However, this will cause problems and, as Elon Musk said, could be a ‘doomsday’. But if it is possible to create a superintelligence, we should also be able to find a way to control it through shared moral values. In that case, we could create a superintelligence that helps the world in many possible ways.



How will our jobs and businesses be influenced by the ‘supremacy’ of AI? By Faraz (11R2)

In the last six decades we have made huge scientific gains, and since the late 1990s technological progress has developed especially rapidly. In the last decade alone, machine-learning algorithms have grown with


the advancement of enhanced deep learning and smarter neural networks. However, this has led to some perhaps unexpected impacts on the economic front which need to be discussed.



The impact of AI on businesses
AI’s uses across major industries
AI can be used to improve businesses in areas including predictive maintenance, where its ability to analyse large amounts of data from audio and images can easily detect anomalies in factory-based assemblies such as aircraft engines. In businesses with heavy logistics demands, these new AI systems can effectively optimise routing for consumer delivery, improving fuel efficiency and reducing delivery times, all of which allows the business to cut costs and gain a better reputation. In sales and retail, combining customer data and past transactions with social media monitoring helps generate individualised recommendations for the consumer, allowing retailers to target their demographic accurately. In many of these cases,

AI adds value by improving on previous analytical techniques, encouraging industries to develop. Even though many organisations have already embraced AI, the pace is not consistent. Only around half of the respondents in a 2018 McKinsey survey on AI adoption said their companies had embedded at least one AI system within their business, with another 30% still moving towards AI. The gap between early AI adopters and others is rapidly widening. Sectors that are highly ranked in the DMCC Industry Digitization Index (Figure 1), such as high tech and financial services, are leading AI adopters and have the most ambitious AI investment plans. As these firms expand their AI adoption, those lagging behind will find it harder to catch up in the long term.

Figure 1



How will our jobs and businesses be influenced by the ‘supremacy’ of AI?

Figure 2

The impact of AI on jobs
Around half of all work activities are currently automatable. These consist mainly of physical activities in highly predictable, structured environments (such as food preparation), as well as data collection and processing, which together account for roughly half of the activities across the major sectors in developed economies. The chart (Figure 2) shows the existing risk of automation within different job types.
The major effects on work
The pace and extent of AI’s adoption, and its impact on jobs, will depend on several factors besides availability. Among these are the costs of distribution and adoption, and labour market dynamics, including the supply and quality of labour. How all these factors play out across sectors and


countries will vary, with each country depending on its labour market dynamics to supply the right workforce. For example, the United States is an advanced economy, meaning that the share of jobs affected by automation there could be more than double that in India, due to the wider availability of AI. Because of the multitude of these considerations, it is difficult to make accurate projections of the future, although a rough outline can be sketched. Firstly: jobs lost. One AI adoption scenario to 2030 suggests that about 15% of the global workforce (400 million workers) could be displaced by automated systems. Secondly: jobs gained. The number of jobs gained through these changes is thought to be 133 million globally by 2022, reaching 555-890 million by 2030, suggesting a net gain. However, it is important to note that in many emerging economies with



younger populations, there is already an ever-increasing need to provide jobs to people entering the workforce. Furthermore, in developed economies people are remaining longer in their jobs due to better living conditions, so proportionally fewer are entering the workforce, which could lead to long-term global inequality. Substantial switches in workforces will also occur. Due to the adoption of AI, millions of workers will have to change their careers within companies, sectors and sometimes even countries, through processes such as brain drain. While occupations in predictable physical environments and in data processing will decline, others that are harder to automate will grow. These include managers, teachers and other professionals, but also construction workers and plumbers, who work in more unpredictable physical environments. Secondly, workplaces and workflows will change as people work alongside machines. For instance, as self-checkouts have been integrated into shops, cashiers have moved from scanning goods to helping customers. Lastly, automation will put pressure on average-wage jobs in advanced economies like ours, quite a few of which, such as in manufacturing and accounting, are presently automatable. As a result, higher-wage jobs will grow significantly, especially in high-skill and professional fields.

Currently, the UK has an interest in embracing AI due to its anticipated contribution to economic growth and businesses. Other countries should also feel a strong need to keep up with global leaders such as the US and China. However, for this broad deployment of AI, solutions to the challenges mentioned must be found to ensure its benefits. We must:
• Invest in and continue to innovate in AI research.
• Support existing digitisation efforts by building upon the foundations of AI.
To tackle the possibly unwanted impacts of AI, we must maintain the strong economic and productivity growth necessary for a rise in jobs; this creates more businesses and encourages the establishment of new jobs. To address this, we should:
• Adapt education systems to focus on STEM skills and critical thinking, to prepare future generations for the change in workplaces.
• Alter wages and incomes by considering the prestige of a job, to encourage more people to explore those fields.

Many economies are poorly placed to manage these transitions, owing to shortages of relevant skills as well as declines in transition support for workers and in on-the-job training.

Conclusion
The potential benefits of AI to business and the economy should encourage business leaders to embrace and adopt AI. At the same time, the impending challenges to its adoption and its impact on work cannot be overlooked.



Cryptology: the ones and zeros that protect our privacy By Asher (11R2)

Privacy in our communications is something we often take for granted; however, it hasn’t always been this way. We now live in a society where most people can take a device out of their pocket and communicate with someone across the world, knowing that what they send cannot be read by anyone without serious effort and malicious intent. The internet has enabled great openness and, at the same time, unprecedented privacy in our day-to-day communications. But how often do you sit down


and think about how the bits, the string of ones and zeros that make up your message, are changed in such a way that you and the recipient can read it and communicate with delays of mere milliseconds, and yet any eavesdropper would be hard pressed to decode it within an hour? Cryptology, the study of codes, is a battlefield on which the war for secrecy has been fought constantly for millennia. On one side are the cryptanalysts, constantly trying to figure out new ways to crack encryption schemes



and new algorithms to better break those codes. On the other side are cryptographers, always thinking up new ways to hide secret messages and methods of encryption. These sides have fought through four epochs of cryptology, each new era defined by different approaches and technology. From ancient times through world wars to the present day, and beyond, let me lead you on a journey of discovery.

Picture 1 - The ‘Tabula Recta’ commonly used in Vigenère cipher encryption and decryption

Picture 2 - This cipher wheel shows how Atbash can be encoded with each letter on the inside corresponding to one on the outside

The first epoch, and by far the longest, was the manual epoch, which focussed on coming up with obscure but rule-based techniques of encryption that could be easily and quickly learnt and performed without the need for specialist equipment. An early example of this is the Atbash cipher, used by many biblical commentators to analyse perceived deeper meanings within the text. This cipher relies on a rather simple method of replacing each letter in the plaintext with its reverse. So, in English, A would become Z, B would become Y, and so on. This was rather quick and easy to encode, which was one of the primary considerations when creating a cipher in the manual epoch. Moving forward into 16th century Italy, we find the Vigenère cipher, one of the first polyalphabetic ciphers. This cipher was a departure from previous monoalphabetic substitution ciphers, in which each letter was replaced by a single different letter. The change was necessitated by the progress of cryptanalysts, who had devised techniques such as frequency analysis. This technique involves making a table of the frequencies of each letter and mapping each to its closest match in typical English text. Polyalphabetic ciphers, as the name suggests, use many alphabets to replace each letter. In the Vigenère cipher, a table known as the tabula recta is used to speed up the simple addition of letters, with A as 0, B as 1 and so on. This in itself was nothing special. The important component was the keyword, which provides the second component of the addition. This keyword is repeated as many times as necessary to cover the whole length of the message, so creating a vital weakness. Nevertheless, this cipher remained widely undeciphered throughout Europe all the way through to the 19th century, when Friedrich Kasiski published a test for cracking it, which involved exploiting the length of the keyword by finding repeated patterns and then solving the letters of each alphabet individually. This ended the success of this individual cipher; however, the cipher’s widespread use showed the key principles of the epoch. It was easily reproducible and reasonably quick to encode and decode, as well as being easy to teach and requiring only secrecy for the key, a small piece of information that is easier to conceal. The ciphers of this epoch continued to develop while always sticking to the key principles of simplicity and speed; however, this was about to change.
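A minimal Python sketch, not taken from the article, of the two manual-epoch ciphers described above: Atbash simply reverses the alphabet, and the Vigenère cipher adds a repeating keyword to the message, as the tabula recta does. The example plaintext and keyword are made up.

```python
import string

ALPHABET = string.ascii_uppercase

def atbash(text):
    """A maps to Z, B maps to Y, and so on; applying it twice recovers the text."""
    return "".join(ALPHABET[25 - ALPHABET.index(c)] if c in ALPHABET else c
                   for c in text.upper())

def vigenere(text, keyword, decode=False):
    """Add (or subtract) the repeating keyword, with A as 0, B as 1, etc."""
    out, i = [], 0
    for c in text.upper():
        if c not in ALPHABET:
            out.append(c)
            continue
        shift = ALPHABET.index(keyword[i % len(keyword)].upper())
        if decode:
            shift = -shift
        out.append(ALPHABET[(ALPHABET.index(c) + shift) % 26])
        i += 1
    return "".join(out)

print(atbash("ATTACK AT DAWN"))                        # ZGGZXP ZG WZDM
cipher = vigenere("ATTACK AT DAWN", "LEMON")
print(cipher, vigenere(cipher, "LEMON", decode=True))  # round trip back to the plaintext
```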



The Second Epoch of Cryptology was the machine age, featuring one of the most prominent and infamous examples of cryptology in the form of the Enigma machine. Following WWI, the focus shifted from simple, reproducible manual systems to making machines do some of the work. This was not a complete step forward, though, as the machines of the time still widely employed the same techniques, such as polyalphabetic substitution, but on a more sophisticated scale and with much greater speed enabled by the machines. The emphasis of this era was on developing mechanical encoding schemes and chaining them together to massively multiply the possibilities. One big emphasis of the cryptographers in the machine age was to create as many possible settings as possible, to prevent brute forcing. The focus in the machine age shifted away from keys and onto similar small pieces of information, this time machine settings. A key feature of many of the cryptographic machines

was that a machine on the same settings as another could decode what that machine had encoded. The problem for cryptanalysts lay primarily in finding a way into these systems, whose settings were normally rotated daily. This meant that messages had to be cracked within the day for the results to be at all useful, so the cryptanalysts turned to machines, recognising that only machines could beat machines. The key to cryptanalysis in this epoch was the ability to design a machine which could efficiently reverse the actions of the encoding machine. The development of such machines allowed the codebreakers to try cribs – likely fragments of plaintext – to determine the correct settings. Conversely, the primary aim of the cryptographers in the machine age was to obscure the settings of the machine and to keep the machine itself away from prying eyes. However, this epoch and all its techniques were swept away by the wave of computers.

Picture 3 - The famed “Enigma”, a rotor machine used by Germany during World War II



Picture 4 - The RSA algorithm is one of many widely used today to keep our information safe

The internet and the computerisation of communication security mark the Third Epoch of Cryptology, the one we are now in. As previously mentioned, the internet brings with it many incredible developments in the field of communications, with remarkable privacy available. This developing era has shifted the focus of the entire field of cryptography. Computers have enabled brute-forcing on a level not previously possible, necessitating a shift in thinking. The most successful cryptographic schemes are now widely published algorithms – ironic, given the centrality of secrecy in this field – with the focus on developing new and better algorithms. The central aspect of all the successful algorithms so far is that they involve a mathematical problem which is easy to compute one way but extremely hard to invert, by which I mean computationally hard and time-consuming. In the computer age we use public-key cryptography and symmetric-key algorithms. The former is exemplified by the RSA algorithm, which uses two separate keys, one public and one private, to encrypt and decrypt the information being sent. This algorithm relies on the computational challenge of finding prime factors, compared with the ease of generating the product of two primes. It is often an inefficient method of sending data, as it takes longer, but it is more secure and so is used for distributing the key for a symmetric-key algorithm. One such algorithm is AES-256, the most secure variant of the Advanced Encryption Standard, set as the standard for encryption by the United States government in 2001, which works on a different and simpler basis. Rather than having two different keys which work together to encrypt and decrypt the message, these ciphers use a single key. This key is used by the algorithm to encipher the data in a way that can be deciphered at the other end by the receiving computer, which has a copy of the key. This creates a system which should be impervious

to eavesdroppers; however, in reality, algorithms with shorter keys are vulnerable to brute-force attacks. As Moore’s Law tells us, computers are constantly and rapidly getting more powerful. In the past this meant that every few years the algorithms across the web would have to be updated. The US government, however, held a competition to design an algorithm that was efficient, and so could be run on slower computers, but was also adjustable, so that the key length could be increased as the power of computers increases. This is where we stand now, in an arms race between the two sides, with computers getting more and more powerful. However, we are hovering at the edge of a precipice: the next epoch is fast approaching, and it could change everything. The fourth epoch is the era of quantum computers, to some a doorway to endless possibilities, and to others a harbinger of doom. One thing we do know is that everything in the world of communications security will change when the first general quantum computer arrives. The security of the RSA algorithm relies on the absence of an efficient algorithm which can split a number into its prime factors. Although such an algorithm does not exist in our epoch, the prospect of Shor’s algorithm looms. This algorithm, designed by and named after Peter Shor, is a quantum computer algorithm which can determine the prime factors of a number in a relatively short period of time. This means that RSA’s days are numbered, but already many in the sphere of cryptology are shifting their focus to a post-quantum world, with initiatives such as PQCRYPTO. Quantum computers bring a scary future with them; however, their immense power, and the removal of many of the widely used algorithms, opens a door for a new standard. We stand at the threshold of a world of bright, new possibilities, and exciting opportunities await, perhaps even the fulfilment of the dream of true privacy in communication. The future is arriving fast.
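As a rough illustration of the “easy one way, hard to invert” idea behind RSA, here is a toy sketch, not taken from the article, with deliberately tiny textbook primes (real keys use primes hundreds of digits long). The message value is made up; this shows the principle only, not how RSA is used in practice.

```python
# Key generation: multiplying two primes is easy.
p, q = 61, 53
n = p * q                   # public modulus (3233): easy to compute
e = 17                      # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)         # private exponent: requires knowing p and q

message = 42
cipher = pow(message, e, n)        # encrypt with the public key (e, n)
print(pow(cipher, d, n))           # decrypt with the private key (d, n) -> 42

# An eavesdropper who only sees n must factor it to recover d. Trial division
# works for a toy modulus like 3233, but becomes infeasible at real key sizes,
# unless a quantum computer runs Shor's algorithm.
factor = next(i for i in range(2, n) if n % i == 0)
print(factor, n // factor)         # 53 61
```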



Will artificial intelligence ever be a threat to humankind? By Yash (L6R2)

Excitement. Innovation. Passion. These are the true feelings one should have towards Artificial Intelligence (AI); without it, humanity would scarcely be as advanced. Of course, humans are disinclined to change. They always have been. Be it the Agricultural Revolution, the Industrial Revolution, or the Information


Revolution. Yet each of these prior revolutions demonstrates that humans have been fine afterwards – in fact, much better off. After the Agricultural Revolution, innovative farming methods and improved livestock breeding meant that food production dramatically increased, leading to population booms and superior health for most.



After the Industrial Revolution, humans no longer focussed on menial tasks due to the automation of physical labour, leading to a much-increased standard of living and a boost to the global economy. This pattern of positive impact proclaims the necessity and benefit of change to humanity. Humans adapt. Lives blossom. Economies flourish. The AI widely used at present is “narrow” AI, meaning that the technology is exceptionally ‘intelligent’ at a specific task; for example, facial recognition for security, or machine learning for YouTube’s recommendation algorithm. In these particular activities, there is no doubt that AI performs considerably better than any human could, and for that, AI deserves praise. Unquestionably, there will be certain threats to humankind from AI: job automation, data privacy, socio-economic inequality and a potential arms race. Yet these threats will certainly not initiate the AI apocalypse! The moment this “narrow” AI is tasked to apply its ‘intelligence’ to another specific task, it goes down like a lead balloon. Certain scientists believe that the idea of an Artificial General Intelligence (AGI) is becoming more and more plausible. This concept of AGI can be thought of as the precise point where AI advances so much that it is equally matched with human intelligence. Ultimately, this hypothetical ability means that AI would be able to learn, and prosper, at any human task. If the AGI develops further, we may reach the technological singularity: the point at which humanity is at risk of eternally losing control of AI, since machines

will continuously learn how to re-create superior versions of themselves. Humans – do not despair! A plethora of research scientists, such as the respected Roger Penrose, are adamant that this technological singularity cannot be achieved, for multiple reasons. In our world, we have seen fundamental limits to science – for instance, humans will never be able to travel faster than the speed of light. Accordingly, there could be fundamental limits to the recursive, self-developing nature of machines. More significantly, machines are not sentient beings – they have no consciousness, no self-awareness, and no mind of their own. Feynman eloquently described a computer as

“a glorified, high-class, very fast but stupid filing system,”

validating the fact that machines have no intense desire to “destroy humanity” or replace your job. IBM’s Deep Blue is not planning to wake up tomorrow, conclude that humans are purposeless, and destroy our planet. It has no goal to do so. It is not coded to do this. Indeed, even 50 Nobel Laureates concluded that disease, ignorance, terrorism, climate change, population rise and Trump are greater threats to humankind than the ‘innocent’ AI.



We’re getting ‘touchy-feely’ By Eleni (L4/Yr7)

Technology that stimulates and engages our tactile senses is the next step in enabling an immersive digital user experience. We are at the dawn of the ‘big bang’ of the metaverse, and in this virtual reality (VR) world haptic technology is an essential element. Haptics will have applications across many diverse areas of life. Aside from the metaverse and augmented reality, I am most excited by the transformation of touch screens becoming available to consumers and enterprises, which will deliver tactile information to the user (think textures, shapes, temperature etc.) in the way visual information does at present. This will transform the way we communicate and interact, consumer experiences, and industry. As we move towards more interactions within VR, this


is both intriguing and concerning. I give some personal thoughts towards the end of this piece on the impacts of haptics combined with VR. But first, a quick look at what haptics is, and how it has developed.
What is haptic technology?
‘Haptic’ comes from the Greek ἁπτικός (haptikos), meaning tactile. Haptic technology refers to any technology which can create ‘an experience of touch by applying forces, vibrations, or motions to the user’. The global haptic technology market is projected to reach USD 25,240 million by 2027, from USD 9,487 million in 2020, a CAGR of 14.5% during 2021-2027.



Development of haptic technology
Haptic technology has been in general use for some years, for example the use of vibration in mobile phones, and in gaming, education and robotics. Apple introduced its Taptic Engine in 2015, later bringing it to the iPhone. Apple’s Taptic Engine is a haptic user interface feedback system. This haptic device, or actuator, can apply varying amounts of force to a user through the actuation of a mass that is part of the device. Various forms of tactile feedback, generated by relatively long and short bursts of force or vibration, can convey information to the user. Widespread use of haptic technology by consumers and enterprises has yet to happen, as it needs to be affordable and scalable. The haptics research company Immersion Corporation developed a VR exoskeleton for gaming, but found that “It is always about cost, the power it’s going to use and how big it is…”, essentially leaving only universities and research labs able to afford the technology.
Current and future developments

Haptic technology falls into three areas: graspable, taking advantage of kinaesthetic senses; wearable, creating tactile sensations through pressure, heat and friction; and touchable interfaces, or ‘data-driven haptics’. Changing the way users interact with their devices’ touch screens should see results in the next few years, with some products already in production. Until now, feedback has been one-way, for the screen’s benefit. Haptic technology will invert this, allowing the user, in turn, to receive tactile information from their screen. According to Dr. Cynthia Hipwell of the Department of Mechanical Engineering, Texas A&M University, who is leading

a team investigating how the finger interacts with the screen,

This could allow you to actually feel textures, buttons, slides and knobs on the screen…It can be used for interactive touch screen-based displays, but one holy grail would certainly be being able to bring touch into shopping so that you could feel the texture of fabrics and other products while you’re shopping online.

But developing the technology of the interface is complex and dependent upon a number of variables, such as different users and environments. There are multiple physical fields occurring at the same time between the user’s finger and the device.

We’re looking at electro-wetting effects (the forces that result from an applied electric field), electrostatic effects, changes in properties of the finger, the material properties and surface geometry of the device, the contact mechanics, the fluid motion, charge transport…

says Hipwell.
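To give a feel for just one of those effects, here is a deliberately simplified Python sketch of the electrostatic attraction used in ‘electrovibration’ displays, treating the fingertip and screen as a parallel-plate capacitor. This is not Dr Hipwell’s model, and every number in it is an assumption chosen purely for illustration:

# Back-of-envelope estimate of the electrostatic force between a fingertip and
# a charged screen, modelled as a parallel-plate capacitor. Illustrative only:
# all values below are assumptions, not measured device parameters.
EPS0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 3.0           # assumed relative permittivity of the insulating layer
area = 1.0e-4         # assumed fingertip contact area, m^2 (about 1 cm^2)
gap = 50e-6           # assumed effective dielectric gap (insulator + dry skin), m
voltage = 100.0       # assumed applied voltage, V

# Parallel-plate estimate: F = eps0 * eps_r * A * V^2 / (2 * d^2)
force = EPS0 * eps_r * area * voltage**2 / (2 * gap**2)
print(f"Estimated attractive force: {force * 1000:.1f} mN")
# About 5 mN with these assumed values - small, but it is this kind of extra
# normal force that electrovibration uses to modulate sliding friction.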




To boldly go where doctors have gone before, but now with haptic technology – a Digital Rectal Examination at Imperial College, London.

Applications of haptics across all areas of life
Haptic technology promises multiple benefits across a range of applications, and is an essential element in realising VR and augmented-reality environments. Haptics used in medical training would allow for safe surgical practice (no accidentally flatlining your patient) while providing ‘real-time formative and summative feedback on the examination technique’, and within education it would enable richer learning experiences. Across manufacturing, logistics and transport, and from product design to user experience, haptics is a key growth area within consumer technology. Haptics will transform the consumer experience of brands, and will help reshape the way we communicate, from work on collaborative VR platforms to social and intimate connections.

The many benefits of haptics will also bring some unexpected outcomes. What would our legal response be to offences committed within VR or augmented reality? For example, teledildonics, or ‘sex technology’, uses haptic feedback to make ‘virtual sex’ a reality, but may expose participants to hacking and ‘virtual groping’. What impact will haptic technology have on wider society and on ourselves? Information technology can change the way we live, and how we think. As Jaron Lanier, the American


computer scientist and futurist, writes in ‘You Are Not a Gadget’: ‘It is impossible to work in information technology without also engaging in social engineering.’

While there are many benefits, haptics could bring some perhaps unforeseen challenges. For example, haptics could create realistic experiences of assault or murder in violent VR or augmented-reality environments – could this spill over into real life? Could physical pain become part of the game? How would the law treat possible new virtual crimes against the person (crimes against property may be protected by using blockchain technology)? Will haptics leave us ever more tracked and traced simply in living and carrying out basic, essential transactions, and mercilessly sold to? How many more hoops will we have to jump through to retain privacy? How much of our data will we retain ownership of, and how much will we be forced to hand over because key services will otherwise be denied to us? Are some things, including touch, simply better in real life?



QUICK LAUGHS

Why did the boy get fired from his keyboard factory job? Because he was not doing enough shifts.

Why did the developer become so poor? Because he used up all his cache.

Why was the mobile phone wearing glasses? Because it lost its contacts.

Why did the computer arrive late at work? Because it had a hard drive.

What made the Java developers wear glasses? They can’t C.

What is another name for apple juice? iPhone chargers.

How do trees make use of the internet? They just log in.



Aske Level spotlight – By students in Lower Sixth (Year 12)
Each student in Lower Sixth completes an Independent Research Project called the Aske Level, which gives students the opportunity to focus on topics and issues beyond the academic curriculum. This experience has been enriching, and below are several Aske Level titles related to ENIGMA’s theme.

Sani A – Can Reinforcement Learning Simulate Complex Behaviour? Reinforcement Learning (RL) is a type of machine learning which imitates the intuitive way in which living organisms learn, by providing a reward for successful actions. It is already used for many bespoke real-world applications, such as video compression, chemical analysis and controlling robots for manufacturing. However, a goal of many researchers is Artificial General Intelligence: AI that can perform a wide range of complex, unspecialised tasks as a living organism would. My project will evaluate the advantages and disadvantages of RL and whether it will be able to achieve this goal.
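To give a flavour of how ‘a reward for successful actions’ looks in code, here is a minimal Python sketch of tabular Q-learning, the textbook RL algorithm, on a made-up five-cell corridor. The toy environment is my own illustration, not part of Sani’s project:

# Tabular Q-learning on a tiny corridor: cells 0..4, actions move left/right,
# and the only reward is for reaching the right-hand end (cell 4).
import random

N_STATES = 5
ACTIONS = [-1, +1]                  # move left, move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action_index]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate towards reward + discounted best future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print([[round(q, 2) for q in row] for row in Q])
# After training, 'move right' has the higher value in every cell - the agent
# has learned the behaviour purely from the reward signal.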

Kai P – What is the Technological Singularity? I am writing about the technological singularity for my Aske Level. The technological singularity is a hypothetical point in time at which technological growth becomes so rapid that it is unpredictable and irreversible. Multiple factors contribute to it, including artificial intelligence, machine learning and intelligence amplification.

Rahul S – Can Robots Empathise? My Aske Level is about the possibility of robots having emotional intelligence. I also consider the ethics behind artificial emotions and the impact they may have on society.

Yash Shah – The use of Machine Learning and Image Segmentation Algorithms to Enhance the Analysis of Neuroimaging and Detection of Brain Tumours In my Aske Level, I explored the use of machine learning and image segmentation algorithms to enhance the analysis of neuroimaging and the detection of brain tumours. These algorithms provide an instrumental tool in supporting clinical decision-making. I wrote a program which takes 3D MRI images as inputs and uses image segmentation algorithms to classify each pixel of the input scan as one of three tumour classes (edema, enhancing tumour or non-enhancing tumour) or simply as background. I utilised a 3D U-Net model which makes use of convolutional neural network (CNN) layers.
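For illustration, the kind of building block a 3D U-Net stacks looks roughly like the sketch below (assuming PyTorch; the channel counts and volume size are made up, and this is not Yash’s actual network):

# A minimal 3D-convolutional block of the sort a 3D U-Net is built from,
# applied to a toy MRI-like volume. Illustrative only.
import torch
import torch.nn as nn

class DoubleConv3d(nn.Module):
    """Two 3x3x3 convolutions with batch norm and ReLU - the basic U-Net block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# One volume: batch of 1, 4 MRI modalities, 64x64x64 voxels.
encoder = DoubleConv3d(in_ch=4, out_ch=16)
head = nn.Conv3d(16, 4, kernel_size=1)    # scores for background + 3 tumour classes, per voxel
volume = torch.randn(1, 4, 64, 64, 64)
print(head(encoder(volume)).shape)        # torch.Size([1, 4, 64, 64, 64])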


Robin B – Can Computers Create Original Improvised Music? For my Aske Level I have explored ways in which computers can be used to generate original music in real time. I have analysed a range of systems which vary in their focus: some are designed to interact convincingly with human musicians and pay little attention to existing musical constructs, whilst others are focused on imitating famous human musicians. I have suggested a theoretical system that would combine random generation and machine learning in an informed manner to create a more convincing output.
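One simple way to combine learned structure with random generation, in the spirit Robin describes, is a Markov chain over notes. The Python sketch below is my own toy illustration, not Robin’s system: it learns note-to-note transitions from a tiny example melody and then improvises a new phrase.

# First-order Markov-chain 'improviser': learn which note tends to follow
# which from a toy melody, then sample new phrases from those transitions.
import random
from collections import defaultdict

training_melody = ["C", "D", "E", "C", "E", "F", "G", "E", "D", "C"]   # toy data

transitions = defaultdict(list)
for current, nxt in zip(training_melody, training_melody[1:]):
    transitions[current].append(nxt)

def improvise(start="C", length=8):
    """Generate a new phrase by randomly sampling the learned transitions."""
    phrase = [start]
    for _ in range(length - 1):
        options = transitions.get(phrase[-1])
        phrase.append(random.choice(options) if options else random.choice(training_melody))
    return phrase

print(" ".join(improvise()))   # e.g. "C E F G E D C D" - different every run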

Anurag C – Predicting Flight Delay using Machine Learning My Aske Level is intended to be a practical project in which, before the passengers even step on the plane, a machine learning algorithm predicts by how many minutes the flight will be delayed. This is accomplished by analysing previous flight data and using machine learning to determine which variables affect the delay of a flight the most, and hence to predict whether a flight will be delayed and by how much.
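A minimal sketch of that kind of pipeline, assuming scikit-learn and pandas; the file name, column names and features below are hypothetical stand-ins, not Anurag’s actual dataset:

# Train a regression model on historical flights, then inspect which features
# matter most for the predicted delay. Dataset and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

flights = pd.read_csv("historical_flights.csv")            # hypothetical file
features = flights[["departure_hour", "day_of_week", "airline_id",
                    "route_distance_km", "weather_score"]]
delay_minutes = flights["delay_minutes"]

X_train, X_test, y_train, y_test = train_test_split(
    features, delay_minutes, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("Mean absolute error (minutes):",
      (model.predict(X_test) - y_test).abs().mean())
# feature_importances_ indicates which variables affect the delay the most.
print(dict(zip(features.columns, model.feature_importances_.round(3))))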

James C – How Important is the Decentralisation of Bitcoin? My Aske Level talks about Bitcoin and the importance of decentralisation. Bitcoin is a global peer-to-peer monetary settlement network which relies on cryptography to ensure the network can be trusted while needing minimal trust in the participants. While decentralisation is one of the defining principles of Bitcoin, it has its advantages and disadvantages.

A key advantage of decentralisation is that it makes Bitcoin resistant to political attacks and ensures the network remains censorship-resistant. The result is a politically neutral money which cannot be weaponised by governments or other large centralised bodies. While this may seem trivial to the average person in a free country, it is estimated that less than a fifth of the world’s population lives in a fully free country. Even within countries which are considered free, political pressure from governments over matters they feel strongly about is not unusual. Bitcoin is immune to this, as has been shown on numerous occasions throughout its brief but eventful lifetime. Another positive of Bitcoin’s decentralisation is the resilience of the network: in 2021, almost half the network went offline when China banned Bitcoin mining, yet the network continued to operate as it had done since its creation. Such resilience is simply not possible with centralised systems.

The biggest disadvantage, many would argue, is the limited number of transactions which can take place every day. Compared to a payments network such as Visa or Mastercard, Bitcoin barely scratches the surface of the number of transactions which can take place. Because members of the Bitcoin network must store copies of the entire blockchain, the transaction limit exists to keep the blockchain a manageable size: to keep the network as decentralised as possible, the blockchain must be as easy as possible to store. A proposed solution, known as the Lightning Network, allows many more Bitcoin transactions to take place every second, although it has drawbacks of its own.

Overall, it is clear that without decentralisation, Bitcoin would not be what it is. While decentralisation adds its own constraints and challenges to the network, it ensures Bitcoin remains pure and honest, and free from the unpredictability of human nature.
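The throughput gap mentioned above is easy to put rough numbers on. A back-of-envelope Python sketch, using commonly quoted approximate figures (assumptions, not measurements):

# Rough throughput and storage arithmetic behind Bitcoin's transaction limit.
block_interval_s = 10 * 60        # one Bitcoin block roughly every 10 minutes
txs_per_block = 2500              # rough average for a full ~1 MB block
btc_tps = txs_per_block / block_interval_s
print(f"Bitcoin: ~{btc_tps:.0f} transactions per second")        # ~4 tps

visa_tps_claimed = 65000          # Visa's often-quoted peak capacity figure
print(f"Visa (claimed capacity): ~{visa_tps_claimed:,} tps")

# Why the cap exists: every full node stores every block, forever.
block_size_mb = 1
blocks_per_year = 365 * 24 * 6
print(f"Blockchain growth at that rate: ~{block_size_mb * blocks_per_year / 1000:.0f} GB per year")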



References

Will Bitcoin be the currency of tomorrow?


The Guardian (25 June 2014). Bitcoin explained and made simple. Available at: https://www.youtube.com/watch?v=s4g1XFU8Gto&t=97s (Accessed: 23 December 2020)

CMI Gold & Silver (2020). Gold Confiscation Myths. Available at: https://www.cmi-gold-silver.com/gold-confiscation-1933/ (Accessed: 31 December 2020)

Paddy Hirsch (8 December 2013). Fiat Money, explained. Available at: https://www.youtube.com/watch?v=U8Yn5jT8Hyc (Accessed: 23 December 2020)

Cash Matters (20 February 2020). 31.5% of the world’s population live without bank accounts (World Bank, 2018). Available at: https:// www.cashmatters.org/blog/315-of-the-worlds-population-livewithout-bank-accounts-world-bank-2018/#:~:text=31.5%25%20 of%20the%20world’s%20population,accounts%20(World%20 Bank%2C%202018) (Accessed: 1 January 2021)

Simplilearn (26 July 2018). Cryptocurrency Explained | What is Cryptocurrency? | Cryptocurrency Explained Simply | Simplilearn. Available at: https://www.youtube.com/watch?v=8NgVGnX4KOw (Accessed: 23 December 2020)

Media Shower (29 April 2019). 7 Benefits of Decentralized Currency. Available at: https://www.youtube.com/watch?v=zTCRFTkgl9o&feature=emb_logo (Accessed: 24 December 2020)

The Times Money Mentor (18 December 2020). Should you invest in bitcoin? Available at: https://www.thetimes.co.uk/money-mentor/article/invest-bitcoin-cryptocurrencies/ (Accessed: 24 December 2020)

BBC Newsnight (24 January 2018). How does Bitcoin mining work? - BBC Newsnight. Available at: https://www.youtube.com/watch?v=cRxL2GKDU5E (Accessed: 24 December 2020)

SmartAsset (19 July 2019). The Different Types of Banks. Available at: https://smartasset.com/checking-account/types-of-banks (Accessed: 26 December 2020)

Religare (27 November 2012). How is the price of a Currency Determined? – SmarterWithMoney. Available at: https://www.youtube.com/watch?v=Z5jx27LioSw (Accessed: 26 December 2020)

The Times (27 December 2020). Ruffer bets big on bitcoin. Available at: https://www.thetimes.co.uk/article/ruffer-bets-big-on-bitcoin-tbmpcn0qs (Accessed: 1 January 2021)

Gold Money (23 April 2020). Anatomy of a fiat currency collapse. Available at: https://www.goldmoney.com/research/goldmoney-insights/anatomy-of-a-fiat-currency-collapse (Accessed: 1 January 2021)

MoneyWeek (17 December 2020). Bitcoin has hit a new record high – and this time the professionals are piling in too. Available at: https://moneyweek.com/investments/alternative-finance/bitcoin/602505/bitcoin-has-hit-a-new-record-high-and-this-time-the (Accessed: 1 January 2021)

The Daily Mail (31 December 2020). Year of the Bitcoin Boom. Page 77 (Accessed: 2 January 2021)

Federal Reserve Bank of St Louis. Functions of Money, Economic Lowdown Podcasts | Education | St. Louis Fed. Available at: https:// www.stlouisfed.org/education/economic-lowdown-podcast-series/ episode-9-functions-of-money#:~:text=The%20characteristics%20 of%20money%20are,%2C%20limited%20supply%2C%20and%20 acceptability (Accessed: 26 December 2020)

Coinmarketcap (17 February 2021). Market Cap. Available at: https://coinmarketcap.com/ (Accessed: 17 February 2021)

Bitcoin.com - Official Channel (15 April 2020). Why is there more than one version of Bitcoin? - Bitcoin 101. Available at: https:// www.youtube.com/watch?v=3RB1-m08m6c&feature=emb_logo (Accessed: 27 December 2020)

Hardesty, Larry. 2009. “Explained: P vs. NP | MIT News | Massachusetts Institute of Technology.” MIT News. https://news.mit.edu/2009/ explainer-pnp.

Bitcoin Gold. What is Bitcoin Gold? Available at: https://bitcoingold.org/ (Accessed: 27 December 2020)

Buy Bitcoin Worldwide (28 December 2020). Available at: https://www.buybitcoinworldwide.com/how-many-bitcoins-are-there/#:~:text=How%20Many%20Bitcoins%20Are%20There%20Now%20in%20Circulation%3F,adds%206.25%20bitcoins%20into%20circulation (Accessed: 28 December 2020)

Exodus (30 January 2020). Bitcoin Crash History: Not the First and Won’t be the Last. Available at: https://www.exodus.io/blog/bitcoin-crash-history/ (Accessed: 28 December 2020)

Intelligent Economist (9 February 2018). Monetary System. Available at: https://www.intelligenteconomist.com/monetary-system/ (Accessed: 29 December 2020)

Bosch Global (29 May 2019). Chain Reaction: Distributed Ledger Technologies (DLT) explained. Available at: https://www.youtube.com/watch?v=NKAanYdic9Q (Accessed: 29 December 2020)

Paul Schrodt, Money.com (1 March 2018). Cryptocurrency Will Replace National Currencies By 2030, According to This Futurist. Available at: https://money.com/the-future-of-cryptocurrency/ (Accessed: 30 December 2020)

https://usa.visa.com/dam/VCOM/download/corporate/media/visanet-technology/aboutvisafactsheet.pdf

Cointelegraph (12 October 2019). Terrorism and Crypto: Evidence from Ex-CIA Analyst. Available at: https://www.youtube.com/watch?v=lyJfxpaYxRw (Accessed: 31 December 2020)

First National Bullion (3 February 2020). Is it legal to possess gold bars? Available at: https://firstnationalbullion.com/is-it-legal-to-possess-gold-bars/ (Accessed: 31 December 2020)


GlobalFindex (2017). The Unbanked. Available at: https://globalfindex.worldbank.org/sites/globalfindex/files/chapters/2017%20Findex%20full%20report_chapter2.pdf (Accessed: 1 January 2021)


Pixabay (18 February 2021). Images and Pictures. Available at: https://pixabay.com/ (Accessed: 18 February 2021)

P VERSUS NP: Implications of P = NP

Sipser, Michael. 2013. Introduction to the Theory of Computation. N.p.: Cengage Learning.

Up And Atom. 2020. “P vs. NP - The Biggest Unsolved Problem in Computer Science.” YouTube. https://www.youtube.com/watch?v=EHp4FPyajKQ.

“What would be the Impact of P=NP? [closed].” 2012. Software Engineering Stack Exchange. https://softwareengineering.stackexchange.com/questions/148836/what-would-be-the-impact-of-pnp/148964.

An Introduction to Parallel Computing: How computers multitask

Gottlieb, A., & Almasi, G. S. (1989). Highly parallel computing. Benjamin/Cummings.

Barney, B., & Frederick, D. (n.d.). Introduction to Parallel Computing Tutorial | HPC @ LLNL. HPC @ LLNL. Retrieved January 12, 2022, from https://hpc.llnl.gov/documentation/tutorials/introduction-parallel-computing-tutorial

Goldberg, D., Patterson, D. A., Hennessy, J. L., & Asanovic, K. (2003). Computer architecture: a quantitative approach. Morgan Kaufmann Publishers.

Rauber, T., & Rünger, G. (2013). Parallel Programming: For Multicore and Cluster Systems. Springer Berlin Heidelberg.

Processes and Threads - Win32 apps. (2021, January 7). Microsoft Docs. Retrieved January 12, 2022, from https://docs.microsoft.com/en-gb/windows/win32/procthread/processes-and-threads

Krauss, K. J. (n.d.). Safe But Not Sorry: Thread Safety for Performance. Develop for Performance. Retrieved January 12, 2022, from http://developforperformance.com/ThreadSafetyForPerformance.html

Smart Cities – a Smart Choice?

https://digileaders.com/smart-cities-need-smart-thinking/

https://www.geotab.com/uk/smart-city-solutions/

https://www.ibm.com/blogs/internet-of-things/what-is-the-iot/

https://internetofthingsagenda.techtarget.com/definition/Internet-of-Things-IoT#:~:text=Generally%2C%20IoT%20is%20most%20abundant,some%20organizations%20toward%20digital%20transformation

https://www.iotworldtoday.com/2018/04/16/why-milton-keynes-onesmart-cities-world/

https://patimes.org/making-our-cities-smart-madrids-movement/#:~:text=Madrid%20is%20the%20smartest%20of,operates%20and%20incorporates%20its%20citizens.&text=The%20technology%20used%2C%20ICTs%2C%20are,distribute%2C%20store%20and%20manage%20information

https://www.itsinternational.com/its4/its5/its6/its7/feature/smart-cities-journey-not-destination

Institute of Labor Economics: Thomas Gries, Wim Naudé (November 2018) Artificial Intelligence, Jobs, Inequality and Productivity: Does Aggregate Demand Matter? Retrieved from: http://ftp.iza.org/ dp12005.pdf McKinsey Digital (July 2016) Where machines could replace humans – and where they can’t (yet) Retrieved from: https://www. mckinsey.com/business-functions/mckinsey-digital/our-insights/ where-machines-could-replace-humans-and-where-they-cant-yet McKinsey Global Institute (September 2018) Notes from the AI frontier: Modelling the impact of AI on the world economy. Retrieved from: https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-theworld-economy McKinsey Global Institute Analysis (April 2018) The real-world potential and limitations of artificial intelligence Retrieved from: https:// www.mckinsey.com/featured-insights/artificial-intelligence/the-real-world-potential-and-limitations-of-artificial-intelligence UK Commission for Employment and Skills (April 2016) Working Futures 2014-2024, Evidence Report 100. Retrieved from: https:// dera.ioe.ac.uk/26069/1/Working_Futures_final_evidence_report.pdf

https://www.mksmart.org/

Will AI ever be a threat to human kind?

https://www.rcrwireless.com/20161121/big-data-analytics/mk-smartcity-tag31-tag99

Komlos, J. “Thinking about the industrial revolution. Journal of European Economic History”. (1989). 18(1), 191.

https://www.urban-hub.com/cities/fine-tuning-smart-in-madrid/

Wirth, Norbert. “Hello marketing, what can artificial intelligence help you with?.” International Journal of Market Research 60.5 (2018): 435-438.

https://www.weforum.org/agenda/2018/03/clean-energy-canprovide-100-of-a-city-s-electricity-here-s-how/#:~:text=The%20state%20of%20play%20%E2%80%93%20major,%2C%20geothermal%2C%20solar%20and%20wind

https://en.wikipedia.org/wiki/Smart_city

Can we create something better than us?

Built-In Magazine. 6 July, 2021. “Dangerous risks of AI” [https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence] Goertzel, Ben. “Artificial general intelligence.” Ed. Cassio Pennachin. Vol. 2. New York: Springer (2007). Preface pages.

https://neurobanter.com/2014/12/09/should-we-fear-the-technological-singularity/

AI Multiple. 6 November, 2021. “When will singularity happen?” [https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/]

https://futurism.com/separating-science-fact-science-hype-how-far-off-singularity

Penrose, Roger. “The Emperor’s New Mind”. Oxford University Press, 1989

https://www.youtube.com/watch?v=MnT1xgZgkpk

Edward Elgar Publishing. “Competition Law for the Digital Economy” (2019), Page 75

https://www.youtube.com/watch?v=8nt3edWLgIg

https://www.coinspeaker.com/wp-content/uploads/2018/09/innovation-quantum-computing-768x512.jpg

https://revistaidees.cat/wp-content/uploads/2020/03/EL-SUEN%CC%83O-TRANSHUMANISTA.jpg

How will jobs and businesses be influenced by the supremacy of AI?

McKinsey Global Institute Analysis (July 2018) Survey on AI adoption in Businesses. Retrieved from: https://www.mckinsey.com/~/media/McKinsey/Industries/Advanced%20Electronics/Our%20Insights/How%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/MGI-Artificial-Intelligence-Discussion-paper.ashx

Dubai Multi Commodities Centre (2018 Report) The Future of Trade. Retrieved from: https://futureoftrade.com/

The Economist (24 April 2018) A study finds nearly half of jobs are vulnerable to automation. Retrieved from: https://www.economist.com/graphic-detail/2018/04/24/a-study-finds-nearly-half-of-jobs-arevulnerable-to-automation

Academia (December 2017) Skill Shift – The Workforce. Retrieved from: https://www.academia.edu/37540692/Skills_Shift_-_The_Workforce

World Economic Forum (2018) The Future of Jobs Report. Retrieved from: http://www3.weforum.org/docs/WEF_Future_of_Jobs_2018.pdf

Psychology Today. 1 June, 2006. “The Myth of Sentient Machines” [https://www.psychologytoday.com/gb/blog/mind-in-the-machine/201606/the-myth-sentient-machines]

Indy100. “How humanity will end, according to Nobel Prize winners” [https://www.indy100.com/discover/nobel-prize-apocalypse-threats-humanity-survey-times-7927046]

Science Alert. 11 April, 2021. “AI is not actually an existential threat to humanity” [https://www.sciencealert.com/here-s-why-ai-is-not-anexistential-threat-to-humanity]

We’re getting ‘touchy-feely’

https://www.wired.com/story/what-is-the-metaverse/

https://en.wikipedia.org/wiki/Haptic_technology

https://www.bloomberg.com/press-releases/2021-09-21/haptic-technology-market-size-to-reach-usd-25240-million-by-2027-at-cagr-145-valuates-reports

https://builtin.com/artificial-intelligence/haptic-technology

https://www.smithsonianmag.com/innovation/heres-what-future-haptic-technology-looks-or-rather-feels-180971097/

https://www.sciencedaily.com/releases/2021/10/211026124310.htm

https://www.imperial.ac.uk/engagement-and-simulation-science/ourwork/research-themes/haptic-technology/

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2933867

https://www.weforum.org/agenda/2021/08/real-world-laws-ar-and-vr/



ENIGMA

Computer Science, Physics & Engineering Magazine

www.habsgirls.org.uk @habsgirlsschool /habsgirlsschool @habsgirlsschool

www.habsboys.org.uk @habsboysschool /habsboys @habsboys

Haberdashers’ Elstree Schools Butterfly Lane, Elstree, Hertfordshire WD6 3AF 020 8266 1700 / 020 8266 2300

