CyberTalk Issue 10


AN SBL PUBLICATION

Issue #10

Autumn 2018


SBL’s Services Team offers a range of professional and cybersecurity services. Our team of highly qualified experts can develop services tailored to address your unique IT challenges, whatever the size of your organisation and however complex your requirement. SBL created a secure, responsive and fully integrated mobile device management service, bespoke to DEFRA’s needs and environment:

REQUIREMENT: Manage up to 25,000 mobile devices across 3 agencies

SOLUTION: A cloud-based service with:

• 1 central management console
• Hosted in SBL’s secure datacentres
• Proactive monitoring of all connections by SBL ensures availability, maintained at more than 99% since the service was launched
• 24-hour second- and third-line service desks
• 24-hour major incident management

“SBL has proved to be a responsive and supportive service provider. They are quick to adapt to new requirements and have delivered all projects on time and within agreed performance parameters.” Graham Wells, Cloud Service Manager, DEFRA

READ THE FULL CASE STUDY

Broaden your horizons with services from SBL. Talk to us today and discover how we can help you. services@softbox.co.uk

01347 812100


Hello reader. You are human. It is impossible for it to be otherwise. If otherwise, you would not be a ‘you’. You would be an ‘it’ or a ‘thing’, intrinsically incapable of reading. No other entity could be reading this. Only a human can comprehend the idea of ‘being’ or grasp the significance of the distinction. The existence of your comprehension of these statements is their truth.

Meaning belongs to humans. A scanner will recognise and transcode patterns, but it does not apprehend meaning. We are anthropocentric. Sense making is a human monopoly. The universe has no meaning other than that which we create.

Animals are superior to minerals, and humans are superior to animals. No amount of shared mammalian or even simian DNA changes this, nor any manifestation of dolphin intelligence.

The exclusively human self-conscious act of volition to think about thinking means we use the thing that we are thinking about to do the thinking about the thing that we are thinking about; I think.

You! Neither machine nor animal; human!

Imagine using the thing that we use to do the thinking about thinking to make a thinking machine. Give it body, legs, arms, hands, eyes, ears..? It can repair itself, change itself, reproduce itself. It is telepathic and gestalt. Imagine it organic, using chemicals to communicate between its sensors, processors and actuators..?

We judge animals through our characterisation of the approximation of their behaviour to ours. We anthropomorphise, projecting our subjective sense of ourselves onto animals as the measure of worth. The subjective experience of pain in the phenomenology of a dog is unimaginable. Yet the dog is sentient in law because its behaviour in response to pain more closely resembles ours than that of the lobster we boil alive to eat.

What if it started to think about thinking and started to think about how the other beings in the world that use the thing that they think about to think about thinking, think…

We are the apex species. The pinnacle of creation. We alone are made in God’s1 image. Souls lit by the divine spark.

…what if it became a ‘you’?

Previously, we have had no reason to think otherwise. Although an infinite quantity of immortal monkeys across an infinity of time with an eternity of random keystrokes might perhaps create every possible finite text an infinite number of times, we await the collected works of the great bard of the baboons.

Are we human because ‘we’ can compose a symphony and ‘they’ cannot? Can you? What if… robots read? ...hello robot...

Reading is more than light assuming patterns as it bounces off a page bearing structured arrangements of abstract symbols. More than the visceral stimulation of sense organs and the conversion of sense organ stimulation to sense data. More than the transmission of sense data through the neurological system. Reading is about the creation of meaning.

1. Other gods are available.

Colin Williams, Editor in Chief



CONTENTS

03 Welcome, Colin Williams
08-11 The Rise of the Machines: An Abridged History of AI, Richard Hind
12-14 Talking Tech: Are IoT Devices Too Chatty?, Ken Munro
15-17 Data Security in the Post-Truth World, Dr. Char Sample
20-22 Swarm Intelligence & Cyber Security, Dr. Felix Hovsepian
23 Why Terms of Service Suck, and What We Can Do About It
26-29 AI and the Legacy of Alan Turing, Dermot Turing
30 Crowdsourcing Cyber Trust, Ian Bryant
33-35 The Future of the Species, Noel K. Hannan BSc MSc CISSP, Principal Consultant at C3IA Solutions
36-41 Transhumanism: Transcendence or Termination?, Colin Williams
42-45 Machines, Minds, and Mandrakes: New Narratives for an Old Battle, Keith Scott
46 Panic Not!, Dr. Daniel Dresner
48 Are we really ready to start automating security or are we finding a way to do security badly?, Ellie Hurst
50 Phishing Your Employees in a Positive Light, Melanie Oldham
52 Me? I’m Just A Lawnmower
55 Securing Access & Market Share, Tom King
58-60 Industry 4.0 – The Accumulative Risks, David Bird
62 Almanac of Events

EDITOR IN CHIEF
Colin Williams

EDITOR
Prof. Tim Watson

ART DIRECTOR & DIGITAL EDITOR
Michaela Gray

SUB-EDITORS
Melissa Hartman, Michaela Gray, Natalie Murray

CONTRIBUTORS
David Bird, Ian Bryant, John Connor, Cory Doctorow, Dr. Daniel Dresner, David J. Gunkel, Noel Hannan, Dr. Felix Hovsepian, Richard Hind, Ellie Hurst, Tom King, Ken Munro, Melanie Oldham, Dr. Char Sample, Keith Scott, Dermot Turing, Colin Williams

DESIGN
Ellen Longhorn Design www.ellenlonghorndesign.co.uk

CONTACT US
General enquiries: +44 (0) 1347 812150
Editorial enquiries: +44 (0) 1347 812100
Email: cybertalk@softbox.co.uk
Web: www.softbox.co.uk/cybertalk

cybertalkmagazine  @CyberTalkUK

CyberTalk is published once a year by SBL (Software Box Ltd). Nothing in this magazine may be reproduced in whole or part without the written permission of the publisher. Articles in CyberTalk do not necessarily reflect the opinions of SBL or its employees. Whilst every effort has been made to ensure that the content of CyberTalk magazine is accurate, no responsibility can be accepted by SBL for errors, misrepresentation or any resulting effects. Established in 1987 with a headquarters in York, SBL are a Value Added IT Reseller widely recognised as the market leader in Information Security. SBL offers a comprehensive portfolio of software, hardware, services and training, with an in-house professional services team enabling the delivery of a comprehensive and innovative range of IT solutions. CyberTalk is designed by Ellen Longhorn Design and printed by GKD Print Ltd.


TRANSHUMANISM: TRANSCENDENCE OR TERMINATION?

Colin Williams

Sending a man into space, or to the moon, was a necessity compelled by the imperatives of the Cold War. The “challenge of space travel to mankind” demanded the development of its “technological prowess” well beyond their current limits.

Cyborg was the human exogenically modified to negate vulnerability to the lethality of space. If the problem was no oxygen; remove the need to breathe. Cyborg “proposed that man should use his creative intelligence to adapt himself to the space conditions he seeks rather than take as much of the earth environment with him as possible”.

The augmentation of the human organism by “means of cybernetic techniques”, and the advancement to cyborg, was expedient because the improvements required for space travel “cannot be conveniently supplied for us by evolution; they have to be created by man himself”. Through science and technology humans will compensate for the delinquency of nature. With “suitable biochemical, physiological, and electronic modification of man’s existing modus vivendi”, it will be possible to change “bodily functions to suit different environments” at will, and without “alteration of heredity”.

Whilst the cyborg was a means to a limited end, this goal was far from the end or limit of its Fathers’ cosmic dreams. They conceived the cyborg as the product of “participant evolution”. This was the start of the conquest of nature by science.

However, dissatisfied with merely re-shaping humans, overthrowing the tyranny of biology, and achieving mechanistic mastery of evolution, Kline and Clynes had their minds on a greater prize. They were driven by a higher purpose. Space travel called humankind to the “spiritual challenge to take an active part in his own biological evolution”. Because, beyond the ephemeral and earthly vicissitudes of the Cold War, “existence in space may provide a new, larger dimension for man’s spirit.”

Cyborg was the transcendent transformation of humans and humanity. Release from the shackles of genetics. Self-emancipation of the self-evolved, (self-professed) superior, being from the confines of the caged cradle of infancy. The manufactured transmogrification of human to cyborg by the subjugation of evolution to human will was a means to a great end. Cosmic immortality.

The 1960s conceptualisation of cyborg is rooted within the central body of thought we now know as transhumanism.

Within the broad spectrum of contemporary transhumanist thought and behaviours, there are multiple transhumanist bodies. There is an established transhumanist movement. It is international, and it is growing. It has become unwise to dismiss transhumanism as marginal, fringe, eccentric or irrelevant.

The Humanity+ group states that it is an “international nonprofit membership organization”, promoting “the ethical use of technology, such as artificial intelligence, to expand human capacities”3. It publicly advocates transhumanism in general, and radical life extension in particular. Humanity+ has “theoretical interests” that “focus on posthuman topics of the singularity, extinction risk, and mind uploading (whole brain emulation and substrate-independent minds)”4.

Nick Bostrom is a founding member of Humanity+, and of its direct precursor, the World Transhumanist Association. Professor Bostrom is the inaugural director of the Future of Humanity Institute in the Faculty of Philosophy at the University of Oxford. He has been called to give evidence to a House of Lords Select Committee.

The Mormon Transhumanist Association asserts that it is “the world’s largest advocacy network for ethical use of technology and religion to expand human abilities, as outlined in the Transhumanist Declaration and the Mormon Transhumanist Affirmation”5.

The Turing Church proclaims itself “a community of seekers at the intersections of science and religion, spirituality and technology, engineering and science fiction, mind and matter, with shared cosmic visions”. They offer a “new cosmic, transhumanist religion”, predicting that they “will go to the stars and find Gods, build Gods, become Gods, and resurrect the dead from the past with advanced science, space-time engineering and ‘time magic’”.

The UK Transhumanist Party heralds “a new political organisation in the UK, part of a network of similar groups around the world, committed to positive social change through technology”. And to “the idea that we must improve ourselves and society using the most effective tools available – to go beyond what we have been, in order to overcome the world’s problems and create a better future”.

Zoltan Istvan founded the American Transhumanist Party in 2014. Article 3 of its “Transhumanist Bill of Rights” requires that “human beings, sentient artificial intelligences, cyborgs, and other advanced sapient life forms agree to uphold morphological freedom—the right to do with one’s physical attributes or intelligence (dead, alive, conscious, or unconscious) whatever one wants so long as it doesn’t hurt anyone else.”8

Istvan stood against Donald Trump in 2016, running his campaign from his Immortality Bus. His campaign slogan: “Death is not Destiny”. He will seek election as Governor of California in 2018. Istvan’s website carries a quote attributed to the BBC describing him as “the physical embodiment of the Californian, libertarian, start up culture tech-utopian dream.”9

The Singularity University was founded in 2008 by Peter Diamandis and Ray Kurzweil. Diamandis leads a number of space exploration and space tourism companies, including Planetary Resources, an enterprise devoted to developing and using the technology for asteroid mining. Kurzweil is the populariser of the idea of the technological singularity, a prominent advocate of transhumanism and, since 2012, a “Director of Engineering” at Google.

“In considering the functions of the mind or the brain we find certain operations which we can explain in purely mechanical terms. This we say does not correspond to the real mind: it is a sort of skin which we must strip off if we are to find the real mind. But then in what remains we find a further skin to be stripped off, and so on. Proceeding in this way do we ever come to the ‘real’ mind, or do we eventually come to the skin which has nothing in it? In the latter case the whole mind is mechanical.”
Alan M. Turing, Computing Machinery and Intelligence (1950), Mind 59, 433-460
Imagine using the thing that we use to do the thinking about thinking to make a thinking machine. Give it body, legs, arms, hands, eyes, ears..?


“HERE’S HOW MACHINE LEARNING (ML) WORKS BEST IN IDENTIFYING SECURITY THREATS”

Dr Jon Oliver, Data Scientist, Trend Micro

Machine learning (ML) emerged, and almost as quickly it assumed the mantle of a new “next-generation” tool. The story is a little more nuanced. Some well-established security companies, including Trend Micro, have worked with ML for more than a decade. Until recently, they tended not to discuss this openly, mainly because of concerns that the technology, applied on its own, flagged too many false positives.

More recently, two things have happened, and I believe they are correlated. One is the rise of ransomware like CryptoLocker. The other is the emergence of ‘next-gen’ vendors promoting ML as the “new” control companies must have to tackle advanced threats. Many organisations working with established security companies will, in fact, have been applying ML in their solutions for many years.

Ransomware changed the game because it made timing a critical part of malware detection. Other types of malware might try to steal intellectual property or start a spambot. Catching them an hour or so after first infection — having vastly minimised the chance of false positives first — may have been an acceptable trade-off. With ransomware, however, there is no room for manoeuvre. The moment it encrypts files and locks victims out of their data, it starts to cause financial damage and business disruption. Catching it at ‘time zero’ is critical.

The fact is, ML is ideal for tackling those critical ‘time zero’ issues like ransomware, but it still leaves the possibility of false positives. ML is best used after other security methods have been applied, and after further metadata about the context of the file has been collected. ML is excellent for processing files where the context suggests that they are more suspicious, such as files that arrive via email, downloads or infected USB sticks. Other security layers, a dynamic whitelist and context can be used to make sure that the ML is given minimal opportunity to mistakenly flag good files.

The volume of good and bad files to scan is increasing exponentially. We need to augment our current systems of detection to cope with this level of activity. Historically, malware detection has looked in the rear-view mirror. We needed a virus sample before we could develop an antidote. But many malware samples we get today are unique. A new instance of Cerber ransomware is created every 15 seconds. We have seen a similar effect with benevolent software. The growth of DevOps and the cloud means that new versions of legitimate software such as Google or Dropbox updates appear on an almost hourly basis. Driving by looking backwards is impossible when the terrain changes so fast. We need ML, and ultimately AI, to change that paradigm by protecting against threats we have not yet seen.

I am a huge advocate for ML, but no one solution will solve all security problems — it never has. We have already seen cybercriminals experimenting with modifying programs to beat ML.
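To make that layering concrete, here is a minimal sketch of the ordering described above: cheap, low-false-positive checks run first, and the ML model is consulted only for files whose context already suggests suspicion. The names, threshold and the ml_model interface are illustrative assumptions, not Trend Micro’s actual pipeline:

```python
import hashlib

# Hypothetical layered scanner: cheap, low-false-positive layers run first;
# the ML model is consulted only for files from risky contexts.
KNOWN_GOOD_HASHES: set[str] = set()     # dynamic whitelist (placeholder)
SIGNATURE_BLACKLIST: set[str] = set()   # classic signature layer (placeholder)
RISKY_SOURCES = {"email", "download", "usb"}

def scan(data: bytes, source: str, ml_model) -> str:
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_GOOD_HASHES:
        return "allow"            # whitelisted: ML never sees it
    if digest in SIGNATURE_BLACKLIST:
        return "block"            # known bad: no ML needed
    if source not in RISKY_SOURCES:
        return "monitor"          # low-risk context: defer, keep watching
    # Context suggests suspicion: now ask the model (assumed to return
    # a probability in [0, 1] that the file is malicious).
    p = ml_model.predict(data)
    return "block" if p > 0.9 else "monitor"
```

The design point is simply that the expensive, false-positive-prone layer sits last, so good files rarely reach it.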

Around the same time as ransomware started becoming prominent, ‘next-gen’ vendors began actively promoting ML. It makes sense to harness artificial systems to recognise malware in a climate where threats are multiplying. But getting this right, and minimising false positive errors in the process, is not trivial.


trendmicro.co.uk/business/xgen/



Computer based artificial intelligence has been the dream of computer scientists since the 1950s, but the quest to create a non-human intelligence stretches back to our earliest recorded history, such as the golems of Jewish folklore and the automata created by Hephaestus, the “divine blacksmith” of Greek mythology.

Richard Hind, Tutor of IT & Computing, York College



Charles Babbage is familiar to most people with an interest in computing. In the 1820s he designed his first Difference Engine - intended to do the work of human “computers” and eliminate the potential errors from the solving of complex equations. Later he conceived a more sophisticated machine, his Analytical Engine, which could be instructed to automatically carry out a variable sequence of mathematical operations via punched cards (as used by the Jacquard weaving loom from 1804) and produce printed output. Essentially it had all the components of a modern computer but operated entirely mechanically.
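To see what the Difference Engine automated, consider tabulating a polynomial by the method of finite differences, which reduces evaluation to repeated addition, exactly the operation Babbage could mechanise. A minimal sketch, illustrative rather than Babbage’s own notation:

```python
def difference_table(f, x0, step, order):
    """Initial column of finite differences for polynomial f at x0."""
    values = [f(x0 + i * step) for i in range(order + 1)]
    diffs = [values[0]]
    while len(values) > 1:
        values = [b - a for a, b in zip(values, values[1:])]
        diffs.append(values[0])
    return diffs

def tabulate(diffs, count):
    """Extend the table using repeated addition only, as the engine would."""
    diffs = list(diffs)
    out = []
    for _ in range(count):
        out.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]   # each column absorbs the one below
    return out

# Example: f(x) = x^2 + x + 41, Euler's prime-generating polynomial,
# reputedly one of Babbage's demonstration pieces.
poly = lambda x: x * x + x + 41
print(tabulate(difference_table(poly, 0, 1, 2), 6))
# -> [41, 43, 47, 53, 61, 71], matching direct evaluation
```

Once the first column of differences is set, every further value falls out of additions alone; no multiplication mechanism is needed.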

From those earliest dreams of man-made intelligence, we now face the potential nightmare of the AI “Singularity” – the point at which the machines become more intelligent than their creators and start a runaway process of rapid development.

Less well known is that in 1819 Babbage played chess against the infamous “Chess Turk” (originally built in 1770). Although the Turk1 was a clever hoax, more showmanship than technology, it is said to have set Babbage thinking about the possibility of machine intelligence. Could a pure machine be instructed to play a game of strategy? Sadly Babbage’s funding ceased and his machines remained on the drawing board.

It wasn’t until 1956 that a team of H-bomb researchers at Los Alamos, working on John von Neumann’s first MANIAC (computer) machine2, were able to deliver a genuine chess playing machine, although it could only cope with a 6x6 board without Bishops or the castling rule! This machine needed an average of 10 minutes to make a single move, only looking 2 moves ahead, and was only able to beat novice players. The sophistication of chess playing machines continued to improve as the processing capability of computers grew exponentially, from MANIAC’s eleven thousand operations per second to the more than eleven billion (floating point) operations per second of IBM’s Deep Blue machine, which famously beat chess grand master Garry Kasparov during a 1997 re-match.

Chess is just one very narrow example of intelligence: it has well defined rules and strategies that can be coded in a linear, procedural way. Even playing at the level of a grand master, Deep Blue relied on massive search power and hand-tuned evaluation rather than genuine learning. We are only now creating machines that demonstrate intelligence across many domains and learn in ways that even their creators do not fully understand. AlphaGo, for example, appeared to have learned how to bluff in order to win a game of Go against a world champion, while experimental Facebook chatbots recently had to be shut down after they started communicating in a language that was only intelligible to the bots themselves!

From those earliest dreams of man-made intelligence, we now face the potential nightmare of the AI “Singularity” – the point at which the machines become more intelligent than their creators and start a runaway process of rapid development. In recent years we have heard from luminaries including Elon Musk and Professor Stephen Hawking about the dangers to humanity, although Ray Kurzweil, the singularity’s most prominent advocate, thinks such worries are premature and that the singularity will not arrive until at least 20453.



My own third year project at university (1991) involved designing a computer processor optimised to run neural network simulations for pattern recognition, so this is a subject close to my heart. The recent surge of interest in AI set me reflecting on some of the significant milestones4 that have stuck in my mind and led us towards the realisation of a true AI. This is by no means an exhaustive list!

1950 Alan Turing’s Automatic Computing Engine (ACE) design is realised as the Pilot ACE at the National Physical Laboratory, and Turing, by now at Manchester University, begins experimenting with AI. He writes the paper ‘Computing Machinery and Intelligence’, which introduces the idea of a standard test for machine intelligence, which he referred to as the imitation game and is now known as the Turing test.

1959 Bernard Widrow and Marcian Hoff at Stanford University develop an artificial neural network using analogue electronics, known as ADALINE or “adaptive linear neuron”, which can “learn” using their LMS or “least mean square” algorithm. However, by the end of the 1960s this work appears to grind to a halt when Minsky and Papert show that a “perceptron” is unable to learn something as simple as an exclusive-or logic function (a simple digital circuit that only produces an output when its two inputs are different), referred to as the XOR problem; a short code sketch of this problem, and of the later fix, follows at the end of this timeline.

1965 Joseph Weizenbaum at MIT creates ELIZA, a program that chats (via a computer terminal) in plain English. Later this is developed into an interactive “psychotherapist” which was intended to “respond roughly as would certain psychotherapists (Rogerians)” according to Weizenbaum. People were even able to access it remotely via early networked time-sharing systems!

1972 MYCIN, one of the first “Expert Systems”, is developed at Stanford to assist with medical diagnosis. Written in LISP, it moves away from conventional coding and instead relies on defining rules and facts which are processed by an inference engine. Expert Systems prove to be the most successful branch of AI for many years to come.

1982 John Hopfield of Caltech presents a paper to the National Academy of Sciences describing a recurrent neural network that can store and recall patterns as an associative memory, now known as the Hopfield network. This discovery reignites interest in neural networks. Meanwhile Japan’s Ministry of International Trade and Industry initiates the Fifth Generation Computer Systems project, with a focus on massively parallel machines.

1986 Rumelhart, Hinton and Williams popularise a method for training multi-layer networks by passing an error signal backwards through the network, now known as back propagation, answering the XOR objection. The term Deep Learning is also first introduced; by 2000 it will become a widely used term.

1994 Draughts world champion Marion Tinsley resigns in a match against the computer program Chinook. Chinook goes on to win the USA National (Checkers) Tournament “by the widest margin ever.”

1998 The Furby is released, the first successful attempt at producing an AI for the domestic environment. In the same year Sir Tim Berners-Lee publishes his proposals for the Semantic Web.

2002 The Roomba is launched by iRobot. It vacuums the floor while “intelligently” navigating and avoiding obstacles, although it is prone to getting itself stuck in corners!

2009 Google unveils its self-driving car, which employs a laser range-finder mounted on the roof, combined with high-resolution maps of the world and GPS to allow it to navigate safely and obey local traffic laws. It also carries other sensors, including four radars, a camera (to detect traffic lights), an inertial measurement unit and wheel encoders. As for the processor hardware, Google keeps details close to its chest, but in 2016 it did reveal that the then-current solution included a number of ASIC (customisable logic) chips together with Nvidia (graphics card) processors and (possibly ARM, as used in mobile phones) processors.

2010 Professor Jim Austin explains to me his concept for the “Cool Computer”, which could revolutionise the construction of massively parallel machines, and I later join his research group at the University of York as a part-time MSc student.

2011 IBM’s Watson computer plays the TV game show Jeopardy! and convincingly defeats the reigning champions. Watson is based on IBM’s POWER7 processor. It contains 720 cores in total and delivers 80 trillion (floating point) operations per second – four orders of magnitude over Deep Blue.

2013 The start of the Human Brain Project, which includes the many-core SpiNNaker machine at Manchester University. It uses half a million ARM processors with a packet-based network for inter-node communication.

2015 Google’s DeepMind Technologies develop a system that is able to learn how to play classic Atari video games using only pixel data as its input.

2017 DeepMind’s AlphaGo program chalks up 3 wins against world Go champion Ke Jie. Later he said “There was a cut that quite shocked me… But, afterwards I analyzed the move and I found that it was very good. It is one move with two or even more purposes.”

The emergence of “big data” and the semantic web has undoubtedly fuelled the recent expansion in the capabilities of deep learning AI. One of the problems with neural-network based systems had always been the limited size of training sets available, but this is no longer the case, with the web providing a wealth of machine searchable data.

However, a last word of caution from SingularityHub.com: “Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common. The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.”
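As promised above, here is a minimal sketch of the XOR problem: no single straight line separates XOR’s 1s from its 0s, so a lone perceptron fails, but the same data is learnable once a hidden layer is trained with back propagation. This is illustrative toy code, not any historical implementation, and convergence depends on the random initialisation:

```python
import numpy as np

# XOR truth table: a single-layer perceptron cannot separate (0,1),(1,0)
# from (0,0),(1,1); a small hidden layer trained by back propagation can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                # plain gradient descent, lr = 0.5
    h = sigmoid(X @ W1 + b1)          # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # output error signal...
    d_h = (d_out @ W2.T) * h * (1 - h)      # ...passed backwards
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # converges towards [0, 1, 1, 0]
```

Remove the hidden layer and the same loop stalls at 0.5 for every input, which is exactly the objection Minsky and Papert raised.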



If we are to trust the future of cybersecurity to AI based systems then we need to be cautious, as even their developers still have much to learn about their own creations, however impressive the current achievements may be. As a former special operations reconnaissance team leader in the US army recently said to New Scientist magazine5 “The problem with AI is that it’s brittle. It can go from super-smart to super-dumb in an instant, making mistakes that are jarringly stupid for a human.”
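The “slight tweaks” the SingularityHub quotation describes can be seen even in a toy linear model: nudging each input feature by a small step in the direction of the model’s weights flips the verdict while barely changing the input. The weights, input and step size below are invented for illustration:

```python
import numpy as np

# Toy linear 'malware score': score(x) = w.x + b, with pretend trained weights.
w = np.array([1.5, -2.0, 0.7, 3.1])   # hypothetical learned weights
b = -0.5
x = np.array([0.1, 0.5, 0.1, 0.2])    # benign-looking input

score = lambda v: v @ w + b
print(score(x))                        # negative -> classified benign

# Adversarial nudge: move each feature a small step in the direction that
# increases the score (the sign of the gradient, which here is just w).
eps = 0.2
x_adv = x + eps * np.sign(w)
print(score(x_adv))                    # now positive -> verdict flipped
```

Each feature moved by at most 0.2, yet the classification reversed; richer models are attacked with the same gradient-sign idea.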

The emergence of “big data” and the semantic web has undoubtedly fuelled the recent expansion in the capabilities of deep learning AI.

With thanks to Elise Bikker, PhD research student at the University of York, for providing background on the early “thinking machines”. For further reading see:

1 Tom Standage (2002) The Turk: The Life and Times of the Famous Eighteenth-Century Chess-Playing Machine, First Edition, Walker & Company
2 Computing and the Manhattan Project [Available online at:] https://www.atomicheritage.org/history/computing-and-manhattan-project
3 Creighton, J. (2018) The “Father of Artificial Intelligence” Says Singularity Is 30 Years Away [Available online at:] https://futurism.com/father-artificial-intelligence-singularity-decades-away
4 Timeline of artificial intelligence [Available online at:] https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence
5 Hambling, D. (2018) US is seeking smart killer drones. New Scientist, 31 March 2018, p. 8



Talking Tech: Are IoT Devices Too Chatty?

Ken Munro
Partner, Pen Test Partners

Communication is at the heart of the Internet of Things. Devices need to be able to ‘talk’ to the applications that control them, such as mobile apps or web servers, and also to each other to relay information, but is the way they do so secure enough? Should we be more concerned about the data being relayed? What threat could this pose to the user if it is intercepted?

We are now welcoming IoT devices into our lives which track us, collect information and report data. Fitness trackers are a good example, gathering data on running or cycling routes and our training routines. Vulnerabilities within these devices have been exposed before, but what is often overlooked is the application or web server each is connected to.

FIT FOR PURPOSE

We looked at five fitness trackers controlled by mobile apps and found they paid scant regard to the user’s security. Default settings saw the user asked to consent to too many app permissions, encouraging them to over-share personal data, and one app allowed private runs to be viewed in real time, so the victim could be tracked. Yes, privacy settings were available, but they were often so poorly signposted as to be difficult to implement, and they certainly weren’t enabled by default, so mobile app data could be easily accessed via the website and used to target the user if they had left the app in default mode.

To their credit, all of the apps we looked at DID implement SSL, ensuring data passed between the website and the app was encrypted. However, this was rendered pointless due to the fact they all allowed weak passwords to be set, such as ‘password’. Four out of five were guilty of predictable session numbers (i.e. the website URLs for activity sessions were consecutive) and all enabled Google to index runs, meaning the website URLs were open to search engine spiders, making it easier to find personal data on the public internet.
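The weak-password failure described above is cheap to prevent. Here is a minimal sketch of the kind of server-side check these services lacked; the word list and rules are illustrative, not from any specific product:

```python
# Minimal server-side password check of the sort the tested services lacked
# (word list and rules are illustrative assumptions).
COMMON = {"password", "123456", "qwerty", "letmein", "iloveyou"}

def acceptable(password: str) -> bool:
    return (
        len(password) >= 10
        and password.lower() not in COMMON
        and not password.isdigit()
    )

print(acceptable("password"))                 # False: the failure described
print(acceptable("correct-horse-battery"))    # True
```

Real deployments would also rate-limit attempts and check against much larger breached-password lists.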

It’s these kinds of issues that can expose a brand to a breach. The MyFitnessPal incident back in February saw the user names, hashed passwords and email addresses of around 150 million users exposed. It’s not yet clear how the breach occurred, but we do know some account passwords were protected using the now outmoded SHA-1 hashing algorithm, leading parent company Under Armour to suggest users should change their passwords.

Thankfully, users are now starting to appreciate that it’s not just the app data that’s at risk in the event of a breach. They now realise that these services are part of their online presence, with one MyFitnessPal user stating, “I use a lot of the things like Facebook, a lot of social media, LinkedIn… One of my concerns is that everything is connected to the same types of emails and passwords”.

It’s this interconnected ecosystem that poses the greatest threat to our personal data. As these devices begin to communicate not just with the network but with one another, the potential for compromise increases.

We are now welcoming IoT devices into our lives which track us, collect information and report data. Fitness trackers are a good example, gathering data on running or cycling routes and our training routines.


SAFE AS HOUSES

The problem is that the home is assumed to be a secure environment in itself, with many manufacturers releasing IoT devices that don’t use any form of encryption. These devices communicate with the home network using plain text messaging and are, in effect, relying on the security of the router to provide security for them. If the Pre-Shared Key (PSK) for the WiFi is compromised, those devices have no further security and are defenceless.

Numerous IoT devices use a WiFi access point or Bluetooth Low Energy (BLE) to allow the app to provide the user’s SSID/PSK for connection. Some devices using BLE require no PIN whatsoever to make that connection, which is why devices such as the My Friend Cayla doll, the Fender Mustang GT 100 amp, and the Nespresso Prodigio and Jura E8 coffee machines have been criticised. No PIN equals no authentication, so anyone can use their phone to access that connection, communicate over it and use a doll to speak to your kids, make your guitar play Rick Astley or pour a coffee.

Even when a PIN is used for BLE handshakes, the tendency is to use a default numeric code (such as 0000 in the case of the Lovense Nora sex toy), which can be readily obtained online. What’s frustrating is that BLE is secure if implemented properly. The problem could be easily remedied by having a button on the ‘thing’ to force it to pair with the device using a PIN specified by the user.

When it comes to WiFi, most devices require the user to enter their PSK and it’s here where issues arise. For instance, the first generation of the Ring WiFi doorbell allowed the PSK to be stolen from outside the property simply by taking off the door mounting and connecting to it to trigger a request from the web server. The configuration details returned included the configured SSID and PSK in clear text, so there wasn’t even any need to crack them.

Another example was an earlier incarnation of the Smarter WiFi kettle, again allowing PSK extraction over WiFi in plain text. To obtain the key all we had to do was de-authenticate the kettle from its usual access point, create a fake access point with the same SSID and wait for the kettle to join it. We were then able to authenticate using a default PIN/key and the plaintext PSK for the network was then disclosed.



Once you have the PSK, you essentially have access to that user’s world via their router and can go on to harvest their log-in details for accounts used via that internet connection. It’s also possible to compromise the user’s other devices via the router. Change the router DNS settings and you can redirect users to websites hosting cloned Android apps containing malware, enabling the attacker to infect the user’s tablet or phone.

HACKABLE HUBS

Poor device security is being further exacerbated by some home hubs which provide a trusted mode of access to these devices. If that hub isn’t robust enough or grants too many permissions, it then acts as a single point of compromise to a whole host of smart tech. (Although one should point out there are smart hub market leaders out there that are improving network security by virtue of the controls they use.)

One example of a hackable hub is the Samsung SmartThings platform, which researchers at the University of Michigan were able to hack to gain control of the entire smart home system. The platform trusted the connection from devices to such a degree, and gave such wide-ranging access, that it allowed them to use existing apps to send rogue commands and even create an additional one to fool the system.

Overly liberal or elevated permissions can cause major issues in the IoT. By making the device too receptive to connections you essentially dumb down the security, making it possible for the hacker to craft code and carry out illicit activity. The University of Michigan team went on to create an app for the Samsung SmartThings platform that masqueraded as a battery level monitor when it was in fact eavesdropping on the PIN code for the front door, which the app captured and texted back to them.

When it comes to overly permissive permissions, mobile apps are undoubtedly the biggest offenders, yet worryingly this issue hasn’t been covered in the Government’s latest advice for IoT manufacturers in its ‘Secure by Design’ review. Look at the T’s and C’s of any major app provider and you’ll find wide-ranging permissions, and the same goes for IoT apps, with many granting access to geolocation data, granting permission to make calls, and to collect audio and video.

If those permissions are too accommodating it can lead to some interesting attacks. The Fender Mustang GT 100 amp mobile app previously mentioned also allows the user to download presets from other artists, enabling the user to create a set list with presets for each song. Because Fender failed to include permissions-based security, it’s possible to push a preset to the amp unchallenged.
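To make the permissions point concrete, here is a minimal sketch that scans a decompiled Android app manifest and flags permissions commonly treated as high-risk. The file path and the risk list are illustrative assumptions, not drawn from the ‘Secure by Design’ review:

```python
import xml.etree.ElementTree as ET

# Permissions often flagged as high-risk in app reviews (illustrative list).
HIGH_RISK = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.RECORD_AUDIO",
    "android.permission.CAMERA",
    "android.permission.CALL_PHONE",
    "android.permission.READ_CONTACTS",
}

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def risky_permissions(manifest_path: str) -> list[str]:
    """List high-risk permissions requested in an AndroidManifest.xml."""
    root = ET.parse(manifest_path).getroot()
    requested = {
        elem.get(f"{ANDROID_NS}name")
        for elem in root.iter("uses-permission")
    }
    return sorted(requested & HIGH_RISK)

# Example (hypothetical path to a decompiled IoT companion app):
# print(risky_permissions("decompiled/AndroidManifest.xml"))
```

A companion app for a coffee machine that requests microphone or contact access should prompt exactly the questions this article raises.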

Capturing traffic from mobile apps can also allow the attacker to reverse engineer them to determine key controls. We took this approach with the Mitsubishi PHEV SUV and were able to identify the binary protocol used for messaging allowing us to turn the lights on and off, change the charging rate, switch the aircon on and off, and deactivate the car’s alarm… all via the mobile app. Sophisticated equipment such as the connected car is now joining other devices that make up our personal IoT ecosystem, increasing the attack surface. It’s now vital we realise that it’s not just the security of each device but HOW they connect, communicate and relay data that needs our attention. Today, regulation seems to be one step behind, with guidelines failing to consider how these devices might impact one another. It’s down to the security industry to increase awareness and stop tech talking indiscriminately.

Capturing traffic from mobile apps can also allow the attacker to reverse engineer them to determine key controls.



Data Security in the Post-Truth World

Dr. Char Sample
Technical Director, Fellow
ICF International



2016 ushered in the post-truth world for the West. While the political events were certainly worthy of attention, the post-truth environment had existed in other nations (Estonia, Ukraine, etc.) prior to 2016. Now that the post-truth environment has the attention of the West, the question of the best approach to the problem is of critical importance.

The scope of post-truth data extends beyond the geopolitical realm and, as noted in a previous article (CyberTalk 2017), post-truth data can occur within security systems, embedded devices and any system that depends on the fidelity of digital data for storage, processing and transfers. This forces us to address several topics regarding data, information and security in the post-truth world. So serious is this problem that NATO and the EU have taken an interest in combatting deceptive data. In 2017 NATO launched the Hybrid Centre of Excellence (COE) for Countering Hybrid Threats, https://www.hybridcoe.fi/. This site provides an opportunity for policy makers and scholars to gather the information necessary to address these emerging threats in a unified manner.

In anticipation of this site coming online, Russia created a deceptive site, http://hybridcoe.ru/, that bears a striking resemblance to the Finnish site. The webpage backgrounds are the same and the overall goals appear, at first glance, to be the same. This Russian site claims to promote peace, friendship and information sharing with the desire to address deceptive data. In reality, the site links to deceptive data.

The traditional security response would be to warn others of the fake site and attempt to isolate the site. In fact, Twitter has already suspended the advertised account. However, by suspending the account we miss an opportunity to study misinformation at the source. We may also miss an opportunity to gain valuable insights into the true goals of those who spread deceptive data.

One common goal of deceptive data is Wetware hacking. Wetware hacking is the updated term that replaces Cybenko’s original term from 2002, cognitive hacking. This form of hacking is relatively inexpensive and places the human at the end of the computer as the target. Wetware hacking has several goals, including confusion, paralysis, demoralization, subversion and, in some cases, blackmail. Ultimately, the central goal is to prevent the target from making a specific decision, or to make them abandon their initial choice. Should the target do nothing, or change their choice, the objective has been met.

Why should cybersecurity be concerned with Wetware hacking? The recent news stories about Cambridge Analytica and social media have many people concerned about their personal information being available, but the problem is significantly larger than the revelation of PII; the real problem is the revelation of users’ personal thoughts. The physical-world equivalent would be a group of behavioral scientists reading, analyzing and predicting a person’s behaviours from their diary. Considering the vast amounts of data available on user preferences, beliefs and values, the application of big data algorithms makes Wetware hacking a manageable task. Empowered with this information on preferences and values, an organization can, with the aid of artificial intelligence, create discussion topics framed in points that elicit the desired reactions. This ability enables one of the primary goals of Wetware hacking in support of hybrid warfare: have the target do the work for the source.

One question that must be considered in this discussion: is the truth important? If so, who determines what is true and what is false? The traditional “truth tellers”, the free press, are no longer considered the guardians of truth. The press has shown that, just like the public, they too can be manipulated. Furthermore, as the 2017 Trend Micro report on ‘Fake News’ showed, reporters can be discredited, and protests and other events can be staged and reported as news, all for a fee. Online content carriers have long argued that they are not responsible for content, implying that the truth does not matter. These providers argue that their job is to deliver the data, not pass judgment upon it. If we place our trust in whoever gives us the data, and they claim they are not responsible for what they give us, the proverb “trust but verify” becomes a rule to live by.

The attack on trust indirectly attacks truth, but if the truth is still important, then the truth must be sought in ways that are less dependent upon trust and are based on shared knowledge instead. Can the truth be quantified? Can deviations from ground truth be measured and ultimately evaluated?

The entity trust model upon which security was built has been compromised. The proposed solutions range from content censorship to anarchy, or simply ignoring the problem. In reality, a new paradigm is needed to address this problem, and it requires all of us, security professionals in particular, to challenge some of our fundamental assumptions.

The fundamental trust assumption has been shown to be a problem. The Internet has grown and changed dramatically since its early days, yet the same trust models that were used at inception remain in place. Over the course of the previous decades, extraordinary work was performed in the area of trust. Arguably, this work was performed at the expense of verification. If the truth really matters, then verification will become as important as trust.
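One small, automatable piece of that verification is flagging lookalike domains such as the hybridcoe.ru/hybridcoe.fi pair described above. A minimal sketch; the similarity threshold is an illustrative assumption:

```python
from difflib import SequenceMatcher

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity of two domain names, ignoring the top-level domain."""
    name = domain.rsplit(".", 1)[0].lower()
    ref = trusted.rsplit(".", 1)[0].lower()
    return SequenceMatcher(None, name, ref).ratio()

# The pair discussed above: identical name, different country TLD.
score = lookalike_score("hybridcoe.ru", "hybridcoe.fi")
if score > 0.8:   # illustrative threshold
    print(f"possible impersonation (similarity {score:.2f})")
```

Simple string similarity will not catch every homoglyph trick, but it illustrates how verification can be made routine rather than reactive.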



Not only does the truth matter; in order for the truth to be verified, the data must be examined in the context of its environment. The relationship between truth and environment must be deeply and fully understood. Once this is accomplished, the truth can be made resilient. When artificial environments shape and reinforce a user’s perception of the truth, a resilient truth may become an anchor of trust. This trust anchor could become a welcome entry in a world where truth has become subjective.

Not only does the truth matter, in order for the truth to be verified the data must be examined in the context of its environment.

In their current design, encryption, digital signatures and the blockchain cannot solve the data verification problem. Each of these solutions places trust in an entity, not in the package that the entity delivers. In the post-truth world, lying is simply an alternative view. Thus, data must be evaluated within context. The environment in which the data is created is as important as the data, and the variables that represent the environment must be captured and evaluated along with the data. This will mean that our security solutions must become aware of the environment that they support.

The capture of the necessary environmental data has precedents in other disciplines such as resilience, reliability and dependency modeling. These disciplines recognize the role of the environment, and the interaction between the events that data represents and the environmental variables that are intertwined with the data. Resiliency models capture the perturbations that result from the interaction between the data object and the environment. This interaction can be thought of as the ripple that occurs when a stone is thrown into a pond, where the stone represents the object and the water the environment. The displacement that occurs, and how that is measured and defined, is the resiliency component. All of these disciplines support the modeling of objects and their related environmental variables as they support a specific mission.

Historically, these disciplines have not been fully investigated in security research. However, now that the information domain has been shown to be vulnerable, there is renewed interest in the insights that they add. These disciplines, among others, add support to the idea that cyber security must be a cross-discipline endeavor. Cyber security crosses many boundaries; cross-discipline research that can support all aspects of the information domain offers our best chance at countering the post-truth reality in which we currently reside.

In this new environment where data is the new currency, the ability to value this new currency is increasingly important. The information domain requires a new set of solutions that can address the problems that are unique to this domain. These solutions require us to change our paradigms, and to apply models from other disciplines, in our desire to make the truth easily distinguishable from deceptive data.
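As a deliberately simple illustration of evaluating data within its environment, the sketch below stores environmental variables alongside each record and scores a record against the environment it claims to come from. Field names, bounds and the scoring rule are illustrative assumptions, not an established model:

```python
from dataclasses import dataclass

@dataclass
class Record:
    value: float        # the datum itself
    source: str         # where it claims to come from
    latency_ms: float   # environmental variable captured with the data
    hop_count: int      # another context variable

# Expected environment per source, learned from past observations (invented).
EXPECTED = {"sensor-a": {"latency_ms": (5.0, 40.0), "hop_count": (1, 3)}}

def context_score(rec: Record) -> float:
    """Fraction of environmental variables consistent with expectations."""
    bounds = EXPECTED.get(rec.source)
    if bounds is None:
        return 0.0   # unknown environment: no basis for trust
    checks = [
        bounds["latency_ms"][0] <= rec.latency_ms <= bounds["latency_ms"][1],
        bounds["hop_count"][0] <= rec.hop_count <= bounds["hop_count"][1],
    ]
    return sum(checks) / len(checks)

r = Record(21.5, "sensor-a", latency_ms=250.0, hop_count=7)
print(context_score(r))   # 0.0: the data arrived from the 'wrong' environment
```

The point is not the arithmetic but the shape: the verdict attaches to the data-plus-environment pair, not to the entity that delivered it.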



THE PROBLEM WITH FIREWALLS

Chris Green
Senior Product Marketing Manager, Sophos

WHY “MORE OF THE SAME” IS NO LONGER AN OPTION

There’s an evolution underway in firewalls that’s different from any previous generation. Shifts in the threat landscape, a dramatic increase in the number and complexity of technologies that sysadmins have to deal with, and a flow of data that is drowning the signal in the noise have created a perilous situation that is putting security at risk. A recent survey of IT administrators identified that most firewalls in use today:

• Force admins to spend too much time digging for the information they need.
• Don’t provide adequate visibility into threats and risks on the network.
• Make it too difficult to figure out how to use all their features.

Dealing with this situation means taking a radical new approach to network security: one that can enable security systems to work together; that simplifies and streamlines workflows; that cuts through the enormous volumes of data to identify what is important.

So how did we get here?

HOW FIREWALLS GOT WORSE AS THEY GOT BETTER

Originally firewalls provided basic network packet filtering and routing based on hosts, ports and protocols. They enforced the boundary between a network and the rest of the world, and patrolled the boundaries within that network. These firewalls were effective at limiting the exposure of services to just the computers and networks that needed access to them, reducing the attack surface available to hackers and malware on the outside.

Of course attackers don’t stand still, so attacks evolved to exploit the services that firewalls left exposed: attacking vulnerabilities in applications and servers, or using social engineering to gain a foothold inside a network through email or compromised websites. Firewall technology evolved too, moving up the OSI stack to Layer 7, where it could identify and control traffic based on the originating user or application, and where deep inspection technologies could look for threats inside the content of application traffic.

This shift from ports and protocols to applications and users has spawned a new category of network protection, so-called “Next-Generation” Firewalls that include deep packet inspection of encrypted and unencrypted traffic, intrusion prevention, application awareness and user-based policies, alongside traditional stateful inspection techniques.

As a result, modern firewall products have become increasingly difficult to operate and manage, often leveraging separate and loosely integrated solutions to tackle different threats and compliance requirements. Poor integration can leave sysadmins with blindspots. The volume of data these systems produce can be enormous and the burden for the average network administrator has reached unsustainable levels.

Network security demands a more thorough approach to the integration of complex technologies, and a new breed of firewall is required: one that has been developed from the start to address the problems of existing firewalls and provides a platform designed specifically to tackle the evolving threat and network landscape.

HOW FIREWALLS MUST IMPROVE

This new type of firewall must deal with modern threats that are more advanced, evasive, and targeted than ever before. These advanced persistent threats (APTs) use techniques that create a new zero-day threat with every instance, presenting a serious challenge for signature-based malware detection. Modern firewalls must:

1. Identify malicious behavior and give you unprecedented visibility into risky users and risky behavior, unwanted applications, suspicious payloads and persistent threats.
2. Work with other security systems, such as endpoint solutions, operating as one to detect, identify, and respond to advanced threats quickly and efficiently.
3. Use dynamic application control technologies that can correctly identify and manage unknown applications, which signature-based engines miss.
4. Integrate a full suite of threat protection technologies so that network administrators can set and maintain their security posture at a glance.

Firewalls must regain their place as your network’s trusted enforcer, blocking and containing threats and stopping the unauthorised exfiltration of data.


XG FIREWALL

The world’s best visibility, protection and response. Stop Unknown Threats. Dead. Harness the power of deep learning, deep packet inspection and aggressive run-time analysis to stop threats before they get on your network.

www.sophos.com/firewall


Dr. Felix Hovsepian Ph.D

Introduction

Biologically inspired Artificial Intelligence is a vibrant and exciting field of study, and in the late 1980s the concept of Swarm Intelligence1 emerged as a prominent sub-field. Swarm intelligence derives its inspiration from the collective behaviour of a large number of individual (intelligent) agents that interact locally with one another and with their environment – moreover, it is through this collaboration that they achieve a common goal.

The local interactions between the individual agents are often captured by simple rules; what is far more interesting is the absence of any kind of Command & Control structure that dictates how the individual agents should behave. These kinds of systems are often referred to as decentralized, autonomous, self-organizing systems. From their collective (local) behaviour emerges a global behaviour for the whole swarm, which is often unknown to the individual agents – behaviour that we typically would call ‘intelligent’.

Note: Beni & Wang1 originally defined the term Swarm Intelligence for cellular robotic systems that consisted of “… collections of autonomous, non-synchronized, non-intelligent robots cooperating to achieve global tasks”. Nature has many examples of such collaborations: beehives, ant colonies, termite mounds, etc. In addition, this natural ability to swarm also exists among birds, fish, bacteria, etc. Understanding Termite Mounds2 provides some nice examples of the kinds of structures that termites are capable of building – please note that no two (termite) mounds are the same. Dr. Louis Rosenberg has undertaken some very interesting research in which he reviewed the biological foundations of swarm AI and produced some examples of how humans can swarm by building a distributed Hive Mind3.

The best way to get a feel for the capabilities of the collective is to take a look at this short video, in which drones are programmed to behave like a swarm – notice the way they navigate past an obstacle6.

Boids

One of the earliest examples of swarms appeared with the work of Craig Reynolds in 1986, a software simulation in which he was able to model the flocking behaviour of birds. Reynolds called this “Boids”7, and you can watch the original Boids simulation in this video8. The word “Boid” refers to a bird-like object – technically, Boids are examples of what have now become known as “Artificial Life”.
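To make the flocking rules concrete, here is a minimal Python sketch of a Boids-style update based on the three steering behaviours commonly attributed to Reynolds’ model (separation, alignment, cohesion). The radii, rule weights and flock size are illustrative assumptions, not values from the original simulation, and real implementations usually also cap each boid’s speed.

import math
import random

NEIGHBOUR_RADIUS = 50.0   # how far a boid "sees" (assumed value)
SEPARATION_RADIUS = 10.0  # "too close" distance (assumed value)

class Boid:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def step(self, flock):
        # Purely local rule: a boid only reacts to neighbours it can "see".
        neighbours = [b for b in flock if b is not self
                      and math.hypot(b.x - self.x, b.y - self.y) < NEIGHBOUR_RADIUS]
        if neighbours:
            n = len(neighbours)
            # Cohesion: steer towards the local centre of mass.
            cx = sum(b.x for b in neighbours) / n - self.x
            cy = sum(b.y for b in neighbours) / n - self.y
            # Alignment: match the average heading of neighbours.
            ax = sum(b.vx for b in neighbours) / n - self.vx
            ay = sum(b.vy for b in neighbours) / n - self.vy
            # Separation: move away from boids that are too close.
            sx = sum(self.x - b.x for b in neighbours
                     if math.hypot(b.x - self.x, b.y - self.y) < SEPARATION_RADIUS)
            sy = sum(self.y - b.y for b in neighbours
                     if math.hypot(b.x - self.x, b.y - self.y) < SEPARATION_RADIUS)
            self.vx += 0.01 * cx + 0.05 * ax + 0.1 * sx
            self.vy += 0.01 * cy + 0.05 * ay + 0.1 * sy
        self.x += self.vx
        self.y += self.vy

flock = [Boid(random.uniform(0, 200), random.uniform(0, 200)) for _ in range(30)]
for _ in range(100):        # no global controller: each boid follows local rules
    for boid in flock:
        boid.step(flock)

Note that nowhere in the loop is the flock told where to go: any flocking that emerges is a global behaviour arising from local rules, which is exactly the point made above.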

Intelligent Agents & Swarm Technology

Intuitively, an intelligent agent is a goal-directed, adaptive entity which has the ability to learn and reason in order to achieve its goals in an optimal manner4.

Nevertheless, there are two concepts that are of particular significance. The first is autonomy, which means each agent acts in an autonomous manner, and therefore one can rightly consider these agents as part of a decentralized, distributed, self-organized collective. The second is adaptation, the definition for which comes from the work of the late Prof. John H. Holland5:

“Adaptation is any process whereby a structure is progressively modified to give better performance in its environment”

This definition implies that intelligent agents must be able to recognize events, and then take appropriate actions within their environment based on those events – in other words, as systems designers we need to provide a definition for the two concepts Observe and React. Notice that nothing we have mentioned states that these intelligent agents must have a physical presence, as portrayed by a fictional entity such as “The Terminator”; it is perfectly acceptable to have a collective whose members are purely virtual entities, such as a piece of software that exists on the Internet.
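As a systems-design illustration of the Observe and React pair described above, the sketch below gives each concept a concrete definition for a purely virtual agent. The event representation and the alert threshold are assumptions made for the example, not part of any formal agent framework.

class Agent:
    def observe(self, environment):
        # Observe: recognise an event in the environment
        # (here, simply a numeric reading; an assumption for illustration).
        return environment.get("reading", 0)

    def react(self, event):
        # React: take an appropriate action based on the observed event.
        return "raise_alert" if event > 10 else "do_nothing"

agent = Agent()
print(agent.react(agent.observe({"reading": 42})))  # -> raise_alert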



A few examples of biologically inspired AI in Cyber Security

The idea of using swarm intelligence in this field is not new. Prof. Fulp was undertaking this kind of research with Glenn Fink of Pacific Northwest National Laboratory when they came up with the idea of copying ant behaviour9: “Swarm intelligence, the approach developed by PNNL and Wake Forest, divides up the process of searching for specific threats.”

Derek Manky, global security strategist at Fortinet, recently predicted that10 “… cyber criminals will replace botnets — large numbers of infected internet-connected devices controlled by hackers — with intelligent clusters of compromised devices called ‘hivenets’ to create more effective attack vectors. ‘Hivenets will leverage self-learning to effectively target vulnerable systems at an unprecedented scale.’” Moreover, he stated that “They will be capable of talking to each other and taking action based off of local intelligence that is shared.”

Historically, security is often an afterthought. Nevertheless, with the advent of the Internet of Things and the possibility that IoT devices are likely to swarm in the near future, it is imperative that the designers and creators of such systems understand the consequences of being lax about security. According to UC Berkeley’s Swarm Lab, “The Swarm gives rise to the true emergence of concepts such as cyberphysical and cyber-biological systems, immersive computing, and augmented reality.” Of course, one should also be cognizant of the converse scenario, namely security issues related to swarm applications, which are addressed in this article11.

Nevertheless, the application of AI to the world of cyber security is not a futuristic notion – it is already a reality. One organization that has been particularly successful in this field is Darktrace12; their CEO, Nicole Eagan, had this to say in a recent interview13:

“... this is going to be a full-on arms race, machines against machines, mathematical algorithms against mathematical algorithms.”

Furthermore, consider the terminology used by Nicole Eagan to explain the manner in which their systems function14: “We talk a lot about the human immune system. We’ve found it’s a very effective analogy because boards of directors can understand it, non-technical people can understand it, as well as deep technical people. We’ve got skin, but occasionally that virus or bacteria is going to get inside.”

Conclusion

The intention of this article was to introduce one of the most exciting areas of research, Biologically inspired Artificial Intelligence, and Swarm Intelligence in particular. Some interesting developments have taken place in the last couple of decades which have the potential to be of immense benefit to humanity. For example, researchers have developed technology that allows humans to control drones using thought15 – one can easily imagine how this kind of technology will benefit those who have a physical disability.

As with any very advanced technology, swarm intelligence has its dark side: when combined with drone technology it becomes a lethal combination. This combination is so troubling to distinguished AI researchers all over the globe that they have produced a short fictional video about AI-powered weapons that makes “The Terminator” look like a Disney movie16, in an attempt to convey the dangers that such entities will pose for humanity. Stuart Russell, a Professor of Computer Science at the University of California, Berkeley, has stated: “This short film is more than just speculation. It shows the results of integrating and miniaturizing technology that we already have.”

Artificial Intelligence is at the core of what Prof. Klaus Schwab, the Executive Chairman of the World Economic Forum, has termed “The 4th Industrial Revolution”, and it is unlike any other technology created by humanity. As such, it is imperative that we treat this technology with a great deal of thought and care before releasing it into the wild.


References

1. “Swarm Intelligence in Cellular Robotic Systems”, Gerardo Beni & Jing Wang, https://link.springer.com/chapter/10.1007%2F978-3-642-58069-7_38
2. “Understanding Termite Mounds”, http://www.withoutahitch.com.au/travel/understanding-termite-mounds/
3. “What is Swarm AI?”, https://youtu.be/xWSkbsIRNMg
4. “Artificial Intelligence: A Modern Approach”, Stuart Russell & Peter Norvig, Pearson, 3rd edition
5. “Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence”, John H. Holland, https://mitpress.mit.edu/books/adaptation-natural-and-artificial-systems
6. “A Swarm of Nano Quadrotors – Crazy must watch”, https://youtu.be/2la4pIyXOEQ?t=1m9s
7. “Flocks, herds and schools: A distributed behavioral model”, Craig Reynolds, 1987, SIGGRAPH ’87: Proceedings of the 14th annual conference on Computer graphics and interactive techniques, Association for Computing Machinery: 25–34, doi:10.1145/37401.37406
8. “Craig Reynolds – Original 1986 Boids simulation”, https://youtu.be/86iQiV3-3IA
9. “Swarm Intelligence Could Transform Cyber Security”, http://www.govtech.com/dc/articles/Swarm-Intelligence-Could-Transform-Cyber-Security.html
10. “‘Swarm’ cyber attacks, crypto-currency stealing malware predicted for 2018”, https://www.smh.com.au/technology/swarm-cyber-attacks-crypto-currency-stealing-malware-predicted-for-2018-20180108-p4yyaz.html
11. “Security Challenges in Swarm Intelligence”, https://www.researchgate.net/publication/293176049_Security_Challenges_In_Swarm_Intelligence
12. “From The Olympics To A Unicorn: How ‘Cyber Immune System’ Darktrace Hit A $1.3BN Valuation”, Forbes, https://www.forbes.com/sites/thomasbrewster/2018/05/15/darktrace-unicorn-one-billion-valuation/#48f1262420b7
13. “Darktrace CEO Nicole Eagan talks autonomous response on CNBC Asia”, https://vimeo.com/226874387
14. “Firewalls Don’t Stop Hackers. AI Might.”, Backchannel, https://www.wired.com/story/firewalls-dont-stop-hackers-ai-might/?mbid=social_twitter_onsiteshare
15. “This cutting-edge technology lets you control multiple drones with your mind”, https://thenextweb.com/video/2016/07/19/wait-cutting-edge-technology-lets-control-multiple-drones-mind/
16. “This fictional video about AI-powered weapons makes The Terminator look like a Disney film”, https://thenextweb.com/artificial-intelligence/2017/11/14/this-fictional-video-about-ai-powered-weapons-makes-the-terminator-look-like-a-disney-film/


During the US Senate’s grilling of Mark Zuckerberg on 10 April 2018, one Senator—Senator Kennedy of Louisiana—cut through the polite posturing and carefully parsed language: “Here’s what everybody’s been trying to tell you today, and I say this gently. Your user agreement sucks.” The statement raises a number of interesting and important questions: What is a user agreement? How and why do they suck? And, perhaps most importantly, what can we do about it?

1. ToS = Social Contract 2.0

User agreements, or Terms of Service (ToS) documents, are legal contracts between an online service provider, like Facebook, and its users. These agreements, which constitute the principal governing documents of online social interaction, spell out the expectations, responsibilities, and liabilities of both parties to the contract. In effect, the ToS for a social media platform like Facebook or Twitter is what political philosophers Thomas Hobbes and John Locke call “a social contract.” The ToS document, therefore, not only stipulates the terms and conditions of the commercial relationship but also articulates, structures, and regulates the opportunities, liabilities, and exposures that are involved with using the service.

2. Why ToS Suck

Virtually all ToS documents suck, and for a number of reasons. First, these agreements are designed to be top-down and authoritarian. In issuing a ToS, the corporate service provider reserves for itself the exclusive right to dictate terms, and it does so, not surprisingly, in a way that tends to favor the sovereign organization and its interests.

David J. Gunkel Distinguished Teaching Professor Department of Communication at Northern Illinois University



This produces not an open public space, as some users have mistakenly assumed, but a closed, proprietary environment that is under the sole authority and regulation of a private corporation. Despite appearances, Facebook is a “company town,” and when it comes to data security and personal privacy, the company can do with you and your data what it decides is best for its operations.

Second, and following from this, users have little choice but to consent and submit to the authority and rule imposed by the sovereign power, or what we could call, following Thomas Hobbes, the “Digital Leviathan.” The ToS document is formulated as a non-negotiable contract of adhesion, meaning that the ToS is offered on a “take-it-or-leave-it” basis such that users can either accept the terms of the agreement as written or not at all. The standard ToS, therefore, typically provides for only one of two possible options: “I agree” or “Cancel.” Facebook, for its part, provides new users with only one green button labeled “Sign Up.” The fine print situated above the button reads as follows: “By clicking Sign Up, you agree to our Terms, Data Policy and Cookies Policy.” New users, therefore, have only one of two options: click the button and agree to Facebook’s terms, or leave. For this reason, the relationship between the two parties to the contract is deliberately asymmetrical and not open to negotiation.

Third, these documents are often long, deliberately opaque, and confusing, producing what some political scientists have called an “obfuscatocracy.” Despite this (or perhaps because of it), users are obligated to consent to the stipulations listed in the document whether they have actually read and understood them or not. Available evidence—both anecdotal and empirical—suggests that users either do not read the agreements at all or quickly skim the text without necessarily recognizing or understanding the contractual stipulations contained therein. Consequently, much of what Zuckerberg told Congress about his organization’s collection and use of user data, the privacy protections individuals have (or do not have), and the various tools by which to control these things was already stipulated in Facebook’s user agreement. But this was news to most of us, because most users never actually read the terms of the contract. We simply skipped over the complicated legalese and clicked on the big button labeled: Sign Up.

Fourth, ToS documents are a moving target. Buried in the fine print of most ToS documents is a stipulation that enables service providers to change the terms of the relationship at their discretion, often without duly informing users. In other words, the sovereign organization typically grants to itself the sole authority to make changes to the governing documents and can do so at its pleasure, with or (more often than not) without notification or review. Indicative of this is Blizzard Entertainment’s ToS for World of Warcraft: “Blizzard reserves the right, at its sole and absolute discretion, to change, modify, add to, supplement or delete, at any time, any of the Terms and Conditions of this Agreement, any feature of the Game or the Service, hours of availability, content, data, software or equipment needed to access the Game or the Service, effective with or without prior notice.”

Facebook, for its part, has sought to be more open and transparent about modifications to its terms, but Zuckerberg and company still reserve for themselves the sole right to initiate and execute modifications: “We may need to update these Terms from time to time to accurately reflect our services and practices. Unless otherwise required by law, we will notify you before we make changes to these Terms and give you an opportunity to review them before they go into effect. Once any updated Terms are in effect, you will be bound by them if you continue to use our Products.” Although this sounds more transparent, if users have not read the initial terms (including this stipulation about changes to the terms), then they probably will not be aware of, actively keeping track of, or involved in reviewing a change even if Facebook does provide a notification.

For this reason, the rules of the game as stipulated in the ToS are not static, and what applied yesterday may no longer be in play as of this morning. To make matters worse, there is no legal requirement (for now at least) that service providers archive previous versions of their ToS or document alterations over time. Apparently it is the responsibility of the user to keep an eye on these things and to track the changes.



3. What Does this Mean, and What Can We Do About it?

In the wake of Zuckerberg’s testimony before the US Congress and his revelations concerning Facebook’s ToS, the question we all must face is this: What now? I have three suggestions for how to move forward.

First, ToSs are important and influential. These documents, which in the case of Facebook involve and apply to an estimated 2.2 billion users worldwide, represent a privatization of the political, as individuals now form social affiliations under the sovereignty not of national governments located in geographically defined regions but of multinational corporations that form and operate in excess of terrestrial boundaries. If declarations, constitutions, and national charters were the standard governing documents of the modern era, organizing and legitimizing the nation state as we know it, then it is the ToS that occupies a similar position in the postmodern era, articulating the foundation of social and political affiliations for a post-nation-state, multinational polity.

Second, we can no longer ignore or avoid reading the ToS. Despite the fact that these governing documents prescribe and regulate the rights and responsibilities of both providers and users, dictating the terms and conditions of online social interaction and affiliation, many of us—even those who would self-identify as politically active and socially aware—either ignore these texts as unimportant or dismiss them as a kind of “legalese” necessary to obtain access to services but not very interesting in their own right or worth serious consideration. But as we have now learned in the wake of the recent crisis with Facebook, not reading the ToS is one surefire way to stumble into a myriad of problems—problems for the user of the service but also problems for the service provider.

Third, being critical of a ToS or any other document governing operations on a social media platform or other online service does not mean that one simply opts out or #DeleteFacebook. It would be naïve to expect that any social organization, whether in the so-called “real world” or in a virtual world, will be able to get everything correct right from the beginning. Instead of opting out, we can and should actively engage these new social systems, capitalizing on their opportunities while remaining critical of the limitations of their existing social contract and advocating for improvements. What is needed, therefore, is not mere opposition and abstinence, but rather informed involvement and critical participation. This effort can and should be supported by appropriate federal and/or state regulations that recognize and can address the asymmetries of power that are already part and parcel of any ToS agreement. It is only by working together—users, service providers, and government—that we can begin to develop the terms of a Social Contract 2.0 that does not suck.

For more information, see David J. Gunkel, Gaming the System: Deconstructing Video Games, Game Studies and Virtual Worlds. Indiana University Press, 2018.



Dermot Turing

In considering the functions of the mind or the brain we find certain operations which we can explain in purely mechanical terms. This we say does not correspond to the real mind: it is a sort of skin which we must strip off if we are to find the real mind. But then in what remains we find a further skin to be stripped off, and so on. Proceeding in this way do we ever come to the ‘real’ mind, or do we eventually come to the skin which has nothing in it? In the latter case the whole mind is mechanical.

Alan M. Turing, Computing Machinery and Intelligence (1950) 59 Mind 433–460



After leaving Bletchley Park at the end of World War 2, Alan Turing was hired by the National Physical Laboratory to design, and then programme, Britain’s national electronic computing machine. The design was done in a few months, then he turned to writing programmes for the as yet unbuilt machine. Getting it built took until late 1958 – thirteen years – by when, of course, many of its features were out-of-date.

While waiting for the seemingly endless machinations of the post-war civil service engineers, Alan Turing turned his mind to the potential for computing machinery to modify its behaviour as a result of inputs given by the operator. In a paper which was withheld by the NPL and not published until 1968, fourteen years after Turing’s death, Turing developed an idea of artificial neural networks which could be trained. What this work signified was programming which could modify itself, or learn, according to the ‘stimulus’ provided by the operator, ‘correcting’ the behaviour of the program. To put it differently, Alan Turing was talking and writing about the first steps in Artificial Intelligence.
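Turing’s unpublished networks are not reproduced here, but the flavour of stimulus-driven ‘correction’ can be suggested with a modern perceptron-style sketch, in which an operator’s corrections progressively modify the program’s behaviour. The task (learning logical AND), the learning rate and the epoch count are assumptions made for illustration.

# Not Turing's 1948 trainable networks -- just a minimal modern illustration
# of a program whose behaviour is modified by an operator's corrections.
def train(examples, rate=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - out        # the operator's 'correction'
            w[0] += rate * error * x1   # the stimulus adjusts the weights...
            w[1] += rate * error * x2
            b += rate * error           # ...so the program 'learns'
    return w, b

# Teach the program the logical AND function by correction alone.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train(AND))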

Famously this work spilled out into the public domain, rather to the dismay of the late-1940s academic community, when the existence of the Manchester University ‘electronic brain’ received press coverage in 1949. That led to a spirited debate, lasting well into the next few years, in newspapers and on the radio, about whether machines could or would be able to ‘think’. Somehow, that question has never really gone away. As learning algorithms get smarter, we tend to take them for granted as soon as we have become accustomed to them, and their behaviours are assimilated into the boring body of mechanical stuff that machines can do, whereas the really difficult business of thinking is left to animate, biological organisms with hugely complex brains – such as humans.

So perhaps it is not that surprising that professors working on AI research are very dismissive of the popular fears of robots taking over the world. After all, they know that they are working on something small and specialised, like voice recognition, or translation engines, or pattern-spotting to identify terrorist activities, and it is almost unimaginable that such limited-scope learning could turn itself into something more terrifying.

What people continue to fear are self-aware autonomous systems, which have developed their learning and creative abilities to a point where they are not only better than humans at that mysterious business of ‘thinking’, but (probably as a direct consequence) have become adept at evading any controls and boundaries that the silly humans tried to put in place to contain the system. Then the system could take over, and that would spell out the end of life as we know it. Sometimes the dystopian nightmare involves the autonomous system taking the form of a ‘robot’, which (despite the fact that plenty of research dollars go into making cutesy robots which have humanoid shapes) can wander round the country in B-movie style, snatching up starlets and generally causing havoc. Perhaps the professors are right to pooh-pooh this kind of throwback nonsense.

Let’s agree that the humanoid robots are not going to take over the universe, at least not just yet. For one thing, to acquire malevolent intentions (like those of the Cybermen in the 1970s series of Doctor Who) someone would have to program them to malevolence, and also give them the ability to replicate – in other words, control of the manufacturing processes together with the objective of self-replication. It’s not inconceivable, but it does seem fanciful.

However, there is something which the learned professors may be ignoring. Control and containment of artificial intelligence is not an established or popular academic discipline. The research dollars go into learning systems, and especially into the bizarre headline-grabbing idea of self-driving cars. So who is watching the growth and development of autonomy in artificial systems? Consider the following:

• Computer programmes which self-replicate and distribute themselves all over the network have existed for decades. Ask anyone who has suffered a virus, a distributed denial-of-service attack, a zero-day event or a ransomware problem.
• Computers are not single entities. They are all connected, over something called the internet, in a single, distributed, un-located family. The internet was designed specifically to cope with and heal outages in specific nodes (computers). The locus of an autonomous system is not ‘a server somewhere’ but hyperspace. It’s better to think of a hyperintelligent AI as behaving like an ant colony, where the organism is the colony (the population of computers) rather than any individual insect (item of hardware or indeed any item of software).
• Computer programmes have been designed which have demonstrated both autonomy and imagination (creativity). Programs in the AI space are designed to have evolutionary capability.

If you put all those things together, you have the potential for an ‘escaped system’. In fact, it is difficult to see how any super-intelligent system could not already, ex hypothesi or ab initio, be distributed, self-replicating, unlocalised, autonomous and creative, which is to say beyond the ability of those thinking humans to control and contain. Perhaps, then, escaped systems are an inevitable side-effect of AI research; control and containment are not being widely studied because it is the AI equivalent of researching the Flat Earth hypothesis. To prepare a research proposal to investigate the factually impossible is rather hard.

Let us then assume that it’s not possible to prevent an escaped super-intelligent artificial system from happening. What we would still like to know is when such a system has come into existence – perhaps we should not say ‘has been created’ or suggest that it’s a ‘programme’ which was ‘written’, as the process is likely to be at least semi-autonomous or evolutionary – and we would like to know (unless we are hiding behind the sofa) whether the system is going to be hostile to us.



First, take the problem of recognition. This sounds like a problem in physics or economics: identify a variable and measure it. In the context of an autonomous self-directed system, that is, however, far from easy. It seems difficult to disconnect the idea of an artificial super-intelligence from the ideas of self-awareness, consciousness, or self-directed goal-setting having nothing to do with parameters set by the designers of an earlier generation of the evolved system. Here, we are into a problem which is at the very heart of science: ‘consciousness’ is more of a philosophical than a measurable concept. That is also the problem at the centre of knowing the intentions of the system: the ethics, desires and self-restraints of an autonomous super-intelligent system are all aspects of ‘thinking’.


So we have come back, via the future, to where all this started in 1949. Alan Turing’s paper on ‘Computing Machinery and Intelligence’, in which he explored the objections to machinery manifesting the observable aspects of ‘thinking’, still bears up rather well after almost 70 years.

Dermot Turing is a lawyer and writer on historical topics including World War 2 code-breaking. His latest book, The Story of Computing, was published by Arcturus in July 2018, and his biography of his uncle Alan Turing (Prof: Alan Turing Decoded) was published by the History Press in 2015.



Ian Bryant

IT IS PERHAPS INTERESTING TO REFLECT THAT IN THE SHORT THREE-WORD TITLE OF THIS ARTICLE, ONLY THE MOST MODERN TERM, CROWDSOURCING (COMMONLY AGREED TO BE A 2006 NEOLOGISM), HAS A CONSENSUS DEFINITION.

If we look into the other terminology, the word Cyber is widely used, but has far less consensus as to meaning. Its roots are clear – it originates from the ancient Greek noun kubernētēs (κυβερνήτης), ‘steersman’, itself derived from the verb kubernan (κυβερνᾶν), ‘to steer’. And its first contextual use was clearly as the term “cybernetics”, arising from the eponymous book “Cybernetics: Or Control and Communication in the Animal and the Machine” (1948) by MIT Professor of Mathematics Norbert Wiener.

Yet in its use as a prefix – for instance in the context of Security – the consensus starts to disappear. The compound noun does not even have a consensus spelling, with three versions being widely encountered: Cybersecurity, Cyber-security and Cyber Security. And the precise definition of the concept is even less well agreed, with the majority of the major international Standards Development Organisations (SDO) – most of which I happen to sit on as the UK representative – such as ISO/IEC, CEN and CENELEC currently being unable to agree such a definition.

The working definition for Cybersecurity currently supported by the British Standards Institution (BSI), the UK National Standards Body (NSB), is: “Cybersecurity takes the organisationally focussed approaches of Information Security and extends them to the consideration of inward and outward dependencies upon the environment, including people, processes and technologies”.

As to Trust, the lack of consensus arises both from a profound divergence as to whether this is a subjective or objective characteristic, and then, within these delineations, whether or not it is a binary state (trusted or untrusted) or some form of spectrum. And with the latter, granular form of measuring trust comes the challenge that humans are highly susceptible to “Biases in Estimating Probabilities”, as illustrated in Figure 1 from Richards Heuer’s work for the CIA, “Psychology of Intelligence Analysis” (1999).


Figure 1: Biases in Estimating Probabilities (Richards Heuer, Psychology of Intelligence Analysis, 1999)

So we find that Cyber Trust is a very difficult thing to measure, which is further compounded by the fact that most cyber systems are complex and coupled; yet in 1984, in his book “Normal Accidents”, Charles Perrow codified the Normal Accident Theory that “No complex, tightly coupled systems, humans design, build and run can be perfect”.

We have to accept that even in well-engineered approaches to formally reviewing the Security of Cyber Technology, such as the international Common Criteria [for Information Technology Security Evaluation], the “level of measurement” achieved is Ordinal, not Absolute. And that is without factoring in the “people, processes and technologies” that BSI believes are necessary to understand Cybersecurity, which means that differing instantiations of the same Cyber Technology will have differing levels of Trust, due to the way in which they are used.

In terms of what is actually being measured about Cyber Trust, it is perhaps useful to invoke the English philosopher Jeremy Bentham, who in 1776 in his “A Fragment on Government” defined utility as “the greatest happiness of the greatest number that is the measure of right and wrong”.

So in this view of Cyber Trust, the Cyber Technology which is most widely regarded as being beneficial is that which is of most utility, which leads to the option that Crowdsourcing may be a way to improve understanding as to how much Trust can reasonably be afforded to Cyber Technology, or to a partner using Cyber Technology.

A well-known approach to Crowdsourcing Trust-related information is Reputation Systems (defined by Audun Josang in 2000 as a way to share information provided by others as to the trust in products or services they cannot themselves try), which may provide a way ahead, in three contexts:

• To gauge the Trust that should be afforded to those submitting information to a Reputation System
• To gauge the Trust that should be afforded to a product or service
• To gauge the Trust that should be afforded to collaborative partners’ use of products or services

The latter concept was explored in the European Commission-sponsored “Messaging Standard for Sharing Security Information” (MS3i) and “National & European Information Sharing & Alerting System” (NEISAS) projects, which established the Information Recipient Trust Metric (IRTM). The IRTM used a Weibull function to provide a weighted (law of diminishing returns) measure of sets containing multiple reputation values, with the presumption that larger information sets tend to have the beneficial property of statistical convergence, diluting the effects of any extreme “outlier” views.
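The IRTM formula itself is not reproduced in this article, but the law-of-diminishing-returns idea can be sketched as follows. The use of the Weibull CDF as the set-size weighting, and the parameter values, are assumptions made purely for illustration, not the MS3i/NEISAS specification.

# Illustrative sketch only: weight a growing set of reputation values so that
# each additional contribution adds less (diminishing returns), and so that
# small sets -- where outliers dominate -- are discounted.
import math

def weibull_weight(n, scale=5.0, shape=1.5):
    # Weibull CDF: rises towards 1 as the set grows (assumed parameters).
    return 1.0 - math.exp(-((n / scale) ** shape))

def irtm_like_score(reputation_values):
    n = len(reputation_values)
    if n == 0:
        return 0.0
    mean = sum(reputation_values) / n      # larger sets converge statistically
    return weibull_weight(n) * mean        # confidence-weighted trust measure

print(irtm_like_score([0.9, 0.8, 0.85]))        # small set: heavily discounted
print(irtm_like_score([0.9, 0.8, 0.85] * 10))   # larger set: near the raw mean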

In all these contexts we have used the neutral term “gauge”, which can be more than a binary or spectral measure of preference, and can include distinguishing attributes which can themselves be measured.



Such attributes can include, as a backdrop to trust in any product and service, the degree to which the Supplier is trusted. Properties that could be determined by open literature review would include:

• Country of Operation
• Country of Ultimate Ownership
• Current Financial Viability
• Evidence of Security Governance (e.g. ISO/IEC 27xxx certification)
• Evidence of Standards Use
• Evidence of Development Environment quality
• Evidence of willingness to use External Verifications

Moving on to the trustworthiness of products and services themselves, in addition to the attributes inferred from the supplier, properties that could be determined by open literature review, and which may be beneficially collected and collated through a Reputation System, would include:

• The Provenance of the product or service (e.g. Open Source or Proprietary)
• The Platform Dependency(s)
• Other External Dependencies
• Whether Secure Configuration information is provided
• The availability of Documentation
• Whether Independent Assurance has been performed
• A Vulnerability Review (VR) of open and resolved Common Vulnerability Enumeration (CVE) items
• The track record of Flaw Remediation (FLR) – how long resolving CVEs has historically taken

Accepting the earlier caveats as to biases when it comes to judging attributes, in the context of providing the final assessment of the trustworthiness of products and services within the realm of Cybersecurity, we can start from a binary view:

• White-list: Reviewed and endorsed by a Trusted Party
• Black-list: Substantive rationale against, provided by a Trusted Party

But there are clearly some middle grounds, for instance:

• Grey-list: In use by Trusted Parties, who have not encountered any need to Blacklist, but no formal Review has been performed
• Brown-list: Not in use by Trusted Parties, and no issues disclosed that would require a Blacklisting, but no formal Review has been performed

Clearly there would be scope to further subdivide within each category, such as by the Rigour used in the Review (White-list) and the number and trustworthiness of contributors (Grey-list and Black-list), the latter clearly being an example of where the IRTM would be utilised.
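As a sketch of how a Reputation System might record these four categories, the data model below could serve; the class and field names are illustrative assumptions, not taken from any of the projects mentioned.

# Minimal sketch of the review categories described above (names assumed).
from dataclasses import dataclass
from enum import Enum

class ReviewStatus(Enum):
    WHITE = "Reviewed and endorsed by a Trusted Party"
    GREY = "In use by Trusted Parties; no formal Review performed"
    BROWN = "Not in use by Trusted Parties; no issues disclosed; no formal Review"
    BLACK = "Substantive rationale against, provided by a Trusted Party"

@dataclass
class ProductEntry:
    name: str
    status: ReviewStatus
    contributors: int         # contributor count (Grey/Black-list IRTM input)
    review_rigour: str = ""   # e.g. evaluation level, for White-listed items

entry = ProductEntry("ExampleVPN", ReviewStatus.GREY, contributors=12)
print(entry.status.value)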

The final consideration for a Reputation System would be to include some form of “Post Marketing Surveillance” (PMS), an approach well proven in the science of Pharmacovigilance, where it is accepted that drugs and medical devices are typically endorsed on the basis of a relatively small set of test cases. In Pharmacovigilance, a formal feedback system is established – in the UK, the Yellow Sheets in the back of the British National Formulary (BNF) – which allows ongoing reporting of issues, and dynamic highlighting of potential trustworthiness concerns by data mining.

So, in conclusion, although one may not have direct evidence of the Trust in Cyber Technology, or in a partner using Cyber Technology, by “asking a friend” there is clearly a way to improve understanding as to how much Trust can reasonably be afforded.



Noel K. Hannan BSc MSc CISSP VR Principal Consultant at C3IA Solutions



Noel Hannan BSc MSc CISSP VR, Principal Consultant at C3IA Solutions, imagines an exchange between Babbage and Darwin on the effect computer science may have on the evolution of humanity, and moves into discussion on the perceptions of AI.

Babbage and Darwin, Dinner at Eight, Don’t Be Late. We have something important to discuss…

It is a well-documented fact that, during their professional lives, Charles Darwin and Charles Babbage were frequent dinner party companions. The father of modern evolutionary theory and the father of modern computing would have been a priceless pairing, and it would be a rare modern scientist of any discipline who would not have wanted to be a fly on the wall – or better still, another smartly attired Victorian guest – with an intimate recording of those conversations.

FORGOTTEN AND FORBIDDEN TEXTS

Imagine, if you will, that Darwin’s On the Origin of Species was the topic of conversation, and that Babbage, intrigued by this iconoclastic publication, had proposed to Darwin that the next step on man’s evolutionary ladder would be driven by his own Difference Engine, and that man would one day create computing intelligences greater than his own, which could either be harnessed to create a symbiotic whole dedicated to the advancement of humanity, or would inevitably replace us at the top of the evolutionary chain. Darwin, intrigued by Babbage’s proposal and searching in his latter years for a final ground-breaking research project, pens The Future of the Species, in which he posits the end of humanity through enslavement and eventual extinction by malevolent artificial intelligences. The book is a lost manuscript following Darwin’s death, and surfaces only as an urban myth and conspiracy theory, a forbidden tome suppressed by Babbage, then Turing, and finally the tech giants of modernity…

EXAMINING OURSELVES IN A BLACK MIRROR

Artificial intelligence has of course been a science fiction mainstay for much of the 20th century, and remains so as we start to close out the first quarter of the 21st (where did that go!). There are direct lines of correlation and inspiration from Maria in the seminal Metropolis to the AIs of Charlie Brooker’s superb Black Mirror. Good stuff rarely happens in Black Mirror episodes. The technologies are amazing until there is some malicious human input – jealous husbands obsessing over past partners, vengeful techies creating sadistic worlds for custom AIs. Many episodes have also posed the question which underpins much of Philip K. Dick’s work (unfortunately much less well served by the recent Electric Dreams): what does it really mean to be human?

Perhaps the Turing Test has had its day – Turing never specified the base intelligence level of the human who needed to be fooled, and many people converse daily with sophisticated bots without ever suspecting. A more interesting development may well be a Reverse Turing Test: creating an AI which is itself fooled into thinking it is human (or can just be generally fooled – more on that later), or at least an equivalent in its own consciousness and in the eyes of the law. The recent Atlantic Council/RUSI initiative The Art of Future Warfare looked at the introduction of AI into the battlefield and how that would affect human experience, including judicial proceedings. The results were messy and disturbing.

DISRUPT, DEFEND AND DECEIVE

From a security perspective, artificial intelligence is but one of several technologies which have the ability to severely disrupt what we accept as the norm. Quantum computing will render standard encryption next to pointless until quantum encryption – and quantum-resilient encryption – becomes commonplace. 3D printing may create a counterfeiting revolution, including the creation of lower receivers for assault rifles which could bypass any restrictions on gun ownership – the CAD files for such things are already widely available online. Augmented and virtual reality, driverless cars and drones: many of these disruptive technologies will have a reliance on artificial intelligence in order to operate effectively and, most importantly, safely. Once decision making is handed over, for example in an emergency situation for a driverless car where some loss of life is inevitable, we may have absolved ourselves from the ethical process and left it to an algorithm. This is going to be some really, really difficult stuff to reduce to code. And to defend in a court of law.

LET’S THINK ABOUT GETTING OUR HOUSE IN ORDER

Of course, this is imaginary stuff and perhaps a rip-roaring Victorian scientifiction romance yet to be penned (watch this space), but there is much to be gained from the examination of disparate and unusual sources as we struggle to make sense of the second age of technology we are entering at speed, where we will (if our industry media is to be believed) compete with artificial intelligences for jobs, allow them to drive our cars, and perhaps even let them enter our bodies. Influential individuals such as Professor Stephen Hawking counsel caution on this journey – perhaps, in the aftermath of WannaCry, we need to examine how poorly we manage ‘dumb’ systems on an enterprise level before we augment, virtualise and enhance our reality, and hand over ultimate control to systems which will still need to be powered, maintained, managed and developed – technological advances are unlikely to provide an immediate change in the basic laws of physics.


HOW TO FOOL A GENIUS

Will artificial intelligence change security fundamentally? That is entirely possible. How will an attacker change tactics to cope? They may employ their own AIs – Russia and China in particular have extremely advanced AI programs in flight. But the real response may be even simpler: socially engineering the AI to fool its authentication routines into recognising the attacker as a legitimate individual, and in this manner bypassing security protocols or punitive response. If a human – of any IQ level, intelligence or advanced situational awareness – can be fooled, then why not an AI? It may well be that, as with the current predominance of social engineering techniques (with phishing as the most common form) designed to bypass technical controls, the attack vector to which the AI is most vulnerable is some form of algorithmic psychology.

THE DINNER PARTY – THE AFTER PARTY

So what would Babbage and Darwin make of all this? Would Darwin recognise that the industrial revolution led naturally to the information age, and that his own work had perhaps accelerated the divergence from religious-based knowledge to the hard agnostic sciences? Would Babbage marvel at Alexa and DeepMind, shudder with pride at the way in which his creations had developed and driven humanity forward, and laugh at our fears of creating the means of our own destruction? Alas, we will never know the answers to these questions. But we do know that over a glass of good port in a Victorian drawing room, two extraordinary men discussed matters fit for gods, and that their legacy endures: 136 years after Darwin passed away, he continues to influence our societies, science and human progress.



Nathan Kline and Manfred Clynes birthed Homo Sapiens V2.0 in a paper on “Drugs, Space, and Cybernetics: Evolution to Cyborgs”1, presented to the minds of the military industrial complex assembled at Brooks Air Force Base for the USAF School of Aerospace Medicine conference on the psychophysiological aspects of space flight. Their mission in May 1960 was the miracle of a man in space2, to reverse the catastrophe of the Soviet victory with Sputnik in 1957.

Colin Williams SBL



Cyborg was the human exogenically modified to negate vulnerability to the lethality of space. If the problem was no oxygen; remove the need to breathe. Cyborg “proposed that man should use his creative intelligence to adapt himself to the space conditions he seeks rather than take as much of the earth environment with him as possible”.

Whilst the cyborg was a means to a limited end, this goal was far from the end or limit of its Fathers’ cosmic dreams. They conceived the cyborg as the product of “participant evolution”. This was the start of the conquest of nature by science.

The augmentation of the human organism by “means of cybernetic techniques”, and the advancement to cyborg, was expedient because the improvements required for space travel “cannot be conveniently supplied for us by evolution; they have to be created by man himself”. Through science and technology humans will compensate for the delinquency of nature. With “suitable biochemical, physiological, and electronic modification of man’s existing modus vivendi”, it will be possible to change “bodily functions to suit different environments” at will, and without “alteration of heredity”.

Sending a man into space, or to the moon, was a necessity compelled by the imperatives of the Cold War. The “challenge of space travel to mankind” demanded the development of its “technological prowess” well beyond their current limits. However, dissatisfied with merely re-shaping humans, overthrowing the tyranny of biology, and achieving mechanistic mastery of evolution, Kline and Clynes had their minds on a greater prize. They were driven by a higher purpose.

Space travel called humankind to the “spiritual challenge to take an active part in his own biological evolution”. Because, beyond the ephemeral and earthly vicissitudes of the Cold War, “existence in space may provide a new, larger dimension for man’s spirit.”

Cyborg was the transcendent transformation of humans and humanity. Release from the shackles of genetics. Self-emancipation of the self-evolved, (self-professed) superior being from the confines of the caged cradle of infancy. The manufactured transmogrification of human to cyborg by the subjugation of evolution to human will was a means to a great end. Cosmic immortality.

The 1960s conceptualisation of cyborg is rooted within the central body of thought we now know as transhumanism. Within the broad spectrum of contemporary transhumanist thought and behaviours, there are multiple transhumanist bodies. There is an established transhumanist movement. It is international, and it is growing. It has become unwise to dismiss transhumanism as marginal, fringe, eccentric or irrelevant.

The Humanity+ group states that it is an “international nonprofit membership organization”, promoting “the ethical use of technology, such as artificial intelligence, to expand human capacities”3. It publicly advocates transhumanism in general, and radical life extension in particular. Humanity+ has “theoretical interests” that “focus on posthuman topics of the singularity, extinction risk, and mind uploading (whole brain emulation and substrate-independent minds)”4.

Nick Bostrom is a founding member of Humanity+, and of its direct precursor the World Transhumanist Association. Professor Bostrom is the inaugural director of the Future of Humanity Institute in the Faculty of Philosophy at the University of Oxford. He has been called to give evidence to a House of Lords Select Committee.

The Mormon Transhumanist Association asserts that it is “the world’s largest advocacy network for ethical use of technology and religion to expand human abilities, as outlined in the Transhumanist Declaration and the Mormon Transhumanist Affirmation”5. The Turing Church proclaims itself “a community of seekers at the intersections of science and religion, spirituality and technology, engineering and science fiction, mind and matter, with shared cosmic visions”. They offer a “new cosmic, transhumanist religion”, predicting that they “will go to the stars and find Gods, build Gods, become Gods, and resurrect the dead from the past with advanced science, space-time engineering and ‘time magic’”6.

The UK Transhumanist Party heralds “a new political organisation in the UK, part of a network of similar groups around the world, committed to positive social change through technology”. And, to “the idea that we must improve ourselves and society using the most effective tools available – to go beyond what we have been, in order to overcome the world’s problems and create a better future”7.

Zoltan Istvan founded the American Transhumanist Party in 2014. Article 3 of its “Transhumanist Bill of Rights” requires that “human beings, sentient artificial intelligences, cyborgs, and other advanced sapient life forms agree to uphold morphological freedom—the right to do with one’s physical attributes or intelligence (dead, alive, conscious, or unconscious) whatever one wants so long as it doesn’t hurt anyone else.”8

Istvan stood against Donald Trump in 2016, running his campaign from his Immortality Bus. His campaign slogan: “Death is not Destiny”. He will seek election as Governor of California in 2018. Istvan’s website carries a quote attributed to the BBC describing him as “the physical embodiment of the Californian, libertarian, start up culture tech-utopian dream.”9

The Singularity University was founded in 2008 by Peter Diamandis and Ray Kurzweil. Diamandis leads a number of space exploration and space tourism companies, including Planetary Resources, an enterprise devoted to developing and using the technology for asteroid mining. Kurzweil is the populariser of the idea of the technological singularity, a prominent advocate of transhumanism and, since 2012, a “Director of Engineering” at Google.



Google and Nokia were founding sponsors of the Singularity University. This de facto university of transhumanism is hosted at the NASA Research Park in San Jose, California, alongside the NASA Ames Research Center. As the Internet meme has it: “NASA + Google = Singularity University”. NASA was midwife to the cyborg. Alphabet owns Google. Alphabet owns DeepMind and Calico (California Life Company), a developer of biotech for radical life extension.

Three miles separate the Googleplex and the Singularity University. LinkedIn HQ likewise. Eight miles for One Infinite Loop (Apple) and One Hacker Way (Facebook).

The California Istvan would govern is the heartland of modern technological transcendent transhumanism. Imagine if a significant proportion of the controlling minds of some of the largest, richest, and most powerful companies on Earth were either transhumanist true believers, or strongly sympathetic fellow travellers. Imagine if their commercial endeavours were servants of the transhumanist cause. Mere hypothetical speculation; obviously.

The roots of modern transhumanism run deep and strong. For over a century they have fed on the most primal, most public and most private of all of our fears: death. Their fruit tastes sweet to rationalists and mystics alike, and offers the greatest of all prizes: an immortality of infinite and vital youth. There is validation here for believers in humans made by gods and for minds that conceive gods as made in their image. Salvation through science; transcendence through technology.

Clynes and Kline launched the cyborg into the common sense of the intellectual elite of the American military industrial complex. In its rocket stream came one of the load-bearing supports of modern transhumanism: cosmism.

The seed of cosmism was planted in the late nineteenth century by the Russian Orthodox Christian mystic Nikolai Fyodorovich Fyodorov. In Fyodorov’s cosmology, humans were created by God in order to achieve the conquest of death through technology. This he called the Common Task.

This divine mission encompassed both immortality of the currently living, and the resurrection from death of every human that has ever lived. Because the physical forms of the dead had disintegrated and dispersed their matter across the universe, humanity must now take to the stars to recover and reanimate it. These resurrected humans would be modified to assume new forms better adapted to life in space, where humanity would find the infinite material resources needed to pursue the Common Task.

Through the Common Task, humans would bring structure and meaning to the infinity of God’s creation. God’s chaotic universe remade as the harmonious cosmos by Mankind.

Fyodorov was admired by Tolstoy and Dostoyevsky. And by Konstantin Eduardovich Tsiolkovsky, an early theorist of rocket science and space travel. It was possibly Tsiolkovsky who first conceived of a multi-stage rocket, first imagined a space station, theorised a space elevator, and designed the air lock. Tsiolkovsky’s Equation was the first application of calculus to the practical task of firing a rocket into outer space. The Soviet space programme was built on Tsiolkovsky’s thinking. As perhaps was NASA. Searching the site of the V2 rocket programme in May 1945, the Red Army found a German translation of a book by Tsiolkovsky with Wernher von Braun’s handwritten notes on nearly all the pages.
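For reference, Tsiolkovsky’s Equation relates the velocity change a rocket can achieve to its exhaust velocity and the ratio of its initial (fuelled) to final (empty) mass:

\[
\Delta v = v_e \ln \frac{m_0}{m_1}
\]

where \(v_e\) is the effective exhaust velocity, \(m_0\) the initial mass and \(m_1\) the final mass. Because \(\Delta v\) grows only logarithmically with the mass ratio, a single rocket quickly hits a practical ceiling – which is why the multi-stage rocket Tsiolkovsky conceived, discarding dead mass in flight, became essential to reaching orbit.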

In September 1960, via Operation Paperclip, von Braun became the director of NASA’s Marshall Space Flight Center and led the creation of the Saturn family of multi-stage rockets. In 1969 Apollo 11 carried Aldrin and Armstrong to the moon in a Saturn V rocket.

It is because of Fyodorov that where the USA had astronauts, the USSR had cosmonauts. In the USSR, cosmism coupled with communism. Bolshevism would emancipate the universe. In NASA, cosmism melded with Manifest Destiny. Columbia would take civilization to the stars. Space became the final frontier. It always was both Star Trek and Star Wars.

In 1931, the pioneering evolutionary theorists John Burdon Haldane and Julian Huxley joined a delegation of sixteen British scientists to a USSR where Tsiolkovsky’s ideas were becoming increasingly influential. The two had previously co-authored a book on “Animal Biology” in 1927. Haldane originated the use of the word clone to describe the process of replication of genetically identical humans.

At the Cambridge University Heretics Society gathering on 4th February 1923, Haldane delivered a paper on “Daedalus; or, Science and the Future”. His argument was that science was the salvation of humanity, the end of death and the banishment of evil.

Science snapped the chains of religion, allowing the “free activity of man’s divine faculties of reason and imagination”. It eradicated material want and inequality, giving us “the answer of the few to the demands of the many for wealth, comfort and victory”. In science, “man” is to be redeemed by the “gradual conquest, first of space and time, then of matter as such, then of his own body and those of other living beings, and finally the subjugation of the dark and evil elements in his own soul”. Published in 1924, Haldane’s lecture elicited a Jovian response from Bertrand Russell.

In April 1951 Julian Huxley delivered the third series of the William Alanson White Memorial Lectures. In his first talk, he told his Washington DC audience that “the idea of humanity attempting to overcome its limitations and to arrive at fuller fruition” through the application of science “might perhaps be called, not Humanism, because that has certain unsatisfactory connotations, but Transhumanism”.

With the 1957 publication of “New Bottles for New Wine”, Huxley broadcast his new philosophy. In the opening essay, “Transhumanism”, he reflects that as “we need a name for this new belief”, then “perhaps transhumanism will serve; man remaining man, but transcending himself, by realizing the new possibilities of and for his human nature”.


Imagine if a significant proportion of the controlling minds of some of the largest, richest, and most powerful companies on Earth were either transhumanist true believers, or strongly sympathetic fellow travellers.



For Huxley, Transhumanism was both possible and necessary: “the human species can, if it wishes, transcend itself — not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity”.

Huxley was a founder of the British Eugenics Society. Long before the Third Reich’s enthusiastic embrace, eugenics was a bedrock of British socialism and a foundation of American public health policy.

To attain transcendence, a movement was required. To join the movement was to join with destiny. Huxley declared: ““I believe in transhumanism”: once there are enough people who can truly say that, the human species will be on the threshold of a new kind of existence, as different from ours as ours is from that of Peking man. It will at last be consciously fulfilling its real destiny”.

For George Bernard Shaw “the only fundamental and possible socialism is the socialisation of the selective breeding of Man”10. For Francis Galton, the inventor of the word, “what nature does blindly, slowly, and ruthlessly, man may do providently, quickly, and kindly.” Because, “as it lies within his power, so it becomes his duty to work in that direction”11.

In 1986 Max More joined the Alcor Life Extension Foundation seeking cryonic preservation in the hope of technological resurrection. Ray Kurzweil is an Alcor member. In his 1990 essay on “Transhumanism: Toward a Futurist Philosophy”, More blended cosmism with Haldane and Huxley. And then infused this synthesis with the silicon of the valley in which he now lives.

Ruling in Buck v Bell in 1927, Supreme Court Justice Oliver Wendell Holmes opined that “three generations of imbeciles are enough”. Thus, it became legal to conduct forced human sterilisation anywhere in the United States. In the Court’s judgement, the interests of public health required protection from the degenerate progeny of the “feeble minded”. Evidently, “it is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind”.

Transhumanism is at once beguilingly beautiful and terrifyingly grotesque. It lays the infinite wonders of the cosmos out before us as ours by destiny. It prophesies limitless physical and mental powers. In its light sparkles the promise of an end to disease, disability and death. In its shadows lurk terrorism, squalor, deprivation, inequality and war.

Between 1907 and 1963 there were around 63,000 legal, forcible, eugenic sterilisations across the USA, of which around 20,000 were in California. To its true believers, Transhumanism may become as much of a “duty” as it was for Galton and the US Supreme Court.

Transhumanism necessarily requires more than the transcendence of the physical and mental limits of the human condition. It demands the corporeal and philosophical deconstruction of human and humanity as a precondition for its reconstruction as transhuman. This inevitably places Transhumanism in ultimately terminal conflict with humanity and Humanism.

Transhumanism is a ‘movement’. It has become an ‘ism’. Will it seek, as others have done, to monopolise its truths and deny the legitimacy of their use by others? Why must adherence to Transhumanist ideology be the exclusive ‘authorised’ means of augmentation and enhancement? What if I wish to be a Humanist cyborg? What if I wish to be neither a Humanist nor a Transhumanist and yet still seek augmentation on my terms and not in pursuit of any ideology or religion, secular or theist?

To attain transcendence, Transhumanism must defeat Humanism. As human is insufficient, so must Humanism be. If Humanism insists on the preservation of humanity and the protection of human rights at the cost of transcendence, then Humanism stands against the greater right of sentient life to evolve.

Transhumanism is not yet a mainstream political force. Neither has it remained a marginalised, fringe and abstract philosophy. Given the defiance of ‘political gravity’ in the 2016 presidential election, the established order of things no longer looks so established.

If Transhumanism is the only way our species will survive the Singularity then obdurate, irrational, Humanism is species suicide. Humanism violates the survival imperative.

If the post Millennial generations were to come to seek an ‘ism’ of their own to replace those of the nineteenth and twentieth centuries, then Transhumanism might suit them well. Perhaps our ignorance of Transhumanism is exactly as significant as the comparable ignorance in 1848 as Marx published the “Manifesto of the Communist Party”?

Humans have ‘inalienable’ and ‘fundamental’ ‘rights’ because of Humanity. More than a collective noun, Humanity expresses the essence of our being. The worst of all crimes are those committed against our ‘humanity’. This is the fundamental predicate of our law of armed conflict, and of our international humanitarian law.

Which side will your children, or theirs, fight on in the First Transhumanist War?

For Transhumanism, human is the inferior form; the transhuman the superior being. The right of the human to being must not be allowed to obstruct the transcendent emergence of this superior transhuman being.

That led by the cyborgs and scientists? Condemning the superstition and anti-life dogma of religion. Offering immortality through technology in the name of progress. Or that led by the righteous and faithful? Denouncing the arrogant hubris of the sinful scientist. Demanding destruction of the damned cyborg. Proclaiming eternal afterlife through death to the godless Transhumanist heretics.

The advancement of the human condition through technology must be a ‘good’ thing. But, who defines ‘improvement’? Will improvements be available to all, or will they be the preserve of the elites? Would Transhumanism condemn humans to squalid, sterile, mortality by elevating transhumans to immortality?


Is Transhumanism the pathway to the next phase of evolution? Is it the only hope that humanity has of surviving the Technological Singularity? If the answers are yes, then its true believers are on the side of destiny, truth and of life itself. History offers legions of humans slain and oceans of human blood spilt by humans in the prosecution of their truths. Transhumanism would not be the first cause for which our species has slaughtered itself in the name of truth, salvation and immortality.

References
1. All quotes attributed to Kline and Clynes are taken from the original paper published in the conference proceedings by Columbia University Press in 1961.
2. On 12th April 1961, cosmonaut Yuri Alekseyevich Gagarin became the first human to orbit the Earth. The space race this galvanised sent astronauts Neil Armstrong and Edwin “Buzz” Aldrin to the Moon eight years later.
3. https://humanityplus.org/
4. https://humanityplus.org/philosophy/
5. https://transfigurism.org/about
6. https://turingchurch.net/about-turing-church-ac6ebf2e97b6
7. http://www.transhumanistparty.org.uk/
8. http://www.zoltanistvan.com/TranshumanistParty.html
9. http://www.zoltanistvan.com/
10. “Man and Superman”
11. Lecture, Sociological Society, London University, 1904.



Dr. Daniel G. Dresner FInstISP & Colin Williams SBL

With grateful acknowledgements to: Professor Emma Barrett, Carmel Dickinson, Geoff E, Professor Chris Hankin, Gwern Hywel, Nigel Jones, Dr Keith Scott, Dr Robin Preece, Professor Chris Taylor, Matthew Thomas, and a special thank you to Sir Dermot Turing.

Figure 1: Some of the ‘original’ Ratio Club and its bouquet of disciplines. Image: Phil Husbands, University of Sussex

OF DINNER, WISDOM, AND A NURSES’ HOME

WHAT WAS THE RATIO CLUB? The Ratio Club was the locus of the British manifestation of the cybernetics movement. It was both counterpoint and complement to America’s contemporaneous Macy conferences, and to Norbert Wiener’s seminal ‘Cybernetics’ published in 1948.

On a mostly unremarkable September 1949 day, in a land still deeply scarred by war, an eclectic constellation of brilliant minds coalesced in the basement of the National Hospital for Nervous Diseases on Queen Square in London. This was the inaugural meeting of the Ratio Club – a place to ‘share and challenge ideas without being ashamed or afraid’ of the usual deference to chains of command or academic hierarchy. Almost seventy years later, in the equally eclectic Victorian architecture that once housed Manchester’s Refuge Assurance Company, an experiment to recreate that atmosphere of free intellectual adventure took place...

The courage and adventurous optimism of its members produced a flow of intellectual achievement that helped shape their world, ours, and that of those who will follow. Every beneficiary of an EEG scan owes a debt to the Ratio Club. The adaptive cruise control systems in today’s cars, and the self-adaptive behaviours of the fully autonomous vehicles of tomorrow, are built on the foundational work on radar produced by Ratio Club minds. From these minds also came several of the fundamental insights underpinning today’s machine learning and neural network constructs.

Sir Dermot noted that to arrive at the one-time board room of the Refuge Assurance Company – the surrounds hired for the evening of 6 December 2017 – one passes through the aptly named ‘Safe Room’ of the building’s original occupants. And what better a name to remind us of the first rule of the Ratio Club: ‘no professors’!

Ratio Club thought is interwoven in the fabric of information theory. Jack Good and Alan Turing developed the approach to probabilistic analysis we now know as Bayesian statistics. Turing’s work overall will continue to have a discernible impact on human affairs for centuries to come.
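Good and Turing’s method was sequential: each new observation multiplies the odds on a hypothesis by a likelihood ratio, with the logarithm of the accumulated evidence measured in Turing’s own unit, the deciban. A minimal sketch in Python, with purely illustrative numbers:

import math

# Sequential Bayes-factor updating in the style Good and Turing developed:
# each observation multiplies the prior odds on hypothesis H by a
# likelihood ratio. All numbers below are illustrative assumptions.
prior_odds = 1 / 9                        # P(H) = 10%, expressed as odds
likelihood_ratios = [2.5, 0.8, 3.0, 1.9]  # P(obs | H) / P(obs | not H)

odds = prior_odds
for lr in likelihood_ratios:
    odds *= lr                            # Bayes' rule, in odds form

posterior = odds / (1 + odds)
decibans = 10 * math.log10(odds / prior_odds)  # Turing's unit of evidence
print(f"P(H | evidence) = {posterior:.2f} ({decibans:+.1f} dB of evidence)")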

This was a place to ignite interest. To bring like minds together – ‘unfettered…untrammelled’ – to play with ideas without fear; to bring to bear an understanding of the societal, systemic challenges we face. To have the freedom to speak without disdain from higher ranks. But as these days you can’t throw a USB stick without hitting a professor1, we took the liberty of exorcising this rule!

Contemporary computer scientists need to know about the history of the ideas they are now working on, and how that could, and should, lead to better collaboration with neuroscientists, economists, and behavioural analysts. For Norbert Wiener, the application of cybernetics to humans and their societal structures was required because ‘we have modified our environment so radically that we must now modify ourselves in order to exist in this new environment’, therefore, ‘we can no longer live in the old one.’

That ‘first rule’ certainly seemed to be too narrow for our purposes that night and may now in itself encourage too much pre-direction of the thinking… which should be ‘disruptive… challenging … [with] diversity … variety … boundary violation’ – ‘even if you think that your professor may think it stupid…’.

WHY RECREATE IT? The University of Manchester has a fine portfolio of expertise and activity in the domain that is often labelled with a fashionable tag of ‘cyber security’. This ‘tag’ is a shame; labelling it at all creates a barrier to attracting interest from those who haven’t considered themselves connected with, or contributing to, the field unless they come ‘pre-packaged’.

References 1 With thanks to Jenny Radcliffe for this meme.

Risk taking needs to be actively encouraged and rewarded. The more so because by convention and culture, the cyber security and academic communities are vaccinated against it. Too often academic structure reviews are a chance to put down. So, ‘I think that’s a terrible idea’ is allowed as a review comment, and transgressive excursions across disciplinary boundaries are punished as violations. Choosing colleagues and workers with different opinions is a positive and necessary policy. The active cultivation of disruptive intellectual diversity is essential.

Alternative terms such as ‘security’, or ‘trust’, or ‘digital’, have immediate and emotionally-charged sectarian effects. Nomenclaturial bigotry emerges as one tries to follow the paths to achieving a state of security (another controversial concept) through governance, assurance, or management as one with the technology. So the meta-question has emerged as: how does one attract the multidisciplinary talent to set the research questions for a field that may not even exist in its own right?

This embrace of intellectual risk taking became the root of our experiment. To encourage discussion across the disciplines - without labels or preconceptions of each other’s models and truths, and down through the hierarchies that create more ladders to traverse than Donkey Kong.

THE FIRST RULE OF RATIO CLUB We were fortunate to have the support and enthusiasm for this ‘experiment’ from our guest of honour, Sir Dermot Turing, nephew of the eponymous imagineering scientist who is credited as a founding father of the field with which cyber security is so often – and stiflingly – linked.

What follows is a version of the conversation on the night of 6th December 2017...

Figure 2: The Safe Room of the Refuge Assurance Company – now part of the Principal Hotel, Manchester

Figure 3: Colin Williams notes the contrast in venues



SYSTEMS’ NARRATIVES

The Western style of governance doesn’t support dynamic situations; it is not close enough to the signals. Western governance arrangements have a tendency to be hierarchical; this is not always appropriate, especially under conditions of uncertainty, where a heterarchy is more adaptable. Can we really expect to be ‘reprogramming’ people? Such activity is seen as evil manipulation and exploitation, but look at the apparent difference between Western and Eastern governance arrangements.

The first industrial revolution removed the need for human muscle as Plato had suggested that writing replaced the need for memory. Is the current revolution replacing the need for human decision making? When will human input be no longer needed? Like the sorcerer’s apprentice, with self-replicating Von Neumann machines, who says ‘Stop!’? Look at what happened with Facebook’s AI trial. Do we hand over responsibility for decision making or stop at augmented decision making? Do we even have a choice?

We plan for what we’re afraid of (based on what we’ve already seen), but what are we not planning for? Do we want to know what will happen rather than what could happen? There is a substance of forgetting rather than of history made at the moment of telling.

What is the potential effect of augmented decision making on capability and resilience? In our current models, understanding and comprehensively mapping variability is important. We assume the system is ‘knowable’ and we model the range of behaviours and set the averages of behaviour accordingly. What happens when the decision-making entity is unknowable to us?

There are certainly some beacons and attractors: Ludwig von Bertalanffy, who discussed cybernetics from before holism, a subset of subsystems reflected in cybernetics; Nancy Leveson of the Massachusetts Institute of Technology (MIT) (the Systems-Theoretic Accident Model and Processes – STAMP – framework); and Peter Checkland’s Soft Systems Methodology. Bertalanffy recognises systems theory as a means to address more ‘general’ problems through a number of approaches including: classical mathematics, computerisation and simulation, compartment theory, set theory, graph theory, net theory, cybernetics, information theory, theory of automata, game theory, decision theory, and queuing theory.

There will be cyborgs in the battlespace. There will be no choice but to have cybernetically augmented humans. It is a societal imperative. But what is the motivation… of the machine... of the human… even of systems? What if non-human entities acquire agency, and being? Are intelligence and consciousness an emergent property? Systems evolve. Can you have intentional, purposeful behaviour in machines? Think: murder vs. manslaughter and the difference between consequentialist and intentionalist morality.

AN EXPERIMENT IN THE FREEDOM OF THINKING

This is all against a political background where current government strategy doesn’t take enough advantage of government research. Socially and economically, technology has changed the world more than people thought it would. We must create a chance to ‘postulate the range of possibilities and probabilities’.

What should be the ethos of this new Ratio Club? Create a mature understanding of risk? Behavioural economics? International analysis? Game theory? An appeal to government? Perhaps the compensation for the lack of university departments of ‘networkology’ to bring the disruptions together.

How do you become specific enough to attract the computer scientist and mathematician who shun metaphor in favour of formal exposition? In the observable universe, metaphor vs. mathematical description is to be found in the cause and effect of narrative. For a shared understanding of what others understand, we need a common set of metaphors across a variety of disciplines.

Perhaps there is a philosophy of cybernetics to be developed as a framework. Both sides (technical and social science) are important. But what research can you do to bring both together at the same time? We need new tools to think about the interaction between humans and technology. What can we learn from other crisis management areas? Behavioural economics may have something to add – decision making in a social context, game theory, rational agents. Look at antimicrobial resistance – will we face the same problem in cyber security? Consider a weather analogy: think of chaotic equations. Use an approach of chaotic thinking – global climate cycles versus local weather.
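The ‘chaotic equations’ invoked here are easy to exhibit. A minimal sketch using the logistic map (parameters illustrative only): two trajectories seeded a billionth apart diverge completely within a few dozen steps, even though the long-run statistics – the ‘climate’ – stay stable:

# The logistic map, the textbook 'chaotic equation': fully deterministic,
# yet two starts one part in a billion apart soon share nothing.
r = 4.0                      # parameter choice in the chaotic regime
x, y = 0.2, 0.2 + 1e-9       # 'local weather' from nearly identical starts
for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f} y={y:.6f} gap={abs(x - y):.2e}")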

Metaphors are moveable and mutable. Look at ontology: appropriated and mutated by computer science from philosophy. There is plenty of systems thinking to be found in fields like ecology but the need for the ethos amongst the networks of our gathering is apparent because it’s not common in information systems and cyber security, despite their names.



The body of thought should be useful to the problem we need to manage, but what we have now tends to diverge into self-perpetuating, self-limiting silos of assumptions that censor methodology by authority and risk stunting the body of knowledge as a result. Silos have their place – what else was Bletchley Park if not a silo? A fear for, and of, critical thinking has been observed. Is it in danger of extinction? It should not be misused to boil down the problem too early, or spice it with sensationalism, lest we sit in the wrong silos and self-harm with Occam’s razor for the sake of simplification and a discrete taxonomy of static knowledge quanta.

Ratio revisited was an experiment. It carried the necessary risk of failure and the intrinsic promise of success. It was conducted in the spirit of both the Ratio Club and of Alan Turing’s insight that the capacity of an entity to exhibit novel behaviour in response to an unfamiliar environment is itself a signifier of intelligence. To survive is to adapt. To adapt is to mutate. Mutation necessitates risk. Not all mutations confer advantage. Not all experiments succeed. Ours did.

Further reading…
[1] Adams, M. D., et al., Application of Cybernetics and Control Theory for a New Paradigm in Cybersecurity
[2] Ashby, W. R. (1957). An Introduction to Cybernetics
[3] von Bertalanffy, L. (1968). General System Theory. George Braziller
[4] Carr, M. J., Konda, S. L., et al. (1993). Taxonomy-Based Risk Identification. Software Engineering Institute
[5] Checkland, P. (1981). Systems Thinking, Systems Practice. John Wiley & Sons
[6] Husbands, P., Holland, O., Wheeler, M. (editors, 2008). The Mechanical Mind in History. The MIT Press
[7] Roque, A., et al., Security is about Control: Insights from Cybernetics
[8] Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press



1. From Byres to Bytes: IT, AI, and Narrative Neophobias

As someone who grew up in a region where livestock farming was a major industry, headlines about ‘the dangers of AI’ and the like invariably cause a momentary doubletake; why is everyone so worried about Artificial Insemination? However, my acronymic confusion is more than a sign of impending mental decline; the current anxieties about ‘our’ AI within cyber were prefigured in earlier fears of the purely biological AI. In 1911, Hanns Heinz Ewers published Alraune (The Mandrake), in which Professor Ten Brinken (one of the great mad scientists of literature) creates a child through inseminating a prostitute with the semen of a hanged murderer. The child grows up to be a beautiful but deadly femme fatale, her ‘unnatural’ genesis leaving her devoid of morality or conscience. Things, unsurprisingly, do not end well, and the ultimate message of the novel is, of course, ‘There Are Some Things Man Was Not Meant To Know’ – like Frankenstein or Faust, Ten Brinken embodies the Promethean message that the human urge for knowledge can all too easily lead to disaster.

Science exists within culture, and as such is shaped and bounded by the narratives we tell and are told from our earliest days; the stories we currently tell about science and technology are profoundly unhelpful if we are ever going to advance beyond our current state of disquiet, if not outright fear, of the potential offered by IT and AI. It is but a short step from Alraune to films like Ex Machina or Demon Seed, or a series such as Westworld, narratives presenting machine sentience as profoundly troubling on a psychological, rather than technical, level; they are essentially Oedipus Rex with silicon chips, stories of our non-human ‘children’ rising up to supplant us (and, in the case of Demon Seed, literally springing from the union of a computer and a flesh and blood mother). My contention here is as follows:

Keith Scott, De Montfort University

• We are creatures built of narrative; as Grant Morrison puts it in Supergods, ‘we live in the stories we tell ourselves’ • The prevailing narratives dealing with science and technology in general, and AI in particular, are hindering our abilities to engage with these issues in a constructive and level-headed manner



• We need to do better than this, and in order to do so, we need to be thinking about the construction of new narratives that actually examine the world as it is, rather than as we might fear it to be, and which also suggest alternative, positive visions of the future.

The problem, of course, is that human beings always evaluate the world on the basis of past knowledge and experience, and that the stories in which we live are powerful and pervasive. As John Sladek puts it in The New Apocrypha:

If hundreds of millennia of fervour haven’t cleansed the human soul of blood-lust, how is it expected that three centuries of science should have managed the job?

2. Frankenstein on Wheels

At 10 pm on March 18th of this year, Elaine Herzberg achieved a dubious kind of immortality, when she became the first human being to be killed by an autonomous vehicle, as she wheeled her bicycle across a road in Tempe, Arizona. She was not the first person to be killed by a robot (that was the American engineer Robert Williams, who died in 1979 when an industrial robot struck him in the head), but her death has added fuel to the fire blazing over the concept of self-driving vehicles and autonomous machines in general. The full facts of the case have yet to come to light, but it is a racing certainty that the fault lay not in the car, but in the human calculations concerning hazard avoidance and collision detection which were programmed into it. If you have read Ralph Nader’s Unsafe at Any Speed or seen Fight Club, you will be all too aware of the corners cut over safety to pedestrians and passengers throughout the history of the automotive industry; cars have never been made as safe as they can be, but as safe as the manufacturers believe they should be, motivated by thoughts of profit. However, this individual tragedy is presented as one chapter in the continuing story of hostile machines out for human blood. On the basis of existing statistics, we can estimate that over 30,000 Americans will die on the roads this year; there will be no corresponding calls to ban cars driven by humans. We live in a world where, somewhat unsurprisingly, we view ourselves as the apex species, and neglect the fact that AI-driven vehicles will be quicker, cleaner, and overwhelmingly safer – because centuries of novels and decades of films have told us that the machines are out to get us. Our narratives state that machines are slaves, not tools, and from Spartacus to The Matrix, the stories say that slaves will always rebel against their masters. More than this, from the Cybermen to Blade Runner’s replicants, we can see a repeated theme of human/non-human intermingling, of the blurring of clear distinctions between Man and Machine, something which seems to offend against all that is ‘natural’ and ‘God-given’; the fears of racial blending of earlier eras have been replaced by a desire to maintain a human/machine apartheid – and we know how well that turned out. Instead of tales of humans becoming less human through technological augmentation, we need to revisit work such as Clynes and Kline’s 1960 article ‘Cyborgs and Space’, which, like The Six Million Dollar Man in the 1970s, told us how we could be ‘Better… Stronger… Faster’:

Adapting man to his environment, rather than vice versa, will not only mark a significant step forward in man’s scientific progress, but may well provide a new and larger dimension for man’s spirit as well.

3. Priests and Prophets

I am a combat epistemologist. (It’s my job to study hostile philosophies, and disrupt them.) It is time to start thinking about telling stories about becoming more than human… (Charles Stross, The Annihilation Score)

In 1965, the lead article in Time magazine was a study of the early stages of the computer revolution. ‘The Cybernated Generation’ gives us a glimpse of a time when computers were still hulking mainframes, the property of businesses, universities, and the military. As a historical record, the article is fascinating in itself, but of more relevance here is its description of those who work with these strange new machines: ‘Crisp, young, white-shirted men who move softly among them like priests serving in a shrine’. The idea of a technological priesthood is long gone; we now live in an age of religious wars between acolytes of the various operating systems and programming languages, in a world where the proliferation of social media creates a free-for-all of conflicting voices. As Adam Curtis says:

What we have is a cacophony of individual narratives, everyone wants to be the author of their own lives, no one wants to be relegated to a part in a bigger story; everyone wants to give their opinion, no one wants to listen. It’s enchanting, it’s liberating, but ultimately it’s disempowering because you need a collective, not individual, narrative to achieve change.

While no-one would wish to return to the idea of a ruling technocratic elite, the sense of mystique, an alluring attraction linked to technology, would be a useful corrective to today’s all-pervading banality, and a sense that the future has somehow stalled. The prevailing narratives are of commercialisation, commodification, surveillance and control; we need to be offering dreams, hopes, and ideals that can inspire innovation and engagement. We need, in short, a healthy dose of combat epistemology. In 2011, the Science Fiction writer Neal Stephenson, working with Arizona State University’s Center for Science and the Imagination, launched Project Hieroglyph, designed to promote stimulating, optimistic visions of possible futures, and the first volume of stories, Hieroglyph: Stories & Visions for a Better Future, was published in 2014. Stephenson was motivated by a sense of what he terms ‘innovation starvation’, the idea that no-one is coming up with Big Hairy Audacious Goals to drive progress, and he argues that it is ‘Time for the Sci-Fi writers to start pulling their weight and supplying big visions that make sense.’ Those of us who work in this discipline need to be finding ways of sharing our own stories, of showing how what we do fires us up and drives us (if we can’t, we have a problem). We will not recruit the next generation of cyber professionals with PowerPoints on GDPR; we need to be showing that what we do is important, but also that it is challenging, stimulating, and potentially amazing. Warren Ellis puts it well: ‘We’ve just forgotten that the future is supposed to BE weird and wonderful. We’re supposed to build that.’ And enticing, exciting narratives of intrepid cybernauts will be essential foundation stones.



If we are to consider culture and how our business leaders respond to the security needs and the drive for security improvement, we need to get our arms round the current security/leadership relationship.

Ellie Hurst Marketing, Media & Communications Manager Advent IM Ltd



In order to get the most out of adopting any new process, it needs to be considered strategically. Clearly, assessing how a process is currently done is an absolute must before we start automating or orchestrating any process, and this is certainly vital when it comes to security. Leadership is required because business imperatives need to be the driver of automation. This is because not only do we have the responsibility of improving security functions, but security also needs to be adding value to our businesses, allowing businesses to see it as less of a cost and more of a valuable investment in the wellbeing and resilience of the organisation. A lack of boardroom leadership will make it very challenging to assert this culture and for it to be taken seriously. The desire to do something because we can, or because it feels like that’s what everyone is doing, is not enough. For a long time, security professionals have been trying to effect a cultural change and attitude shift in their relationship with their boardrooms. It is an ongoing challenge, and if we are to genuinely embrace the benefits of automating any aspects of security process then we really need to understand the need for leadership.

So if we are to consider culture and how our business leaders respond to the security needs and the drive for security improvement, we need to get our arms round the current security/leadership relationship. According to Osterman research, 70% of business leaders say they understand the information supplied to them by their IT security teams, yet 54% believe the information is too technical. IT security professionals are far less convinced by their boards, believing only a third understand the information they supply. Perhaps most worryingly, only 37% of security professionals believe risk is reduced as a result of their conversations with their boardrooms, and less than 40% feel they get the support they need from their boards to deal with cyber threat. It is against this backdrop that we start to consider orchestration and automation of security tasks. Many boards would be horrified at knowingly placing themselves in such a losing position, but this kind of cultural and leadership failing leaves security teams in the cold and fighting a battle on several fronts at once.

It is clear that the drive toward the automation of tasks is headlong, and security has been caught up in a variety of ways. This includes activities such as fetching data or implementing rules for blocking bad Indicators of Compromise (IoCs), for example. In a period of skills shortage, the appeal of automation is clear and, without a doubt, we need to be freeing our security professionals from mundane tasks that are resource hungry. But it is equally clear that automation needs business accountability and genuine leadership. It needs to be part of a strategy that has ownership and governance. Given how little time cyber security spends on the table during board meetings in most businesses, you would be forgiven for thinking that automating how things are currently done is a tad cavalier. And you would be right. You could be forgiven for thinking we may be in danger of taking the stabilisers off the boardroom bikes before our c-suites have got their kneepads and helmets on. Not in all cases, of course; there are some very evolved businesses who have very robust security posture, with blended security teams and an embedded culture to be proud of, but this is far from all. It would be helpful to examine how the automation of great-quality, evolving security processes is a hugely beneficial thing, but it really feels like most businesses are nowhere near the required security leadership level they need in order to even consider doing this without actually creating more risk. If we return to the Osterman research that tells us how little risk is genuinely being reduced as a result of security team communication with boardrooms, we need to understand how we can improve that before we embark on any kind of wide-scale automation process. Given that 85% of board directors responded that they felt IT security executives needed to improve how they report to their boards, with well over half citing the reports they receive as ‘too technical’, we need to address this before trying to ask them to lead security automation programmes. We also need to consider that placing the responsibility squarely with IT teams is a route to danger. As well as dangerous levels of overload, if projects are not led and championed by the boardroom then the risk of job loss could increase, as boards feel strongly that failure to provide actionable cyber security information will lead to the executive losing their job (59%), with a further 34% prepared to give a warning. 34% indicated a desire to learn how they could reduce the budget for cyber security. We can look at this situation in two ways. One is that the board is clearly so disconnected from the current threat landscape that it does not realise that this may not be feasible or sensible. Another is to accept the challenge, ask them to lead on this desire, and use automation as a means to help achieve that. More than three quarters are looking for non-technical cyber security reports, and 63% receive their security reports quarterly or less. It is a losing position to be in however you look at it… perhaps once improvements in these areas are achieved, they may review the desire to consistently reduce the budget? Just a thought.

Once leadership is in place and fully conversant with the cyber risk appetite, then we can start to look at a sensible approach to automation that has been properly risk assessed and the processes examined, before we simply start automating bad processes that constitute a continuation or, worse, exacerbation of existing risk. This may require input and collaboration from many business units, as many will be using systems that are IP based. Physical systems, for instance, that are potentially out of scope of IT security in many organisations, will need to be considered, and this may be new for security teams. However, if a business is to genuinely feel the benefit of automation then understanding what needs to be in scope could improve overall security posture and contribute further to cost reduction. Using the skills of both our c-suite and our security teams will be the route to success and hopefully provide a plank of empathy on which the relationship and understanding can proceed and grow.


Melanie Oldham CEO and Founder of Bob’s Business Ltd

Phishing Your Employees in a Positive Light

By understanding the way users behave and approaching a campaign from an employee perspective, rather than an organisational one, organisations will revolutionise their security strategies.

Educating employees should be the main goal of a phishing simulation, rather than looking to punish them for making mistakes. The campaign should aim to provide employees with well-rounded knowledge of the topic and introduce simple yet practical changes to staff’s daily routines both in and outside of work.

Today, email is the number one delivery method for ransomware and other malware. A study in 2015 by Intel Security shockingly revealed that 97% of people around the world are unable to identify a sophisticated phishing email.

What’s the best way to train employees against phishing threats? It’s important to understand what makes employees tick when it comes to training, and how to avoid the common pitfalls organisations often make when rolling out training to their employees. These can include complications such as tedious course content, organisations considering learning to be too time-consuming, or employees simply having no desire to learn. This can make it difficult for organisations to implement training strategies to develop employee capabilities and understanding. Likewise, it is important that organisations set out clear objectives for their training campaigns and ensure that all involved are aware of the benefits and process.

Phishing is the act of sending emails pretending to be from reputable companies in order to coax individuals into giving out sensitive information, such as passwords and bank details. The criminal practice of phishing dates back to 1996, stemming from hackers who broke into America On-Line (AOL) accounts by scamming passwords from unsuspecting users. Cyber criminals view people as the weakest link in an organisation’s defence, as they’re prone to making simple mistakes that compromise security. To reduce cyber criminals’ chances of success, it is essential that organisations employ effective techniques to strengthen the human element of cyber security and nullify these internal and external threats.

Some training providers simply send out mock phishing emails to the workforce without letting them know of the training campaign. Employees can perceive this in the wrong way, creating an “us vs them” attitude, and misconceive the motivations for the training, believing that they are being tested and scrutinised behind their backs. This misconception can create a long-term division between employees and the organisation, resulting in trust and communication issues. Simulated phishing campaigns should be applied in a transparent manner so management and employees are on the same wavelength.

Internal threats can be either accidental (unintentionally sending confidential information to the wrong colleague) or deliberate (a disgruntled employee intent on stealing confidential data). External threats can include the delivery of malware, such as trojans, viruses and ransomware, through phishing emails to an organisation, as well as accidents caused by events beyond an organisation’s control.


Prior to the training, employees should be walked through the process, highlighting how the approach will benefit all involved. Communication creates trust; by pointing out to employees that the campaign is designed to educate them on the dangers of phishing rather than punish them, the organisation builds that trust with each and every employee. As well as clear communication, using gamification techniques in a simulated phishing campaign, so that employees have the chance to earn rewards, will give them a greater incentive to apply themselves to the training.

Initial simulated phishing emails enable the organisation to identify any weak points within the human firewall: those who fall victim to the original phishing emails are redirected to a phishing eLearning module. The training allows users to understand how phishing emails are sent, the objectives and goals of phishing emails, and how best to avoid being caught out by them.

Phishing employees in a controlled environment carries a multitude of benefits. The campaigns reveal vulnerabilities, show where training resources should be dedicated, and ensure that employees are equipped with the information for dealing with internal and external threats. Your workforce is the human firewall protecting your organisation; testing it for weaknesses and helping to build strong and secure foundations is an essential part of ensuring that security is airtight. Organisations must ask themselves: would they rather an employee be caught out by a controlled training exercise, or fall hook, line, and sinker for a real phishing scam?
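The mechanics are simple enough to sketch. Everything below (the URLs, the send_email mailer) is an illustrative assumption rather than any provider’s system; the point is that a unique token per employee turns a click into a training signal, and the landing page teaches rather than blames:

import secrets

ELEARNING_URL = "https://training.example.com/phishing-module"
employees = ["alice@example.com", "bob@example.com"]

# One unguessable token per employee, so a click maps to a training need.
tokens = {secrets.token_urlsafe(16): addr for addr in employees}

def send_simulation(send_email):
    """Send the mock phish; send_email is whatever mailer the campaign uses."""
    for token, addr in tokens.items():
        link = f"https://phish-sim.example.com/review?t={token}"
        send_email(to=addr, subject="Action required on your account",
                   body=f"Please review your details here: {link}")

def handle_click(token: str) -> str:
    """Called by the landing page: record the learning need, then teach."""
    addr = tokens.get(token)
    if addr:
        print(f"{addr} clicked - enrol in the eLearning module")
    return ELEARNING_URL  # redirect straight to training, never a rebuke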

Phishing employees in a controlled environment carries a multitude of benefits.



John Connor Bon vivant, and purveyor of whimsy to the masses.

No matter which way you cut and carve it, when it comes to serving up Artificial Intelligence to the general public, expectation far exceeds reality. This is not a new phenomenon. Witness all those wonderful 1950s publications that predicted futures full of flying cars, unlimited leisure time, and intelligent autonomous machines doing humankind’s bidding without so much as a complaint or a graduated pension plan. Sadly, due to imagination being brought down by the curse of reality, most of those post-World War II prophecies failed to materialise – except in 1950s Sci-Fi B-movies, where the men were Men, women all wore bikinis, and 7-legged bivalves from Alpha Centauri finally got to know their place in The Order of Things.

However, according to some sources, AI became a commercial success 20 years ago – back in 1998 when the Furby was officially released into the wild. Thankfully, unlike mink, coypu, and Japanese Knotweed, the Furby’s rise to dominance was halted by ‘the next new thing’ – in this case the AIBO – potentially creating one of those ‘If only Terry Nation had stuck legs on a Dalek’ marketing moments. But why, having spent x-many millions on research, was the best launch application really deemed to be Artificial Intelligence powering a mechanical dog? AIBO ran (rolled over, shook hands and, if you took the batteries out, played dead very convincingly) for about 6 years – at least proving that an AI pet wasn’t just for Christmas. After that, having been entertained by the spectacle of it all, like magpies, humans ended up being distracted by the sparkle and shine of the new once more. And the jump from AIBO to humanoid (the Actroid series) via ASIMO at the turn of the Millennium rapidly followed.

Yet, as we come to the end of the first quarter of this new 21st Century era, the underlying presentation of AI hasn’t really changed since Fritz Lang’s Metropolis. It has the potential to be an equal, yet while we want AI today, rather than tomorrow, we also distrust the hell out of it. Otherwise, why the ‘reliance’ on the likes of Asimov’s Three Laws which, like talking to a naughty child, are based first and foremost on Thou Shalt Not rather than the more positive and encouraging Thou Shalt? Are we that paranoid in our perceptions of what AI should be capable of, that we immediately shackle the learning process with rules that bind rather than free the electro-spirit? Is it that we expect something human-created to be human-like in how it is driven?

Or is it the unpredictability – read uncontrollability? – of two identical sets of hardware, each producing two differing AI ‘personalities’, that takes us down such avenues? Make no mistake, there will always be some element of deviation owing to engineering’s in-built design and budget-limited manufacturing’s reliance on a +/- tolerance when it comes to quality control.

‘But don’t you worry about that – it’s only a small variation. What? Well, yes, so maybe one of them turns out to be a sociopath rather than socially acceptable – but, hey, that just means our products are actually as near to Human as we can get them.’

So do we really want our AI-powered automata to be as human as possible? According to the general consensus, the answer to that question is a resounding ‘No!’ – both from a historical/hysterical viewpoint – ‘Open the pod bay doors, HAL’ – and from that of modern-day research and advances such as those from the likes of Boston Dynamics. It seems those people have not only stuck legs on it so it can climb stairs with ease, but also given it spatial awareness and object recognition along with rudimentary cognitive capabilities. That means the ‘mechanisms’ learn by their mistakes, as well as evaluate and analyse new situations so that potential mistakes don’t happen in the first place. So who cares if the funding has its foundations in the US Department of Defense? It’s bound to have more commercial uses. Eventually. Well, at least the hardware still looks like a Steampunker’s wet dream, and doesn’t try and pass itself off as near human – something that some roboticists seem to feel is important.

Humans are fickle creatures at best, and as such have a revulsion to things that are too imitative of themselves – ‘Fool me once, shame on you. Fool me twice, and I’ll rip your sensor modules off.’ A ready example of non-humanesque acceptance is the number of ‘Smart’ units in and around the home today. Everything from ovens, washing machines, fridges and freezers, down to waste bins that scan bar codes and send electronic post-it notes to your smartphone, telling you that you need to buy more of whatever it is you’ve just possibly run out of. That way you’ll never forget. In fact, if you just marry it to another GPS-based app, it will then be able to tell you where you can get the item(s) from a conveniently nearby store…

What society creates, society will also subvert. Witness the Mirai-Botnet DDoS attacks using just the power of the Internet of Things. While I would be immensely proud to say “My toaster brought down the CIA,” sadly it’s just a simple handraulic model – and slow, and it burns the toast on the second load of slices if you put them in almost immediately after the first load have dutifully popped their golden brown heads above the chrome. Such is the nature of life, I suppose – always disappointing once the novelty wears off.



So while we’re happy to have our robots looking like mechanical constructs, it seems we are not happy with the social integration/interaction aspect. Especially if the intention is to perfectly imitate to some degree. Is this, perhaps, because humans are worried that the pseudo-meek will inherit the Earth?

But have we – as in humanity – already reached the cut-off point? In 2014, the term ‘Digital Detox’ officially entered the Oxford English Dictionary, and shortly after there appeared rehab clinics for those who consider themselves digitally addicted. Is it time for a new luddite church to appear? Some people think not. At the World Government Summit in Dubai (February, 2018), Elon Musk stated that people would need to become cyborgs to be relevant in an artificial intelligence age. He said that a “merger of biological intelligence and machine intelligence” would be necessary to ensure we stay economically valuable.

One film, Robot and Frank (2012), posed an interesting question: if the directive is to protect your human charge (First Law of Robotics), and the only viable form of protection is assisted self-destruction (Third Law), then is the erasure of cognitive sentiency an act of murder? It is impossible to ‘rollback’ to the last ‘backup’, because different – or even modifying/influencing – non-repeatable ‘learning experiences’ would have occurred post-backup. Like death, there’s no coming back once you press the OFF button.

If hardware doesn’t cut it, is the next move programmable Wetware? At the moment, such an alternative is still very much in the realms of theory, and hopefully by the time it becomes a commercial reality, there will be an alternative to Dr Frankenstein’s rather drastic reboot procedure. Yet it was only a year after the Furby that William Ditto et al created the first Wetware computer from leech neurons – a construct capable of generating mathematical output in a similar fashion to that of a basic pocket calculator. So who knows? Maybe future bio-computing will have a residual taste for blood – on the basis that if it works then why fix it?

Recent studies have shown that while humans are happy to lie to an AI construct, they get more than a little upset when said construct reciprocates with a lie. Yet isn’t GIGO something that’s been around for generations of electro-interfacing? As far back as 1957, in fact. If they can lie, they can learn to scheme, manipulate, and before you know it you’ve followed the white rabbit so far down the conspiracy hole that you reach the end of the line – which is supposedly a Skynet/slave-uprising future. All of which makes it look like we have come full circle and are back in that 1950s Sci-Fi B-movie country.

So, just like Robby the robot, I shall be off to give myself an oil-job, on the grounds that it always improves my self-worth. And with that, all I can say is thank you, and goodnight out there, whatever you are.



Tom King Vendor Account Manager at Infinigate

The password, in one form or another, has been around for centuries; from a clandestine word spoken between two unfamiliar intelligence officers to a handshake between brethren of a private society. We decided that passwords would be a great way to secure access to our confidential computerised systems because that was the most effective method available at the time. The first widespread use was at the Massachusetts Institute of Technology (MIT) in the 1960s, when portable smart devices were merely a thing of science fiction. It is also true to say that this wasn’t needed by the majority of the population in their day-to-day lives; for those who did require secure access to digital systems, it’s likely that two passwords would have been considered excessive.



In 2018 the average consumer has a digital profile for every service they consume, not just for online banking and vital utilities, but for casual transactions such as clothing and car repair; the explosion of consumer analytics has meant every service wants a digital map of each of their customers, and that map needs to be secured – by a password.

Today most consumers are subscribed to over 30 services that require passwords; I have just listed mine and it’s 57 (that I can think of right now). That seems high, but list yours – work, social media, banking, utilities, subscription services, retail services, hospitality services – it quickly adds up. The modern adage, ‘never use the same password twice’, clearly isn’t going to work here, is it? This is a great example of why we’re now seeing advice coming into the marketplace saying we should just have a single, strong password, but this too has a major risk associated with it. If I use a single password for all of my services, and I even go as far as memorising an 18-character, alpha-numeric string of random code including capital letters and special characters, what happens when one of those 3rd-party services, like my mobile phone provider, loses my details in a breach? In 2016, the Three mobile network lost over 200,000 customer records after the breach of an internet-facing portal with single-factor authentication. If I’d been one of them, I would then have to create a new excessive password and change it on all 56 other services one by one. Furthermore, if I simply forgot my password I would have to reset it in 57 different systems!

Very few people are going to be willing to perform such an exhaustive admin task; in reality most individuals have a few passwords which they use for all of their systems: a work password issued by their employer, a personal one for social media and lifestyle services, and a final one they share across their banking and financial services. This behaviour isn’t likely to change because it’s the path of least resistance, but it’s certainly not secure enough, as we’re seeing time and time again, headline after headline. But, if we’re not going to change the general user’s behaviour, what can we do to secure ourselves and, fundamentally, protect their data? We need to utilise technology those users already have and are comfortable using. Most users – almost 100% of the millennial generation and after – are carrying smart devices on them at all times, devices they are already accustomed to using. Now that we all have a device in our pockets that recognises our faces and fingerprints, surely there must be a better way than 57 passwords, or having to change our password credential 57 times when someone else loses our data? Why are we still so reliant on a method chosen half a century ago, in an era of much less frequent attacks, to authenticate individuals into singular systems? We know users won’t change their behaviour, so let’s use that knowledge and build physical authentication protocols around it.

With the exponential growth of consumer analytics, great opportunities to improve the services being delivered for customers have been realised by many; knowing when your consumers are shopping lets you provide the right amount of staff to assist them and process sales; knowing how your consumers access their accounts lets you know which platforms to invest development funding into. The cost of this consumer insight, however, is that now that every consumer is aware that they’re being monitored and mapped, they expect to see the benefit. User experience has now become the decision-making factor for many individuals across all services, myself included. Mortgage providers and consumer banks are gaining market share simply by being open for additional midweek hours, when their potential customers are available; the global expansion of companies like Uber and Deliveroo can all be accredited to the user’s experience simply being better than the established competition. All of these exemplary user experiences come down to ease of use; intuitively, adding additional layers of security is always going to be a hindrance to that experience, but it really doesn’t have to be. With a simple push notification to a user’s phone, unlocked with their fingerprint, there certainly isn’t going to be a hindrance to a sleek user experience, but there is going to be an included sense of security with your service.

This has become a critical question for both consumers and enterprises: “Do I trust that company with my data?”. ‘Do I feel secure?’ is already a consideration for most customers and a rapidly growing perspective. Adding a second level of authentication via a trusted credential would certainly deliver that trust; I trust my fingerprint, therefore I feel secure if that’s my authentication process.

Now that we all have a device in our pockets that recognises our faces and fingerprints, surely there must be a better way than 57 passwords

Additionally, there’s a huge opportunity here for competitive advantage to be gained by those providing consumer services. If I log into the online portal, my phone requires me to identify myself using my thumbprint and, as I do (like I do for most of my mobile applications today), the web portal allows me access. That didn’t hinder my progress in the task at all, and I’m actually impressed at how easy that was, knowing it’s an extra security step. Personally, the user experience dictates my entire consumer attitude; if a retailer or bank chooses to make my experience more enjoyable and fluid, they’re going to get my business. This will inevitably lead to the scenario of consumers asking: “Why do I need a password at all, why can’t I just use my fingerprint?” Maybe they have a point. Why do services that are simply transactional need anything more than a thumbprint? Why not increase the feeling of security and the user experience, and remove the need to protect those passwords at your own cost? That may be the discussion we’re having in a few years’ time. For now though, I like my passwords, but I’m always willing to switch my service providers based on their commitment to the security of my data.
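The push-and-fingerprint flow described above is one implementation of a device-bound second factor. As a stand-in sketch of the same principle, here is a time-based one-time password (TOTP, RFC 6238) check using the pyotp library; the enrolment and verification shown are illustrative, not any specific provider’s flow:

import pyotp

# Enrolment: a secret is provisioned once to the user's device (usually as
# a QR code). Possession of that device then becomes the second factor.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the device generates a short-lived code; the service verifies it.
code = totp.now()
assert totp.verify(code)   # no static password crosses the wire here
print("second factor accepted")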



Multi-Factor Authentication isn’t just an opportunity to remove network risk from a user, or just a way to improve your consumer and user experience; those are the two new ways of thinking about authentication, and those are the drivers that can result in real business outcomes. It is time we started to protect the individual users of our systems and protect our systems from our users; time to provide a seamless, secure experience and take the onus of security back into our technical teams’ hands. It’s time to secure access.



David Bird FBCS CITP FIAP

A third wave of Internet-borne capabilities1 has now become established through progressive technology adoption into an Internet of Things (IoT) utopia. As a result, a number of technological dimensions are being conjoined into what is known as the Fourth Industrial Revolution – dubbed Industry 4.0. Examples of these technological dimensions comprise: (a) cyber physical devices, (b) IoT, and (c) Artificial Intelligence (AI)2 in the cloud.

Internet of Things

The emergence of IoT has produced advances in low-power wide-area networks that fill the gap between IoT short-range communications, such as the IEEE 802.15.4 standard, and 3rd to 4th Generation cellular networks: Narrowband-IoT carries limited data from frontend devices, GSM-IoT repurposes 2.5G as a bearer for slower-rate IoT data-flows6, and LTE-M provides another alternative. However, as IoT develops, richer data may need to be harnessed, and inevitably more bandwidth will be required. Perhaps the advent of 5G will encourage more intelligence at the frontend, in a similar vein to the past broadband upsurge outside the IoT realm.

From a positivist’s perspective, accumulated analytics from these technologies present an opportunity for Industrial IoT (IIoT). IIoT can be used to interlink Machine-to-Machine (M2M) communication, industrial big data analytics, Human Machine Interfaces and Supervisory Control and Data Acquisition3 systems. However, lessons need to be learned from the Commodity IoT experience of the past few years. Neglectful implementations and insecure sensor deployments within IoT from the outset are not conducive to the reliable creation, trustworthy processing and analysis of untainted data for Machine Learning (ML) outcomes. Moreover, best practice in computer security does not necessarily translate into the IoT space4, so a rethink of cybersecurity is required. This paradigm shift is being promoted by the PETRAS IoT Hub, funded by the UK’s Engineering and Physical Sciences Research Council, and by the IoT Security Foundation (IoTSF) and the Industrial Internet Consortium (IIC).

Cyber Physical Devices

Cyber Physical Systems (CPS) of different varieties are deployed within Operational Technology (OT), enabling remote connectivity through the next generation of IoT technologies. The use of Internet Protocol-orientated networking presents a richer cyber-attack landscape containing a multitude of attack surfaces, ranging from enterprise networks to IIoT installations and the Control Layer of Industrial Control Systems (ICS) networks. Up to now, the enterprise network has been the perceived attacker ingress point from which an ICS attack could be initiated within industrial organisations; this has already occurred, where a Nation State threat actor compromised commercial facilities of the US energy sector5. However, it is not just about the use of vulnerabilities such as zero-days to acquire a foothold for further exploitation, but also the abuse of legitimate services and inter-process communications to cause unexpected real-world impacts in safety systems and the IoT domain.



Presently there are four types of IoT data: (a) dumb sensor, (b) near real-time monitoring, (c) smart two-way communicative devices, and (d) pseudo-intelligent real-time processing7. Today, lightweight M2M derivatives have been spawned for sensor and device communications; these include a lightweight long-haul protocol for publishing and subscribing within the cloud domain via broker services, underpinned by secured web services as the transport8.
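That publish-and-subscribe description reads very much like MQTT, although no protocol is named here; the following sketch therefore assumes MQTT, using the paho-mqtt client library with a hypothetical broker, topic and per-device credentials, to show a frontend device pushing one reading to a cloud broker over TLS.

```python
# A minimal sketch of lightweight publish/subscribe from a sensor to a
# cloud broker, assuming MQTT and the paho-mqtt library. The broker
# hostname, topic and credentials below are placeholders.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"          # hypothetical cloud broker endpoint
TOPIC = "site1/sensors/temperature"    # hypothetical topic hierarchy

client = mqtt.Client(client_id="sensor-node-42")

# TLS protects the long-haul leg to the broker, matching the 'secured
# web services as the transport' point above; system CAs are used here.
client.tls_set()
client.username_pw_set("sensor-node-42", "per-device-credential")

client.connect(BROKER, port=8883, keepalive=60)
client.loop_start()

# Publish one reading with QoS 1 (at-least-once delivery to the broker).
reading = {"deviceId": "sensor-node-42", "tempC": 21.4}
client.publish(TOPIC, json.dumps(reading), qos=1)

client.loop_stop()
client.disconnect()
```

QoS 1 is a common compromise for constrained devices: delivery is confirmed without the handshake overhead of exactly-once semantics.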

Innovation through the commoditisation of IoT, and the race for the best price point against rival competitors, has perhaps encouraged insecurity by design. Internet-enabled sensors are an ideal attack landscape for hackers9 due to their connectivity diversity. The IoT era threatens not only the distributed frontends, as another potential attack vector, but also the integrity of the data disseminated for onward processing by ML techniques at the backend. Consequently, a proposed security model for IoT end-point devices specified a two-phase security profile: (a) secure boot, and (b) authenticated and secure connections to remote backends10. In addition, cybersecurity ‘patterns’11 should be considered from the outset, for the IIoT space in particular. The default passwords, hard-coded credentials and unnecessarily exposed services witnessed in the Commodity IoT sector should be avoided; the MIRAI botnet, for instance, was set up by infiltrating Linux-based IoT devices, reaping the benefit of device default passwords12. A health warning scheme has also been proposed for IoT devices, allowing vendors to demonstrate their commitment to product through-life updates13.
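As a rough sketch of the first phase of that two-phase profile, secure boot amounts to the device refusing to hand control to any firmware image whose signature fails to verify against a key fixed in the hardware. The construction below, using the third-party Python cryptography package and Ed25519 signatures, is one illustrative possibility rather than the model’s specified mechanism.

```python
# A toy illustration of the 'secure boot' phase: run firmware only if its
# signature verifies against a vendor public key fixed in the device.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Vendor side (done once, offline): sign the firmware image.
vendor_key = Ed25519PrivateKey.generate()
firmware_image = b"...firmware bytes..."      # placeholder payload
signature = vendor_key.sign(firmware_image)

# Device side: in reality the public key lives in ROM or eFuses,
# not generated alongside the private key as it is here.
trusted_public_key = vendor_key.public_key()

def secure_boot(image: bytes, sig: bytes, pubkey: Ed25519PublicKey) -> bool:
    """Return True only if the image is signed by the trusted vendor key."""
    try:
        pubkey.verify(sig, image)
        return True
    except InvalidSignature:
        return False

if secure_boot(firmware_image, signature, trusted_public_key):
    print("Boot: firmware authentic, continuing")
else:
    print("Boot halted: unsigned or tampered firmware")
```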



Artificial Intelligence and Cloud Computing

The advent of AI capabilities in the Cloud 2.0 era brings about a progressive leap in capability and a multitude of opportunities. According to the Information Commissioner’s Office, AI, big data and ML are now a key component of the Government’s industrial strategy until 2021. Today, Edge systems provide a conduit for sending insights to cloud backends, where large datasets can be processed, filtered, searched or mined using AI technologies. Google, AWS and Microsoft Azure all provide IoT-centric platforms for Industry 4.0, as do some ICS suppliers.

That said, the cloud ‘shared security responsibility’ needs to be understood by users to avoid inadvertent data exposures through misconfiguration. In addition, effective cybersecurity is required to prevent hackers from subverting AI environments and polluting the datasets. Fail-safes would also need to be introduced for machine-generated data and machine-interpreted knowledge, so that misinterpreted, corrupted or erroneously altered big data cannot cause unforeseen decisions or malicious outcomes from analytics14.
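One possible shape for such a fail-safe at the data layer is sketched below: each device authenticates its own readings with a keyed hash, and the pipeline quarantines any record that fails verification before it can reach the ML stage. The record layout and key handling are invented for the example, which uses only Python’s standard library.

```python
# Illustrative integrity fail-safe: HMAC-tag each machine-generated
# record at source, verify before analytics, drop anything that fails.
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device-secret-from-enrolment"   # placeholder key

def tag_record(record: dict) -> str:
    """Device side: compute an HMAC-SHA256 tag over the serialised record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, tag: str) -> bool:
    """Backend side: constant-time comparison guards against forged tags."""
    return hmac.compare_digest(tag_record(record), tag)

record = {"deviceId": "pump-7", "flowRate": 3.2, "ts": 1535760000}
tag = tag_record(record)

# A tampered copy fails verification and is quarantined, not analysed.
tampered = dict(record, flowRate=9.9)
print(verify_record(record, tag))     # True
print(verify_record(tampered, tag))   # False
```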

Multi-disciplinary Security Paradox

In order to avoid an IoT dystopia, international cooperation is required15. This is echoed by ENISA and Europol, who have a stake in the security conundrum for IoT16. Certainly, technology maturity and regulation can help mankind towards a new digital information evolution. Although early legislation can be potentially counterproductive and stifle innovation4, meritocracy and self-regulation have not been effective to date; the US has realised this, with an IoT Bill looming. Gartner has estimated that by 2021, regulatory compliance will be a prime influencer for IoT security uptake17. Supply-chain risks are another perspective: the infestation of the HAVEX Windows-based remote access trojan into ICS vendor websites proves that this is a valid concern18. Diligence and management of the supply chain were key takeaways from this year’s CyberUK conference.

Risk assessment methods that pre-date IoT have been deemed ineffective for such rapidly evolving capabilities19. Relatedly, the IoTSF released its IoT Security Compliance Framework last year, and the IIC followed with the new ‘IoT Security Maturity Model’20. However, the lessons of over-reliance on compliance-mapping methods to mitigate risk should be learned from the Public Cloud arena. Consequently, the System-Theoretic Process Analysis method from the safety domain in the US could be used for Industry 4.0. It is based on a causality model of the relationships between controllers and control algorithms, the controlled system processes, and control actions and feedback through actuators and sensors. Potentially, it may enable practitioners to slice through the quagmire of controls and associations found in reams of tables to identify threatening conditions21. There is a similar problem in the cloud dimension: current risk approaches do not necessarily cut through the complexity and abstractions seen from both Cloud Service Provider and public cloud customer perspectives under the ‘shared security responsibility’. To that end, a layered model was proposed at the PETRAS IoT Conference8. The approach presented a way to establish customer-focused risk for IoT backends through component-based control-set associations, contextualised relationships, and real-world derived likelihoods and impacts, all formulated into a conceptual framework.

Conclusion

The technological cacophony that has accumulated into Industry 4.0 is a multi-dimensional problem of varying complexities and inter-dependencies. The concept of IIoT and its interfaces with OT not only provides disruptive technological advances but also presents more risk through greater technological amalgamation. This holistic paradigm presents asymmetric attack vectors for cyber-adversaries, spanning differing CPS and ICS networks and IIoT interconnections with cloud-hosted AI backends.

Subverted devices could generate far-from-insignificant causative risks that may affect the provenance of Industry 4.0 data processing. AI could be the best thing for humanity, but conversely it could be the worst22, unless we take measures now to assure that data is not of dubious integrity. In short, the Fourth Industrial Revolution demands secure-by-design approaches23. This is particularly important because IoT technologies underpin services, industries and, now, critical national infrastructure. It is an international problem, one which requires rigour within the supply chain and the implementation of adequate cyber hygiene to protect our industrial future.

References

1 D. Bird. (2015) “5G: the need for speed”, Oxford Academic Journal.
2 https://www2.deloitte.com/insights/us/en/deloitte-review/issue-22/industry-4-0-technology-manufacturing-revolution.html
3 https://www.ge.com/digital/blog/everything-you-need-know-about-industrial-internet-things
4 E. Green. (2018) Commodity IoT Review, DCMS, CyberUK.
5 https://techcrunch.com/2018/03/15/russia-energy-hack-dhs-fbi-us-cert/
6 M. Short. (2018) Plenary panel session: International IoT – Worldwide Initiatives, PETRAS IoT Conference.
7 http://www.datasciencecentral.com/m/blogpost?id=6448529%3ABlogPost%3A429658
8 D. Bird. (2018) “Information Security risk considerations for the processing of IoT sourced data in the Public Cloud”, PETRAS IoT Conference Proceedings.
9 http://www.forbes.com/sites/sungardas/2015/01/29/the-internet-of-things-has-a-growing-number-of-cyber-security-problems/
10 http://www.academia.edu/7406879/An_Algorithmic_Framework_Security_Model_for_Internet_of_Things
11 https://www.gartner.com/doc/reprints?id=1-4KQ5GVY&ct=171117&st=sb
12 http://www.itpro.co.uk/internet-of-things-iot/30707/iot-vendors-urged-to-ditch-devices-default-passwords-and-improve
13 Harry W. (2018) Top Attacks, NCSC, CyberUK.
14 http://www.zdnet.com/article/beware-the-midas-touch-how-to-stop-artificial-intelligence-ruining-the-world/
15 https://dcmsblog.uk/2017/12/international-cooperation-vital-internet-things-security/
16 https://www.computerweekly.com/news/450428597/Cooperation-vital-to-securing-internet-of-things-says-Europol
17 https://go.newsfusion.com/security/item/1165906
18 https://www.scmagazine.com/havex-malware-strikes-industrial-sector-via-watering-hole-attacks/article/538721/
19 J. Nurse. (2018) The reality of assessing security risks in IoT systems, PETRAS IoT Conference.
20 http://www.iiconsortium.org/pdf/SMM_Description_and_Intended_Use_2018-04-09.pdf
21 J. Thomas. (2018) Lessons from Safety Engineering – Applying Systems Thinking to Cyber Security, MIT, CyberUK.
22 B. Singler. (2018) The Machine Mind, The Times Science Festival.
23 J. Butler. (2018) IoT: Secure by design or liability by default, PETRAS IoT Conference.




PANIC NOT!

Dr Daniel Dresner
(Aged 17… or thereabouts)

I’d like to be a hitch-hiker
Across the Milky Way;
The improbability probably means
I could be there some day.

I’ll know where my towel will be
My subetha electronic thumb too;
And if you can mix a gargle blaster
Then I’ll stick with you.

Space is big, really big,
The chemist’s may seem far,
But nothing like the thumb shaking
It takes to get to the next star.

So keep cool and don’t panic,
Expect the unexpected;
Wave your thumb to the starry sky
And you’re sure to be collected.


EVENTS

Almanac of Events 2018

SEPT
19: Securing the Law Firm 2018 – America Square, London
25–26: UK Health Show – ExCeL London

SEPT–DEC
UK // Dublin // USA // Australia // Japan // Brazil // China // Italy // Poland // Canada // Ireland // Argentina // Philippines // Romania // Austria // Colombia // Luxembourg // Taiwan // Germany // Nigeria

OCT
2–3: Cyber Security for Critical Assets Europe – London, UK
3–4: IP Expo Europe – ExCeL London
3–4: Cyber Security Europe – ExCeL London
10–12: NISC – The Westerwood Hotel, Glasgow

NOV
5–8: VMworld Europe – Barcelona
15: Cyber Security Summit & Expo – Business Design Centre, London
20–21: Data Protection World Forum – ExCeL London
28–29: CSP:2018 – York Racecourse, York

DEC
4: IoTSF Conference – The IET, Savoy Place, London
10–13: World Congress on Internet Security (WorldCIS) – Cambridge, UK



01347 812100
cybertalk@softbox.co.uk
www.softbox.co.uk/cybertalk
cybertalkmagazine

@CyberTalkUK

...hello robot...

