Harvesting AI for the good of security in the UK
Cyber attacks have become a daily occurrence, almost expected in today’s risk-aware society, but what of the future? Tim Williams believes that we will be engaged in cyber warfare for the rest of our lives, and that the eventual outcomes will all come down to AI.
Who will be able to act the fastest? Who will form the most productive behind-the-scenes, cross-border strategic alliances, so that human thinking stays ahead of future cyber attacks? And who, by maximising the application of artificial intelligence (AI), can counter cyber terrorism by building the best mousetrap?
Living with AI and facing facts
How do we define terrorism? It has to be considered in the context of people’s lives: one man’s freedom fighter is another man’s terrorist. Cyber terrorism is just one facet of attacks on persons or facilities that are perceived as aggressive and, potentially, as terrorism.
As technology becomes ever more efficient and refined, AI will help us to identify millions of attempts to attack our networks - but what of the negative effects? AI has been in use to some degree for decades, starting with the missile and anti-missile programmes of the 1950s and ‘60s. In the 1970s, during the Yom Kippur war between Israel and its Arab neighbours, Israel directed groups of some of the earliest drones towards enemy firepower, drawing out its location so that Israeli bombers could subsequently take out the opposition.
This trend will continue ad infinitum, with attackers targeting specific military installations, law enforcement or intelligence communities. AI could conceivably - and over time will - be able to identify a specific person, in a specific crowd, through facial recognition: that is the challenge we have to contend with. We run the very real risk of creating a dangerous obsession with the physical aspects of cyber security - but the very fast evolution of AI means we need to be prepared to operate in a whole new dimension, because, ironically, our AI vulnerabilities are also going to be addressed through artificial intelligence.
AI is already helping us to identify millions of attempts to attack networks or organisations, and is becoming more efficient minute by minute. In fact, it’s a commonly held view that AI will take over many jobs currently undertaken by human workers - security roles and other automatable positions among them - although in turn there will be a growing requirement for data experts and scientists to control and direct AI applications and functions.
Robots can mimic human behaviour, but AI is currently incapable of actually replicating many human traits, including conceptualisation, complex strategic planning and work that requires precise hand-eye coordination. Crucially, AI cannot interact with humans in the way that other humans do: with empathy, human-to-human connection and compassion. But do we have a handle on AI’s limitations, and on its undoubted potential?
It's all about the maths
AI is based on data - ultimately, maths. We need to become far better at putting risk analysts and security experts together with the data scientists responsible for shaping AI’s development, and at doing so far more efficiently. Coders need help to understand how new technology can be maximised for positive use and, conversely, how it can be exploited for negative purposes. The danger is always that devices we utilise for the benefit of security could also be used against us, expanding the universe of risk.
And what about autonomous vehicles, which create a whole new level of concern? There is certainly the potential to replace human terrorists and suicide bombers with remote-controlled technology, establishing a whole new threat level for world leaders.
An Australian mining group has recently unveiled what has been described as the world’s largest robot - a fully automated rail network featuring trains that run entirely free from human intervention, each making a round trip of roughly 800km in around 40 hours, including loading and delivering cargo.
Set against this the fact that in 2018 a runaway train in the same country travelled over 90km without its driver before being forcibly derailed - an incident that cost the vehicle’s owner many millions of pounds, though thankfully no lives. That was an accident; imagine how devastating a calculated cyber attack could be if any kind of autonomous vehicle were hacked.
It's not rocket science
To effectively manage the human impact, security experts will have to work hand in hand with the data scientists who are building the relevant neural networks, maximising the opportunities of AI whilst mitigating its dangers.
Taking safety, security and environmental considerations into account will require people who understand the technology and how it’s developed. We need people who can work together, across boundaries, quickly and efficiently, driving a change in the way we structure not only security but allied functions, so that we become more agile and responsive.
Dangerous liaisons: Cross-border co-operation
Successful mitigation of risk will come down to who does the best job of getting the right coders working with the right data scientists and the right security experts to develop the right product - and, of course, to the amount of investment in such projects. We’ve seen over the past few decades that nation states are willing to co-operate in sharing technologies and vulnerabilities in order to pool resources. The whole, in this case, is often greater than the sum of its parts.
At home, the UK government has recently announced a new £110 million programme of Masters courses in AI, coupled with work-based placements backed by both government and industry investment. In addition, UK troops on the front lines are to be supported by palm-sized drones developed by the Ministry of Defence, with around 200 miniature drones deployed on the battlefield to take over the life-threatening surveillance and reconnaissance duties currently undertaken by soldiers.
The MoD’s investment in robotic systems is estimated at around £66 million ($87 million). This is great news for the armed forces, but could be viewed as a drop in the ocean when it comes to the long-term investment required in AI technology.
We're in it together - is AI there with us?
We need to do better: speed up the turnaround of relevant intel and information; bring together the right teams of people, internationally where necessary, to harness the relevant skills; and be prepared for change at a pace never seen before. The complexity of the challenge is increasing day by day, but so is the positive potential of AI that is just waiting to be harnessed.
There’s now an AI-powered wheelchair that uses facial expressions to guide its movement - a development that will revolutionise mobility for disabled wheelchair users. This is a fascinating time to be alive and I, for one, can’t wait to see what’s in store for us.
The potential to use and build on AI for the benefit of humankind far outweighs the associated risks - but only if we anticipate those risks and dangers and incorporate them into our ongoing AI strategy. We need to defend against bad AI with good AI - that is our future.
Further information
www.pinkerton.com