
A NEW AGE OF CYBERSECURITY


BRIAN PINNOCK, CYBERSECURITY EXPERT AT MIMECAST, WRITES AI CAN BE A POWERFUL TOOL IN MIDDLE EAST’S CYBER DEFENCES

Technologies such as artificial intelligence (AI), machine learning, the Internet of Things and quantum computing are expected to unlock unprecedented levels of computing power.


These so-called Fourth Industrial Revolution (4IR) technologies will power the future economy and bring new levels of efficiency and automation to businesses and consumers.

AI in particular holds enormous promise for organisations battling a scourge of cyberattacks. Over the past few years cyberattacks have been growing in volume and sophistication.

The latest data from Mimecast’s State of Email Security 2022 report found that 90% of companies in Saudi Arabia, and 94% in the UAE, were the target of an email-related phishing attack in the past year. Three-quarters (76%) of UAE companies also fell victim to a ransomware attack, with six in ten organisations in Saudi Arabia reporting the same.

AI adoption gaining ground

To protect against such attacks, companies are increasingly looking to unlock the benefits of new technologies. The market for AI tools for cybersecurity alone is expected to grow by $19 billion between 2021 and 2025.

Adoption of AI as a cyber resilience tool is growing among companies in the Middle East. A third (34%) of companies in Saudi Arabia currently make use of AI or machine learning (or a combination of both), and half of survey respondents said they plan to adopt AI in the next year.

In the UAE, AI adoption is greater, with nearly half (46%) of organisations already using a combination of AI and machine learning.

The UAE has also formalised its intention to become a global leader in AI by 2031 with its National Strategy for Artificial Intelligence. This strategy sets out an ambitious path for how the UAE will leverage AI and increase the country's competitiveness in priority sectors. In addition, the strategy seeks to drive the adoption of AI across government services to improve lives, and to make the country a fertile ecosystem for AI and its applications.

However, it’s going to be essential that cybersecurity is a key consideration in this strategy, both in terms of securing AI solutions and also ensuring that cybersecurity providers harness the power of AI to support and improve their technologies.

But is AI a silver bullet for cybersecurity professionals seeking support with protecting their organisations?

AI use cases growing

AI should be an essential component of any organisation’s cybersecurity strategy. But it’s not an answer to every cybersecurity challenge - at least not yet. The same efficiency and automation gains that organisations can get from AI are available to threat actors too. AI is a double-edged sword that can aid organisations and the criminals attempting to breach their defences.

Used well, however, AI is a gamechanger for cybersecurity. With the correct support from security teams, AI tools can be trained to help identify sophisticated phishing and social engineering attacks, and defend against the emerging threat of deepfake technology.

In recent times, AI has made significant advances in analysing video and audio to identify irregularities more quickly than humans are able to. For example, AI could help combat the rise in deepfake threats by quickly comparing a video or audio message against existing known original footage to detect whether the message was generated by combining and manipulating a number of spliced-together clips.

AI may be susceptible to subversion by attackers, a drawback of the technology that security professionals must remain vigilant about. Since AI systems are designed to automatically 'learn' and adapt to changes in an organisation's threat landscape, attackers may employ novel tactics to manipulate the algorithm, undermining its ability to help protect against attack.

A shield against threat actors’ tracking

A standout use of AI is its ability to shield users against location and activity tracking. Trackers are usually adopted by marketers to refine how they target their customers. But unfortunately threat actors also use them for nefarious purposes.

They employ trackers that are embedded in emails or other software and reveal the user’s IP address, location, and engagement levels with email content, as well as the device’s operating system and the version of the browser they are using.
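To make the mechanism concrete, the sketch below shows how a typical email tracking pixel works and what a tracker's endpoint can learn the instant the message is opened. The URL, token, and field names are illustrative assumptions, not any specific vendor's implementation.

```python
# Illustrative sketch of an email tracking pixel and the data it leaks.
# The domain, token, and field names are hypothetical.

from datetime import datetime, timezone

# A tracker is typically an invisible 1x1 image with a per-recipient token:
PIXEL_HTML = '<img src="https://track.example.com/p.gif?id=RECIPIENT_TOKEN" width="1" height="1">'

def record_open(recipient_token: str, client_ip: str, user_agent: str) -> dict:
    """What the tracking endpoint learns the moment the image is fetched."""
    return {
        "recipient": recipient_token,   # confirms which target opened the mail
        "ip": client_ip,                # approximates the reader's location
        "user_agent": user_agent,       # reveals OS and browser version
        "opened_at": datetime.now(timezone.utc).isoformat(),
    }

event = record_open("abc123", "203.0.113.7", "Mozilla/5.0 (Windows NT 10.0)")
```

Because the image URL is unique per recipient, a single open event is enough to tie an IP address, device fingerprint, and timestamp to a named individual.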

By combining this data with user data gained from data breaches - for example a data breach at a credit union or government department where personal information about the user was leaked - threat actors can develop hugely convincing attacks that could trick even the most cyber aware users.

Tools such as Mimecast’s CyberGraph can protect users by limiting threat actors’ intelligence gathering. The tool replaces trackers with proxies that shield a user’s location and engagement levels. This keeps attackers from understanding whether they are targeting the correct user, and limits their ability to gather essential information that is later used in complex social engineering attacks.
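A minimal sketch of the proxying idea described above: remote image URLs in an incoming email are rewritten so that a proxy server, not the reader's device, fetches them. The proxy domain and the rewrite rule here are assumptions for illustration, not Mimecast's actual implementation.

```python
# Hedged sketch of tracker-neutralising URL rewriting: the tracker only
# ever sees the proxy's IP address, never the recipient's.
# "proxy.example.com" and the rewrite logic are illustrative assumptions.

import re
from urllib.parse import quote

PROXY = "https://proxy.example.com/fetch?url="

def rewrite_trackers(html: str) -> str:
    """Replace remote image URLs so the proxy fetches them on the reader's behalf."""
    def _proxy(match: re.Match) -> str:
        original = match.group(1)
        return f'src="{PROXY}{quote(original, safe="")}"'
    return re.sub(r'src="(https?://[^"]+)"', _proxy, html)

mail = '<img src="https://track.example.com/p.gif?id=abc123" width="1" height="1">'
safe = rewrite_trackers(mail)
```

After rewriting, the open event still fires, but the IP address, location, and device details the tracker records belong to the proxy, starving the attacker of reconnaissance data.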

For example, a criminal may want to break through a financial institution's cyber defences. They send an initial email with no real content to an employee, simply to confirm that they are targeting the correct person and to establish their location. The user thinks little of it and deletes the email. However, if that person is travelling for work, for example, the cybercriminal would see their destination and could adapt the attack by mentioning the location to create an impression of authenticity.

Similar attacks could target hybrid workers, since many employees these days spend a lot of time away from the office. If a criminal can glean information from the trackers they deploy, they could develop highly convincing social engineering attacks that could trick employees into unsafe actions. AI tools provide much-needed defence against this form of exploitation.

Security awareness needs to remain a priority

Despite AI’s power and potential, it is still vitally important that every employee within the organisation is trained to identify and avoid potential cyber risks.

Nine out of every ten successful breaches involve some form of human error. In the latest State of Email Security 2022 report, nearly nine in ten (88%) respondents from the UAE - and 72% of respondents from Saudi Arabia - believe their company is at risk from inadvertent data leaks by careless or negligent employees.

AI solutions can guide users by warning them of email addresses that could potentially be suspicious, based on factors like whether anyone in the organisation has ever engaged with the sender or if the domain is newly created. This helps employees make an informed decision on whether to act on an email.
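The kind of sender-reputation heuristics described above can be sketched as follows. The function, thresholds, and warning wording are illustrative assumptions, not any product's actual scoring logic.

```python
# Minimal sketch of sender-risk heuristics: has anyone in the organisation
# engaged with this sender before, and is the domain newly registered?
# All names and thresholds here are hypothetical.

from datetime import date

def sender_risk(sender_domain: str,
                known_contacts: set[str],
                domain_registered: date,
                today: date) -> list[str]:
    """Return human-readable warnings to show alongside a suspicious email."""
    warnings = []
    if sender_domain not in known_contacts:
        warnings.append("No one in your organisation has engaged with this sender before.")
    if (today - domain_registered).days < 30:
        warnings.append("The sender's domain was registered very recently.")
    return warnings

flags = sender_risk("payr0ll-example.com", {"partner.com"},
                    date(2022, 5, 1), date(2022, 5, 10))
```

Surfacing warnings like these leaves the final decision with the employee, which is why the training discussed below remains essential.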

But because AI relies on data and is not completely foolproof, regular, effective cyber awareness training is still needed. Such training empowers employees with knowledge of common attack types, helping them identify potential threats, avoid risky behaviour and report suspicious messages so that other end-users don't fall victim to similar attacks.

Encouragingly, 44% of companies in Saudi Arabia, and 46% of companies in the UAE, provide ongoing cyber awareness training, well ahead of a global average of 23%.

To ensure AI - and every other cybersecurity tool - delivers on its promise to increase the organisation’s cyber resilience, companies should continue to prioritise regular and ongoing cyber awareness training and ensure that people remain part of the solution.
