IMPROVE CYBER SECURITY WITH DEEP LEARNING AND SEMANTIC ANALYSIS

text: Pertti Jalasvirta, Partner, Cyberwatch Finland

The challenges in the cyber domain, in terms of security threats and their management, increase day by day. The cyber domain is fraught with state-backed and non-state actors who seek to exploit vulnerabilities and loopholes in systems. It becomes increasingly important to understand and tackle current threat scenarios so that we can prepare for the future. The way forward, as the industry sees it today, is to harness the terabytes of data and digital information generated every second across the cyberworld, and ultimately to prepare automated defence strategies against bad actors. Collecting and understanding such a large amount of data manually is impossible. However, artificial intelligence-based technologies are evolving rapidly and are becoming an important tool for data processing and analysis.

Over the last decade or so, the development of artificial intelligence, machine learning, and data science - terms that are used almost synonymously - has caused a paradigm shift in the tech industry, and their success has depended on how well they have been able to take advantage of digital information using increasingly powerful computers.

Among the many different techniques that have become a part of the ‘AI toolbox’, deep learning and Natural Language Processing (NLP) appear to be two major success stories. For instance, the use of deep learning and NLP is evident in the efficient detection of anomalies in HTTP requests and responses. This application demonstrates how the defence of communication networks can be strengthened against malicious traffic. A similar and practical example comes from Domenic Puzio, who during his career at Capital One developed a system which scanned regular corporate traffic in real time and sent out alarm signals that could be analysed by cyber security analysts.
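The idea behind such anomaly detection can be illustrated with a minimal sketch: learn a statistical model of what benign request strings look like, then flag requests that the model finds improbable. The character-bigram scorer and the sample traffic below are illustrative assumptions, not the systems described above, which use far richer deep learning models.

```python
from collections import Counter
import math

def bigrams(s):
    """Split a string into overlapping character bigrams."""
    return [s[i:i + 2] for i in range(len(s) - 1)]

def train(requests):
    """Estimate bigram log-probabilities from benign request lines."""
    counts = Counter(bg for r in requests for bg in bigrams(r))
    total = sum(counts.values())
    return {bg: math.log(c / total) for bg, c in counts.items()}

def anomaly_score(model, request, floor=-15.0):
    """Mean negative log-likelihood; unseen bigrams pay a fixed penalty."""
    bgs = bigrams(request)
    return -sum(model.get(bg, floor) for bg in bgs) / max(len(bgs), 1)

# Hypothetical benign traffic and one injection-style request.
benign = ["GET /index.html HTTP/1.1", "GET /img/logo.png HTTP/1.1",
          "POST /login HTTP/1.1", "GET /css/site.css HTTP/1.1"]
model = train(benign)
normal = anomaly_score(model, "GET /img/photo.png HTTP/1.1")
odd = anomaly_score(model, "GET /a?id=1' OR '1'='1;-- HTTP/1.1")
```

The injection-style request contains many character pairs never seen in benign traffic, so it scores markedly higher; a production system would replace the bigram model with a trained neural network but keep the same score-and-alert structure.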

The use of AI in the analysis of cyber threats in smaller capacities has been present for more than a decade.

Data from Internet traffic and security logs are usually multidimensional, and for such scenarios dimensionality reduction and clustering have long been part of the more traditional AI methods. A technique called Principal Component Analysis (PCA) is used to reduce the effective number of dimensions or facets in the data and provide a parsimonious description. It can be used to efficiently create records of a large number of security incidents and subsequently identify latent or dormant threats. The resulting analysis is able to group or categorize similar patterns of attacks. This automatic categorization becomes useful in risk management, where fast identification of security lapses and threat actors is possible. Interestingly, methods such as PCA can be used alongside sophisticated methods like deep learning, given the high complexity of the data available in practice and the wide variety of input data. However, deep learning algorithms are, by design, expected to work on data that comes with volume, variety and complexity; in principle, they can analyse all the available data and explore the raw data files without being explicitly asked to consider hand-engineered features. In cyber security applications, deep learning models are able to combine data from a variety of input sources.
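The PCA step described above can be sketched in a few lines of NumPy: center the feature matrix, compute its singular value decomposition, and keep only the leading directions of variance. The log features below (200 events with 10 numeric fields) are synthetic assumptions for illustration.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                 # center each feature column
    # SVD of the centered data: principal axes are the rows of Vt.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                    # coordinates in the reduced space

# Hypothetical security-log features: 200 events x 10 numeric fields
# (e.g. bytes sent, session duration), mostly driven by 2 latent factors.
rng = np.random.default_rng(0)
factors = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 10))
X = factors @ loadings + 0.05 * rng.normal(size=(200, 10))
Z = pca_reduce(X, 2)                        # 10 dimensions -> 2
```

Because the synthetic data is driven by two latent factors, two principal components retain almost all of its structure; a clustering algorithm run on `Z` would then group similar attack patterns, as the paragraph above describes.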

UNDERSTANDING THE OVERALL THREAT SCENARIO

Apart from the issues related to the classification of threats and detection of anomalous traffic, questions arise relating to the understanding of the overall cyber threat scenario, how it evolves in time, and how it is fuelled by different state-sponsored actors. These questions are far from easy to answer and require incorporating complex information on world politics, technological trends, and economic and military interests and conflicts. This brings in the need for the processing of textual data, which is precisely what NLP-based methods tackle. The challenge and utility of NLP lie in converting textual data into structured numerical data that can be fed to algorithms such as deep learning. Approaches in NLP vary between processing the short- and long-range semantics of written texts, such as news pieces or social media posts. A culmination of the above methods can be found in artificial threat intelligence systems in the form of ‘Knowledge Graphs’, where textual data from the cyber domain is converted into a web of entities and their interrelationships. The latter then powers a deep learning algorithm to make predictions about vulnerabilities and attacks.
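At its core, such a knowledge graph is a collection of (subject, relation, object) triples extracted from text. The following minimal sketch shows the data structure only; the entity names and relations are invented examples of what an NLP pipeline might emit from threat reports, and a real system would feed this graph into a learning algorithm rather than query it directly.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Store (subject, relation, object) triples extracted from text."""

    def __init__(self):
        self.edges = defaultdict(list)   # subject -> [(relation, object)]

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def neighbors(self, subject, relation=None):
        """Entities linked to `subject`, optionally filtered by relation."""
        return [o for r, o in self.edges[subject]
                if relation is None or r == relation]

# Hypothetical triples, as an NLP pipeline might extract from reports.
kg = KnowledgeGraph()
kg.add("APT-X", "uses", "CVE-2023-0001")
kg.add("APT-X", "targets", "energy sector")
kg.add("CVE-2023-0001", "affects", "WebServer 2.4")
```

Traversing the graph - for example, from a threat actor to the vulnerabilities it uses and on to the products they affect - is what lets a downstream model reason about likely targets and attacks.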

Given the many different techniques available to us, it is ultimately the combination of AI, digital data sources, and human analysts, acting together, that extracts valuable knowledge, builds predictive models of the future, and designs defence strategies. While there may be no real silver bullet, ideally there are tools and models that are robust across diverse domains and whose predictions can be interpreted through human wisdom.

As the amount of data, threats, technology solutions, and alternative tools grow, we should be able to make the right choices.

If we now focus on solving current problems in light of today’s knowledge and skills, would we be able to solve future problems without fear that something was missing?

Pertti Jalasvirta, Partner, Cyberwatch Finland

Pertti is a founding member and partner of Cyberwatch Oy. He is a true multidisciplinary expert. His thirty years of international work history is a testament to solid expertise in public administration, government work, safety, training, and medical technology. In addition, Pertti has solid experience in the CBRN Security Unit and in designing and implementing multinational security exercises and training.