IN THE ERA OF LLMS
With the focus on LLMs (Large Language Models) likely to escalate in the foreseeable future, enterprises need to invest in and harness the significant potential that LLMs can bring to their businesses in terms of innovation and efficiency. At the same time, considerations such as cost versus benefits and ethical implications will also need attention.
Enterprises can gain a lot by investing in multimodal LLMs in areas ranging from customer service interactions and sentiment analysis to content generation and many more. Enterprises could choose to develop a customized LLM in-house, but that is a significant investment in terms of time, money, and resources. The capabilities of existing, pre-trained LLMs are applicable to many use cases, and these models can then be fine-tuned with training on relevant data suited to the industry and the company. Selecting the right foundational LLM for your company needs a lot of consideration, based on your requirements and also the capabilities of the LLM in terms of the data it has been trained on. Security is also an important consideration with open-source LLMs that you may opt for. When investing in an LLM, cost, speed, and accuracy could be key considerations on the road ahead. As the requirements of the organization evolve along the way and new data becomes available for further training, further fine-tuning will be required.
There is some apprehension about what is referred to as the emergent capabilities of these LLMs. These capabilities refer to sudden and unpredictable jumps in what LLMs can exhibit as they scale up. This raises concerns about potentially risky capabilities that LLMs could develop over time with enhanced computing resources, new training data, and so on. There is an ongoing focus on developing metrics that can predict such jumps in capabilities.
RAMAN NARAYAN
Co-Founder & Editor in Chief narayan@leapmediallc.com Mob: +971-55-7802403
Sunil Kumar Designer
R. Narayan Editor in Chief, CXO DX
SAUMYADEEP HALDER
Co-Founder & MD saumyadeep@leapmediallc.com Mob: +971-54-4458401
Nihal Shetty Webmaster
MALLIKA REGO
Co-Founder & Director Client Solutions mallika@leapmediallc.com Mob: +971-50-2489676
» FORTIFYING MOBILE APPLICATIONS
Subho Halder, Co-founder and CTO of Appknox discusses the transforming landscape of mobile application security
» BREACHING NEW AI FRONTIERS
Ramprakash Ramamoorthy, Director of AI Research at ManageEngine discusses ManageEngine’s focus on enabling digital maturity and its work on developing LLMs
» UNLOCKING VALUE
Vijay Jaswal, Chief Technology Officer of APJ&MEA, IFS discusses the company’s commitment to helping organizations resolve their productivity, predictability and agility issues with their solutions
» THE PRIMACY OF DATA BACKUP
Data backup is paramount and serves as the cornerstone of data protection and business continuity
» GITEX AFRICA 2024 EDITION BRINGS AI OPPORTUNITIES INTO FOCUS
AI Everything Expo by GITEX AFRICA 2024 to highlight how the continent’s AI opportunities, across finance and agriculture to healthcare and mobility, are fuelling a booming AI market
» STEADY EXPANSION
Aigerim Baktybekkyzy, Channel Manager at May Cyber Technology discusses the company’s focus on international expansion and developing next-generation SIEM and UEBA solutions
» SECURING IDENTITIES
Tarun Srivastava, Technical Account Manager - India & South East Asia, Nexus discusses the range of PKI solutions from Nexus
» DELIVERING AI-DRIVEN INSIGHTS
Mohammed Al-Moneer, Senior Regional Director, META at Infoblox shares his views on SOC Insights, a new launch from Infoblox
» TOWARDS EFFECTIVE THREAT MITIGATION
Jonathan Trull, CISO at Qualys discusses the evolving threat landscape and approaches to cybersecurity
» AUTOMATION – THE KEY TO DATA PROTECTION IN THE CLOUD
Maher Jadallah, Senior Director Middle East & North Africa at Tenable writes that automation holds the key to data protection in cloud environments
» HOW GENERATIVE AI ACCELERATES DIGITAL TRANSFORMATION
Lori MacVittie, F5 Distinguished Engineer says the catalytic nature of generative AI generates significant impact, usually when it accelerates an existing trend
CISCO UNVEILS CYBERSECURITY READINESS INDEX
Study Reveals 91% of UAE Companies Use AI Technologies in their Cybersecurity Strategies
Fady Younes, Managing Director, Cybersecurity, MEA, Cisco
The recent study conducted by Cisco highlights a significant surge in the use of AI technologies in cybersecurity strategies among UAE organizations. The study indicates that 91% of companies surveyed are integrating AI in their security defenses, mainly in threat detection, response, and recovery.
The 2024 Cisco Cybersecurity Readiness Index was developed in an era defined by hyperconnectivity and a rapidly evolving threat landscape. Despite continuing to be targeted with a variety of techniques that range from phishing and ransomware to supply chain and social engineering attacks, companies today are actively attempting to fortify their defenses. And while they are building defenses against these attacks, the complexity of their security postures, dominated by multiple point solutions, presents a challenge in effectively thwarting threats.
These challenges are compounded in today’s distributed working environments where data can be spread across limitless services, devices, applications, and
users. However, 87% of companies still feel moderately to very confident in their ability to defend against a cyberattack with their current infrastructure. The Index assesses the readiness of companies on five key pillars: Identity Intelligence, Network Resilience, Machine Trustworthiness, Cloud Reinforcement, and AI Fortification, which together comprise 31 corresponding solutions and capabilities.
"As our digital landscape continues to evolve rapidly, the importance of proactive cybersecurity measures cannot be overstated," said Fady Younes, Managing Director for Cybersecurity at Cisco in the Middle East and Africa. "It is essential that organizations prioritize cybersecurity investments and embrace innovative solutions to effectively mitigate risks. By fostering a culture of cyber resilience, UAE organizations can navigate the digital landscape with confidence, while safeguarding their operations against emerging threats."
BARRACUDA REPORT CLAIMS SIX IN 10 BUSINESSES STRUGGLE TO MANAGE CYBER RISK
Just 43% of organisations surveyed have confidence in their ability to address cyber risk, vulnerabilities, and attacks
Barracuda Networks has published the CIO report: Leading your business through cyber risk, which explores the top governance challenges facing companies trying to manage cyber risk and boost their cyber resilience. The report offers practical tools such as a checklist template, created with Barracuda’s own IT and security leadership.
Leveraging data from the international Cybernomics 101 study, the report assesses how challenges relating to security policies, management support, third-party access, and supply chains can undermine a company’s ability to withstand and respond to cyberattacks. Among other things, the findings show that many organisations find it hard to implement company-wide security policies such as authentication measures and access controls. Half (49%) of the smaller to
mid-sized companies surveyed listed this as one of their top two governance challenges. Further, just over a third (35%) of the smaller companies worry that senior management doesn’t see cyberattacks as a significant risk, while the larger companies are most likely to struggle with a lack of budget (38%) and skilled professionals (35%).
“For many businesses today, a security incident of some kind is almost inevitable,” said Siroui Mushegian, CIO of Barracuda Networks. “What matters is how you prepare for, withstand, respond to, and recover from the incident. This is cyber resilience. Advanced, defense-in-depth security solutions will take you most of the way there, but success also depends on security governance — the policies and programs, leadership, and more that enable you to manage risk.”
NETAPP PARTNERS WITH GOOGLE CLOUD
Expanded partnership makes the cloud more secure and simple for key initiatives such as generative AI
NetApp announced an expansion of its partnership with Google Cloud to make it easier for organizations to leverage their data for generative AI (GenAI) and other hybrid cloud workloads. The companies are announcing the Flex service level for Google Cloud NetApp Volumes, which supports storage volumes of nearly any size and gives customers more granular control to adapt their storage and performance to match the exact needs of their cloud workloads. NetApp is also releasing a preview of its GenAI toolkit reference architecture for retrieval-augmented generation (RAG) operations using the Google Cloud Vertex AI platform.
“Increasing demand for data-intensive applications and insights has reinforced the need for a new approach to unified data storage that gives organizations the agility to move and store data wherever it is needed at any point in time,” said Pravjit Tiwana, Senior Vice President and General Manager, Cloud Storage at NetApp. “By extending our collaboration with Google Cloud, we’re delivering a flexible form factor that can be run on existing infrastructure across the Google Cloud system without any tradeoffs to enterprise data management capabilities.”
With the addition of Flex, NetApp Volumes customers can choose from four service levels to leverage a fully managed file service built on NetApp ONTAP™ and operated by Google Cloud. The Flex service level will be generally available by Q2 2024 across 15 Google Cloud regions, expanding to the other Google Cloud regions by the end of 2024.
NetApp is also releasing a preview of its GenAI toolkit with support for NetApp Volumes. This offering, along with the accompanying reference architecture,
speeds the implementation of RAG operations while enabling secure, consistent, and automated workflows that connect data stored in NetApp Volumes with the Google Cloud Vertex AI platform.
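In outline, a RAG workflow retrieves the stored documents most relevant to a query and passes them to the model as context. The sketch below illustrates that flow in miniature; the in-memory corpus, the toy word-count ranking, and the prompt format are illustrative stand-ins, not the NetApp toolkit or the Vertex AI API.

```python
# Illustrative retrieval-augmented generation (RAG) flow. The corpus stands in
# for documents on a storage volume; the final prompt is what would be sent to
# a hosted LLM (the model call itself is omitted).
from collections import Counter
from math import sqrt

CORPUS = {
    "flex.txt": "Flex is a NetApp Volumes service level for volumes of nearly any size.",
    "rag.txt": "RAG retrieves relevant documents and passes them to the model as context.",
}

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts -- a toy ranking function."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa)
    norm = sqrt(sum(v * v for v in wa.values())) * sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank stored documents against the query and return the top k."""
    ranked = sorted(CORPUS.values(), key=lambda doc: similarity(query, doc), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Build the augmented prompt an LLM would receive."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(answer("What sizes does the Flex service level support?"))
```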
AI, SECURITY, AND SUSTAINABILITY DRIVING IT MODERNIZATION IN HEALTHCARE
Healthcare organizations now outpace other industries in adoption of multiple IT models
Nutanix, a leader in hybrid multicloud computing, announced the findings of its sixth annual global Healthcare Enterprise Cloud Index (ECI) survey and research report, which measures enterprise progress with cloud adoption in the industry. The research showed that hybrid multicloud adoption is surging among healthcare organizations as the majority are significantly increasing investments in IT modernization.
This year’s Healthcare ECI report revealed that the use of hybrid multicloud models in healthcare is forecasted to double over the next one to three years. IT decision-makers at healthcare organizations are facing new pressures to modernize IT infrastructures to effectively harness the power of AI, mitigate security risks, and be more sustainable.
Healthcare organizations handle large amounts of personal health information (PHI) that can be complex to manage while remaining compliant with regulations like the Health Insurance Portability and Accountability Act (HIPAA). As organizations in all industries continue to grapple with the complexities of moving applications and data across environments, hybrid multicloud solutions provide key benefits to healthcare organizations, helping them simplify operations, deliver better patient outcomes, and improve clinician productivity. The Healthcare ECI report found that adoption of the hybrid multicloud operating model in healthcare organizations has increased by 10 percentage points compared to last year, jumping from 6% to 16%.
GBM GETS CREST & DESC CYBERSECURITY ACCREDITATIONS
The recognitions endorse GBM as a leading local partner in delivering cutting-edge cybersecurity solutions
Gulf Business Machines (GBM), a leading end-to-end digital solutions provider, has been awarded two new accreditations from CREST and DESC (Dubai Electronic Security Center) in Penetration Testing and Incident Response.
GBM's achievement of these certifications showcases its leadership and expertise in providing state-of-the-art cybersecurity solutions, reflecting the company’s commitment to maintaining local roots while adhering to global standards. In an era marked by escalating sophisticated cyber threats, public and private sector institutions across the region are increasingly turning to accredited cybersecurity solutions providers who meet the stringent standards set by CREST and DESC.
CREST is an international not-for-profit membership body dedicated to creating
a secure digital world by establishing capability, capacity, consistency and collaboration within the global cybersecurity industry. Formed in 2014, the Dubai Electronic Security Center (DESC) is a regulatory authority overseeing the cybersecurity framework and promoting electronic security initiatives in Dubai.
Ossama El Samadoni, General Manager at GBM Dubai commented, “In today's rapidly evolving cyber landscape, our internationally recognized accreditations serve as more than just validations of our expertise. They stand as pillars reinforcing our customers' cybersecurity defenses, granting them unparalleled peace of mind. By adhering to these rigorous standards, we assure our clients that their digital assets are safeguarded with the utmost care. Our dedication to providing top-notch cybersecurity solutions
DELINEA ANNOUNCES AVAILABILITY OF NEW DUBAI DATA CENTER BUILT ON MICROSOFT AZURE
The new Azure Data Center will contribute towards the growth of the PAM market
Delinea, a leading provider of solutions that seamlessly extend Privileged Access Management (PAM), announced its expansion in Dubai with a new Azure Data Center. The new facility offers customers in the Middle East increased deployment options to better serve their compliance needs with the company's enhanced cloud infrastructure built on Azure.
"We are on a mission to revolutionize identity security and privileged access, and we want all our customers to reap the benefits," said Mohammad Ismail, Vice President, MEA at Delinea. "The new Azure Data Center gives customers in the Middle East. the peace of mind that their cybersecurity investments with Delinea are future-proofed, thanks to a cloud-native infrastructure, hundreds of integrations and a contractually guaranteed 99.99% uptime. Delinea continues to drive local investment to support regional initiatives like Saudi Vision 2030 and the UAE’s ‘We the UAE 2031’ vision."
At GISEC 2024, along with unveiling the availability of Delinea’s new Azure Data Center, the company also presented the latest innovations introduced on the Delinea Platform, including Privilege Control for Server, which will help local partners and customers better manage privileged access to Windows, Linux, and Unix servers, limit standing privileges and shared credential sprawl, and offer greater control over user access. It also showcased the recent acquisitions and integration of the Authomize and Fastpath capabilities.
AMIVIZ JOINS FORCES WITH ABSTRACT SECURITY TO REVOLUTIONIZE CYBERSECURITY ANALYTICS
The partnership will help enterprises with proactive threat management solutions beyond traditional SIEM systems
AmiViz, the first B2B enterprise marketplace for the cybersecurity industry in the Middle East, has forged a partnership with Abstract Security, a cyber threat operations platform offering a revolutionary approach to security analytics that allows organisations to improve efficiency, reduce SIEM-related storage costs, and enhance detection and response capabilities across multi-cloud and on-premise environments.
The Abstract platform disrupts traditional cybersecurity analytics with its innovative approach, challenging the limitations of conventional Security Analytics systems. Abstract Security offers a transformative cyber threat operations platform in an era marked by compliance-induced data swamps and redundant data storage.
Engineered to streamline security analytics, it enhances detection and response capabilities across diverse IT environments,
including multi-cloud and on-premise setups. By integrating tactical artificial intelligence (AI), Abstract empowers security analysts to decode complex cloud security data, improving detection strategies and filling visibility gaps. Pioneering initiatives like the decentralized edge computing platform and a one-click data lake further solidify Abstract Security's position as a visionary player in cybersecurity.
The strategic expansion into Middle Eastern markets aligns with the region's growing demand for advanced cybersecurity measures.
Ilyas Mohammed, COO at AmiViz, said, “Our partnership with Abstract Security heralds a new era in cybersecurity analytics. By leveraging their innovative solutions, we empower our clients with proactive threat management capabilities that surpass traditional systems. Together, we redefine industry standards, ensuring
robust protection against evolving cyber threats and bolstering our position as leaders in the cybersecurity landscape.”
MANAGEENGINE SIMPLIFIES CLOUD COST MANAGEMENT
Reduces redundant cloud resources and mitigates the risk of costly cloud sprawl
ManageEngine, a division of Zoho Corp. and leading provider of enterprise IT management solutions, announced that CloudSpend, its cloud cost management tool, has extended its support to Google Cloud Platform (GCP). By expanding its cost optimization services to GCP, following AWS and Microsoft Azure, CloudSpend now covers the three largest public cloud computing platforms worldwide by market share, enabling enterprises with a multi-cloud setup to streamline their operational expenses.
Furthermore, CloudSpend has been enhanced with a cost anomaly detection capability, which helps monitor and manage cost blind spots across multiple cloud service providers. The company will showcase CloudSpend's GCP features at Google Next 2024 on April 9 at booth #1660.
Srinivasa Raghavan, Director of Product Management, ManageEngine
Organizations are keen on embracing not just one but multiple cloud platforms to diversify their business operations, split workloads, ensure cost-cutting and eventually enjoy success. Consequently, investments in public cloud adoption are expected to reach $679 billion in 2024, necessitating an efficient cloud cost observability and optimization platform to keep cloud finances in check.
"ManageEngine estimates that organizations operating in a multi-cloud environment often under-utilize their cloud resources, with their cloud usage topping at 55%. At ManageEngine, we believe that CloudSpend will help businesses rise above such challenges and advance in their cloud maturity journey by offering the visibility to stay on top of their investments. This multi-cloud cost intelligence platform follows FinOps best practices and is boosted with smart forecasting features that help reduce operational expenditure. It also bridges the gap between capacity planning and cost optimization for resources running in multi-cloud setups," said Srinivasa Raghavan, director of product management at ManageEngine.
CLOUDFLARE ENTERS OBSERVABILITY MARKET WITH BASELIME ACQUISITION
The acquisition of Baselime will enhance observability tools within Cloudflare's developer platform to keep pace with the evolving needs of modern web applications
Matthew Prince, Co-Founder & CEO, Cloudflare
Cloudflare announced its entrance into the observability market with the acquisition of Baselime, the cloud-native observability platform. By integrating Baselime’s technology with Cloudflare’s developer platform, Cloudflare will be uniquely positioned to bring deep knowledge of serverless platforms and developer experience together to solve the challenges of observability for serverless apps.
Today, entire applications are built on serverless architectures, from compute to databases, storage, queues, and more. Still, observability is often regarded as one of the weaknesses of serverless architectures — trading off visibility into the application’s behavior for scalable infrastructure that doesn’t need to be managed. Building and debugging production applications requires the ability to understand trends and patterns, identify performance bottlenecks, and isolate errors to ensure ongoing reliability, scalability, and security. Having access to this level of visibility, preferably all within one platform, is a critical factor developers consider when choosing a platform on which to build.
“Two million developers building on Cloudflare trust us to help scale their apps globally, but can still struggle to understand the behavior of their cloud applications,” said Matthew Prince, co-founder and CEO, Cloudflare. “We believe that to be the leading developer platform, having the best observability tools built in is going to be table-stakes. Baselime has raised the standard for serverless observability and we can further unlock those insights for every developer building on our platform.”
With this acquisition, Baselime will be integrated into Cloudflare’s developer platform, helping developers push the boundaries of modern observability.
SOLARWINDS CELEBRATES TWENTY-FIVE YEARS OF EXCELLENCE IN IT MANAGEMENT AND INNOVATION
New SolarWinds survey provides insight from IT pros on the most transformative technologies and the biggest challenges that lie ahead
SolarWinds is commemorating a quarter-century of producing cutting-edge solutions that deliver value and reduce complexity for enterprises, wherever they are on their digital transformation journeys. Founded in 1999 with a vision to simplify IT management, SolarWinds solutions continue to provide organizations worldwide—regardless of type, size, or complexity—with tools to help IT teams take a proactive approach toward growth, productivity, and innovation. Today, the SolarWinds Platform unifies observability, database, and service management solutions to help enterprises optimize performance, ensure reliability, and enhance security within hybrid and multi-cloud environments.
"The past 25 years have witnessed an extraordinary evolution in the needs of IT professionals, and we are immensely
Sudhakar Ramakrishna President & CEO, SolarWindsproud to reach this incredible milestone in our history," said Sudhakar Ramakrishna, president and CEO of SolarWinds. “We started in 1999 with a simple mission to make the lives of IT teams easier. Today, SolarWinds proudly serves over 300,000 customers around the globe, and our solutions help accelerate our customers’ business transformations.”
To mark the seismic shifts in the enterprise technology landscape over the past quarter-century, SolarWinds released the SolarWinds IT Evolution Survey. The report includes insights from a diverse group of technology professionals—including more than 37% who have 25 or more years of experience in IT—about the most impactful industry advancements since 1999 and the most formidable challenges tech pros will face in the future.
D-LINK CELEBRATES PARTNERS AT REGIONAL DISTRIBUTOR MEET 2024
The vendor shared its roadmap with partners and presented awards to top performing partners
D-Link Middle East & Africa recently organized its Distributor Meet 2024 in Bali, Indonesia, bringing together the crème de la crème of its distributors from across the region. The event was aimed at promoting networking, sharing information, discussing market trends, and providing valuable feedback.
D-Link leaders gave insightful presentations and shared the roadmap for the future while also recognizing its distributor partners for commitment, loyalty, and enhanced business growth.
The event was held at The Anvaya Beach Resort Bali, which is known for its luxurious accommodations, stunning ocean views, and top-notch amenities. D-Link’s top distributors were treated to a welcome dinner at Seasand Restaurant upon their arrival, which set the tone for the rest of the event. The highlight of the event was the Excellence & Recognition awards ceremony, which celebrated the outstanding performance of the distributors, and recognized their remarkable contributions and collaborations with D-Link for over 30 years.
Sakkeer Hussain, Director of Sales and Marketing, D-Link Middle East & Africa, said, “We are thrilled with the success of our Distributor Meet 2024 in Bali, Indonesia. It was a wonderful opportunity to bring together our distributors from across the region and celebrate their achievements. We are grateful for their continued support, dedication, and commitment to our shared goals.
“The power of collaborations was evident throughout the event, and we are excited to see the positive impact it will have on our business in the future. As we move forward, we remain committed to building strong partnerships and driving mutual success.”
DATAIKU DELIVERED 413% ROI IN 2024 TOTAL ECONOMIC IMPACT STUDY
Through Dataiku's Everyday AI platform, organizations increased user productivity, reduced costs, improved decision making, and quickly drove innovation with Generative AI use cases
Claire Gubian, Global VP, Business Transformation, Dataiku
The Forrester Total Economic Impact (TEI) study commissioned by Dataiku found that a composite organization achieved a 413% ROI on the initial investment over three years (a $23.50M net present value) and considerably increased user productivity and decision-making.
To construct the study, Forrester interviewed decision makers at four Dataiku customers across as many industries to obtain data about costs, benefits, and risks. These decision makers were in need of a shared platform to empower business and data users.
“We believe this study confirms what we hear from our customers every day — Dataiku is bringing concrete value as the premier solution for organizations looking to optimize the power of modern analytics and AI,” said Claire Gubian, Global VP of Business Transformation, Dataiku. “We
were especially pleased to see interviewees mention how Dataiku enabled them to be more flexible, allowing them to adapt to the Generative AI wave and start experimenting while still maintaining strong enterprise governance standards. This is an issue that’s top of mind today for organizations across all industries.”
The Forrester study found significant quantitative and qualitative benefits to the composite organization from implementing Dataiku.
More than 600 customers get measurable return on AI (ROAI) using Dataiku for diverse use cases, from predictive maintenance and supply chain optimization to quality control in precision engineering, marketing optimization, Generative AI use cases, and everything in between.
GITEX AFRICA 2024 EDITION BRINGS AI OPPORTUNITIES INTO FOCUS
AI Everything Expo by GITEX AFRICA 2024 to highlight how the continent’s AI opportunities, across finance and agriculture to healthcare and mobility, are fuelling a booming AI market
The AI mania that’s transforming business, government and society globally is also igniting waves of innovation across Africa, with the shape-shifting tech’s existential prospects powering a cross-continental investment surge at the AI Everything Expo by GITEX AFRICA in Morocco.
Africa’s epic AI opportunity is already driving digital advancements in diverse sectors, from finance and agriculture to healthcare and mobility, all fuelling a booming AI market that, according to analysts at Statista, will grow 30 percent annually over the next six years to be worth US$17 billion by 2030.
This massive AI rush combined with a rapidly growing population of 1.5 billion people – of which 70 percent are under the age of 30 – creates a potent recipe of AI acceleration, but highlights gaps in talent development, venture allocation, policy and infrastructure.
These crucial challenges and opportunities will be addressed when the world’s AI cognoscenti and pivotal power players of its widespread deployment unite to fast-track the continent’s next
big tech shift at the AI Everything Expo by GITEX AFRICA, the year’s largest and most progressive platform for AI exploration and deep tech innovation.
Taking place from 29-31 May 2024 in Marrakech, Africa’s powerhouse tech showcase will feature the world’s tech titans spearheading the AI gold rush, including Microsoft, IBM, Huawei, Nvidia, and Google, along with hundreds of AI ambitious startups from across the globe with grand visions to change Africa via AI-infused products and services.
An AI continent ‘brimming with investment opportunity’
Microsoft is leading the way in the AI investment race, having forged partnerships with the world’s hottest makers of AI models, including the UAE’s G42, a global leader in visionary AI.
Microsoft’s recent US$1.5 billion strategic investment in G42 to accelerate AI development in growing economies such as Africa will be welcomed by big tech executives, government leaders, investors and tech entrepreneurs alike at GITEX AFRICA 2024, which will also feature Presight, G42’s big data analytics company powered by generative AI.
Lillian Barnard, President of Microsoft Africa, said AI can unlock a continent “brimming with investment opportunity.” “Africa has long been recognised for its formidable growth prospects and AI is the long-awaited key to help unlock that potential,” said Barnard, who will also be a headline speaker at GITEX AFRICA’s power-packed conference programme.
Dr. Adel Alsharji, the COO of Presight, added that Africa is the second-fastest growing region globally in AI adoption. “Africa’s AI journey is gaining momentum, and this progress highlights the continent’s readiness to explore and harness the potential of AI for driving economic growth and addressing local challenges,” said Alsharji, adding that demand for AI-related jobs will increase two-fold over the next three years. “AI could add US$13 trillion to the global economy by 2030, while the number of AI-related jobs in Africa alone is expected to grow by 200 percent by 2025.”
The AI Everything Expo will gather the brightest minds and most innovative thinkers in the field of AI at the AI Everything Conference, one of 10 powerful conference stages at GITEX AFRICA, the continent’s largest tech and start-up show. AI and its far-reaching multisectoral impact will be evident on the exhibition floor, with exhibitors showcasing how the AI boom is turbocharging waves of innovation across industries.
THE PRIMACY OF DATA BACKUP
Data backup is paramount and serves as the cornerstone of data protection and business continuity.
The stakes are very high as far as data is concerned, whether personal or corporate. In the business world, data is invaluable, and with the AI and analytics technologies available these days, it is indeed a prized asset. Besides being a corporate asset that can be analyzed to drive further business growth, it is also essential that data doesn’t fall into the hands of bad actors such as cybercriminals, who could potentially impact the lives of many and the fortunes of organizations.
This makes a robust data backup strategy mandatory for any organization, both to safeguard against data loss and to ensure a backup is available at any point in time to restore original files in the event of any kind of data disruption. Today, such a strategy encompasses data on-premises and in the cloud, and accounts for scenarios such as work-from-anywhere and remote work arrangements.
Meriam El Ouazzani, Regional Director – META, SentinelOne says, “Traditionally focused on periodic backups to physical media, backup strategies now emphasize continuous data protection, utilizing automated processes and real-time replication to minimize data loss windows in an environment marked by increasing cyber threats, technological advancements, and shifting business requirements. Cloud-based solutions have become prevalent, offering scalability, accessibility, and offsite redundancy.”
According to John Shier, Field CTO Threat Intelligence at Sophos, one of the most consequential ways in which backup has changed is that many more companies are now doing it.
“While large enterprise businesses have relied on backups for a variety of business continuity reasons, smaller businesses have also adopted them as a part of their overall risk mitigation strategy,” he adds.
Rob T. Lee, Chief Curriculum Director and Faculty Lead at SANS Institute says that the strategy of backup has evolved significantly with the advancement of cloud technology and the growing cyber threat landscape.
“Traditional on-premises backup solutions have been supplemented or replaced by cloud-based and hybrid backup solutions, offering greater flexibility, scalability, and reliability. These modern strategies are integral to disaster recovery plans, ensuring that organizations can quickly recover data in the event of a disaster. Training in contemporary cybersecurity practices is vital for integrating these backup solutions effectively into disaster recovery planning.”
3-2-1 approach
The traditional 3-2-1 approach to backup offers a fundamental way to keep data backups safe and to ensure the redundancy of critical data. It calls for keeping three copies of the data on two different types of media, with one copy stored off-site.
“As a starting point, many organizations use the 3-2-1 rule for backups. This rule prescribes that you should keep three copies of your data, on two different media, with one copy being off-site and offline. This should always be followed by continuous testing of backup restoration procedures, not only to ensure that the process is sound, but also that the data is not corrupted in any way,” says John.
He adds, “The mere existence of a reliable backup makes it an integral part of a disaster recovery plan. Whether the disaster is natural or human-made, backups offer a much easier path to recovery than if they aren't available.”
Rob T. Lee says that key best practices include adhering to the 3-2-1 backup rule, regularly testing backup and restore processes, encrypting backup data both in transit and at rest, and utilizing cloud or hybrid cloud solutions for backups.
“Implementing these practices effectively requires proper training and awareness, emphasizing the critical role of cybersecurity education in enhancing backup strategies,” he adds.
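The 3-2-1 rule lends itself to a simple automated audit. The sketch below checks a backup inventory against the rule's three conditions; the inventory format is a hypothetical stand-in, since real tools expose this information through their own catalogs.

```python
# Minimal 3-2-1 audit over a backup inventory: at least three copies, on at
# least two media types, with at least one copy off-site.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str   # e.g. "dc1", "aws-eu-west-1"
    medium: str     # e.g. "disk", "tape", "object-storage"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Check the three conditions of the 3-2-1 rule against an inventory."""
    return (
        len(copies) >= 3                                # three copies of the data
        and len({c.medium for c in copies}) >= 2        # on two different media
        and any(c.offsite for c in copies)              # with one copy off-site
    )

inventory = [
    BackupCopy("dc1", "disk", offsite=False),            # primary data
    BackupCopy("dc1", "tape", offsite=False),            # local backup, second medium
    BackupCopy("aws-eu-west-1", "object-storage", True), # off-site copy
]
print(satisfies_3_2_1(inventory))  # True
```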
Enterprises must look at a combination of strategies, including on-site and off-site backups, to protect against data loss, ensure business continuity, and meet compliance requirements.
“For enterprises, key backup strategies entail regular, automated backups of critical data, including encryption for security. Diversification of storage locations through offsite or cloud backups ensures resilience against on-premises failures or disasters. Regular testing validates backup integrity and recoverability while versioning and retention policies maintain multiple copies and adhere to compliance standards. Monitoring systems track backup status and storage utilization, with alerts set for failures. Documentation of procedures and training ensure efficient recovery efforts, while integration into disaster recovery plans establishes clear roles and objectives. Continuous evaluation and refinement of strategies based on evolving needs and threats optimize resilience. These practices collectively fortify enterprises against data loss and ensure business continuity,” says Meriam.
The necessity of backup against ransomware attacks
The growing sophistication and frequency of ransomware attacks mean that organizations need a robust data backup strategy. Regular backups help ensure that you can restore your data to a point before the attack, minimizing downtime and maintaining business continuity.
Charles Smith, Consulting Solution Architect, Data Protection, Barracuda Networks (EMEA) says, “It is quite important to have an up-to-date, readily accessible copy of everything that matters
to your business. Resilient backups allow you to recover more quickly from data damage, disruption, or loss, particularly if a ransomware attack has resulted in encrypted or deleted files. These are well-known and widely reported benefits of backups — but there’s more. Immutable data backups can also protect you from the underrated threats of data tampering and malicious insiders, unpredictable activities that can significantly damage brand trust and reputation if they’re not addressed.”
Organizations are now aware that ransomware attacks lead to data and reputation loss, which in turn leads to loss of customer trust and business opportunities.
According to John, “The ransomware scourge has made the reliance on backups a pressing matter for many organizations. Knowing that a ransomware attack can occur at any time, without warning, and have devastating business outcomes means that more organizations are investing in them.”
Rob agrees with the view that enterprises increasingly recognize the necessity of investing in backup strategies due to the heightened risk of cyber threats like ransomware.
“While concerns about the cost of backup solutions exist, the potential financial and operational impact of data loss or system outages often justifies the investment. There is a continuous need for awareness and education on making cost-effective backup decisions while ensuring data protection,” he adds.
Balancing the cost-benefit ratio
Cost concerns can be a possible deterrent when it comes to backup, so there is a need to balance these concerns with a careful evaluation of factors like the criticality of the data, regulatory requirements, risk tolerance, and available technologies.
Rob T. Lee, Chief Curriculum Director and Faculty Lead, SANS Institute
“For some businesses, the full cost of a comprehensive backup strategy may outweigh the intended benefits. Thankfully there are many options available and at different price points. The decision of choosing a particular strategy is up to individual businesses, but all prospective buyers need to weigh the effectiveness of their chosen strategy against the cost,” says John.
Meriam says that there is a risk of organizations prioritizing other IT initiatives at the cost of effective backup strategies based on cost considerations and hence there is a need to drive more awareness.
Meriam elaborates, “While many enterprises acknowledge the necessity of investing in backup strategies, concerns about the initial cost remain prevalent. Despite the potential financial implications of data loss, some organizations may prioritize other IT initiatives or overlook the significance of comprehensive backup solutions. Therefore, raising more awareness about the importance of backups is essential. By highlighting the potential risks of data loss incidents and emphasizing the cost-effectiveness of proactive data protection measures, businesses can better understand the value proposition of investing in proper backup strategies. Education initiatives can also showcase the diverse range of backup solutions available, catering to different budgets and requirements, allowing enterprises to make informed decisions about safeguarding their critical data.”
John adds, “There are still organizations that are not using backups as part of their recovery mechanisms. It is also essential that staff understand the importance of backups and the consequences of negligence or mistakes.”
Meeting regulatory demands
Regulatory standards are now more stringently enforced to ensure that organizations meet compliance requirements with their data strategies. These standards mandate that organizations have data backup strategies in place to ensure data integrity, availability, and security. Non-compliance can invite legal and financial penalties, besides the business risk of losing data, reputation, and business.
Rob says, “Regulatory requirements and compliance standards significantly impact data backup strategies, especially in sectors like healthcare and finance that handle sensitive data. Regulations mandate secure, accessible backups to ensure data integrity and availability. Understanding these regulatory frameworks is crucial for developing compliant backup strategies, underscoring the value of specialized training in data protection laws and regulations.”
Data backup mandates can also vary across industries, depending on how critical the data is, as in sectors such as healthcare, finance, or government.
John says, “Some industries mandate specific backup strategies. These vary from one industry to another, and businesses need to understand what the regulatory and legal requirements are in their respective sector.”
By aligning backup strategies with regulatory requirements, organizations in healthcare and finance mitigate risks, protect sensitive data, and uphold trust with stakeholders.
Meriam says, “Regulations like HIPAA (Health Insurance Portability and Accountability Act) and PCI DSS (Payment Card Industry Data Security Standard) mandate stringent measures for protecting sensitive information, including robust data backup and recovery procedures. Healthcare and finance organizations must ensure that their backup strategies adhere to specific retention periods, encryption standards, and access controls outlined by these regulations. Non-compliance can result in severe penalties, reputational damage, and legal consequences. Therefore, these industries prioritize comprehensive backup solutions that not only safeguard data integrity but also demonstrate compliance with regulatory mandates.”
Ensuring backup availability
Even with a data backup strategy in place, it is necessary to ensure regular tests are conducted to verify that backups can be restored successfully. This can be done by simulating different recovery scenarios to ensure readiness for various types of failures.
John says, “Regular testing of backups will ensure that the data you expect to be there in an emergency, is available. Recoverability testing will not only test that the recovery process works, but it will also identify any missing data from the backups. Consistently testing backups, and adjusting as needed, will prove invaluable when they are needed for recovery.”
Meriam El Ouazzani, Regional Director – META, SentinelOne
Everything from automating data backup schedules to monitoring and testing the whole backup process is essential to ensuring your data backup strategy is operationally effective and meets your business objectives.
As Meriam says, “To automate and ensure consistent data backup, organizations should first select suitable backup tools equipped with automation features and scheduling capabilities. Once chosen, these tools should be configured to establish automated backup schedules aligned with business needs, ensuring regular and unobtrusive backups without manual intervention. Implementing monitoring systems to track backup status and performance, alongside configuring alerts for failures or anomalies, is crucial for prompt intervention. Regular testing and validation of backup integrity and recoverability help maintain reliability. Providing comprehensive training to personnel on backup procedures and automation tools ensures proficient execution. Periodic reviews of backup processes, coupled with adjustments to schedules or configurations as needed, ensure alignment with evolving business requirements.”
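A recurring restore test of the kind described above can be scripted in a few lines. In the sketch below, the `backup-tool` CLI, the sample path, and the alerting hook are placeholders to be swapped for your own backup tooling; the checksum comparison is the part that actually verifies recoverability.

```python
# Sketch of an automated restore test: restore a sample file from the latest
# backup and compare its checksum against the recorded original.
import hashlib
import subprocess
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to verify that restored data matches the original."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_test(sample: str, expected_hash: str, restore_dir: Path) -> bool:
    # Placeholder restore command -- substitute your backup tool's CLI here.
    subprocess.run(
        ["backup-tool", "restore", "--latest", sample, "--to", str(restore_dir)],
        check=True,
    )
    restored = restore_dir / Path(sample).name
    return sha256(restored) == expected_hash

# Run on a schedule (e.g. via cron) so failures alert long before a disaster:
# if not restore_test("/data/ledger.db", KNOWN_GOOD_HASH, Path("/tmp/restore")):
#     alert("Backup restore verification failed")
```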
Organizations must indeed look at prioritizing and implementing effective backup strategies, to protect themselves against data loss, ensure compliance with regulatory standards, and maintain operational resilience. A robust backup strategy is mandatory for organizations in an era when data is a critical business asset.
DELIVERING AI-DRIVEN INSIGHTS
In February 2024, Infoblox announced the launch of SOC Insights, an AI-driven security operations capability that boosts the company’s DNS Detection and Response solution, BloxOne Threat Defense. Mohammed Al-Moneer, Senior Regional Director, META at Infoblox shares his views.
Mohammed Al-Moneer, Senior Regional Director, META, Infoblox
Please explain the working of the SOC Insights capability.
SOC Insights applies AI-driven analytics to massive volumes of alert, network, device, user, and DNS threat intelligence data to quickly correlate events, prioritize them based on more than just ‘malware risk ranking’, and provide recommendations and tools to quickly resolve the threats that truly matter most. This helps reduce alert fatigue and analyst burnout and improves SecOps efficiency, enabling teams to do more with available resources. It extends to the rest of the security ecosystem as well, since these AI-driven insights can be used to trigger automated responses or be shared with other tools in the security stack to make them more effective.
For example, when an analyst starts work in the morning, rather than digging through hundreds of thousands of alerts in hopes of identifying the ones that need attention most, the SOC Insights UI has already analyzed these events, correlated them with network and other data, and grouped them into a much more manageable set of ‘insights’ that can be reviewed in a fraction of the time. (For instance, one customer received over half a million events, which SOC Insights distilled down to only two dozen.)
Once the analysts have identified the insight they want to work on next, they click on ‘Investigate Insight’ and are immediately taken to a portal where they can pivot around network, event, threat intelligence, and other data in whatever order they wish. This makes it much faster (and easier) to understand the full context around the insight to weigh its true risk, and better understand the work required to address it. A simple example is to consider an attack with high-impact malware, but it is only seen on the guest network. Another is when two types of phishing attacks are identified, and immediate, on-demand access to rich context data can help identify which of these could impact a larger number of users.
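To make the distillation step concrete, the toy sketch below groups raw alerts that share an indicator into a handful of reviewable 'insights'. It is a deliberately simplified illustration of event correlation, not Infoblox's actual analytics, which also weigh device, user, and DNS threat intelligence context.

```python
# Toy illustration: distill raw alerts into a few "insights" by grouping on
# shared indicators, so an analyst reviews groups instead of every alert.
from collections import defaultdict

alerts = [
    {"device": "host-17", "indicator": "lookalike-domain.example"},
    {"device": "host-23", "indicator": "lookalike-domain.example"},
    {"device": "host-17", "indicator": "dga-c2.example"},
    {"device": "host-42", "indicator": "lookalike-domain.example"},
]

# Group events by shared indicator; each group becomes one reviewable insight.
insights = defaultdict(set)
for alert in alerts:
    insights[alert["indicator"]].add(alert["device"])

for indicator, devices in insights.items():
    print(f"Insight: {indicator} seen on {len(devices)} device(s): {sorted(devices)}")
```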
How is Infoblox using AI, humans and data dynamics to work together to deliver useful and actionable insights?
Infoblox uses a combination of AI, human expertise, and data dynamics to identify and deliver actionable insights. The AI-driven analytics are trained by DNS experts who are skilled in cybersecurity and the nuances of DNS. The AI tools automatically collect network, ecosystem, event, and DNS threat intelligence data while filtering out irrelevant information and recognizing patterns that highlight what is most important. This process happens quickly and automatically within BloxOne Threat Defense, giving the SOC back the hours it could take a human analyst to collect, filter, parse, sort, and otherwise manipulate the data in other tools. Finally, by intelligently collecting only relevant data into threat research and insight investigation portals, our customers’ analysts can start their investigations immediately, leveraging available information on demand, without digging through individual alerts or waiting on NetOps for user and device information for context around threat activity. This way, Infoblox ensures that the insights delivered are both useful and actionable.
Why is this SOC Insights feature important?
Alert fatigue, analyst burnout, the skills shortage, and similar issues for the SOC all stem from the challenge of having too many security events every day and too much data to dig through to make sense of it all. SOC Insights is important because it helps security teams by automating much of the important yet time-consuming gathering and filtering of data. It then applies AI-driven analytics to this vast amount of data to distill and correlate hundreds of thousands of events into a more manageable set of ‘insights’, each connected to the relevant asset, event, threat, and other data analysts may need to quickly refer to, helping them understand threats and make informed, effective decisions fast.
FORTIFYING MOBILE APPLICATIONS
Subho Halder, Co-founder and CTO of Appknox discusses the transforming landscape of mobile application security
Elaborate on your focus on mobile application security.
Appknox is primarily focused on mobile application security. We are also rated as one of the leaders in the Gartner quadrant for application security. We work with a lot of enterprises, specifically banking and government services, to make sure that their applications are secure. One of our major differentiators is that we do not need the source code of the application for our work; rather, we do binary code analysis of the application. That is how we figure out the security issues.
What are Super Apps and what are the security challenges they could bring up?
Super App is a concept that essentially means a single app serving almost every need of the user. For example, a bank application may have e-commerce options embedded inside it. It can have a feature to pay bills, another to perform other transactions or services, and all of this is offered via a single app apart from the core services. In terms of security, this increases the threat surface area. For instance, let’s say you are a payment aggregator and are aggregating other commerce apps inside. The attack surface increases because it is no longer limited to your core offering. The vulnerability can also be inside those embedded apps. It becomes a little tricky for developers, companies, and organizations to coordinate and fix such vulnerabilities, and a problem in terms of whose responsibility it is to fix them.
We must understand the key difference in security between websites and mobile applications. With websites, if you identify a security vulnerability, you can just fix it. With an application that gets downloaded from app stores onto phones, you must publish a patched version of the application to the store in case of a bug. But it is not in your hands to ensure users update the application on their phones.
With Super Apps, even if we find a vulnerability, it becomes a little bit complex, to fix responsibility. Shared responsibility is one
of the problems, and even if the vulnerabilities get fixed, how can one ship out those fixes and make them accessible to customers? It must be either via an update or upgrade, and that becomes a challenge. And if we look at Super Apps, the applications inside them may not be any different than if you were to download a standalone e-commerce app. However, with a Super App, you could save some space on your mobile. Whether the concept of Super Apps will work or not is debatable, as it is still in the beta phase.
The ecosystem to build Super Apps already exists through APIs, SDKs, plugins, and the like, but the processes, implementation, governance, and privacy still need to be fixed.
How does your solution test application security?
Appknox is mostly an automated platform solution. The faster you figure out an issue, the cheaper it is to solve. We believe in the Shift Left concept, and our automated solution can discover issues before manual penetration testing can. It offers DevSecOps integration and app store monitoring. We monitor the Play Store and the App Store for vulnerabilities in the apps that have been published. When we diagnose a new vulnerability and those apps are affected, our solutions can get the apps taken down, immediately fixed, and then pushed back into the app store. That’s one of the solutions we have.
We also have a solution called SBOM, based on the software bill of materials, which lists all components used to build the application. An app is not just a piece of code but a combination of everything from code to components, plugins, SDKs, and analytics. SBOM provides visibility into the components used, helping identify vulnerabilities and deliver proactive threat mitigation. For instance, in a Super App, the e-commerce component may not be something the Super App provider has developed. They might have taken an SDK from an e-commerce retail platform. In such cases, somebody needs to know when an update is available in the SDK library and whether a new vulnerability has been fixed in a new version. This is what SBOM does, and for the first time in the world, it does this via binary code analysis and not through the source code. Binary analysis can examine third-party libraries included in mobile application SDKs, providing deeper knowledge of how apps interact with libraries for various purposes. This is how we help Super Apps stay secure.
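Conceptually, an SBOM-driven check is a join between the components listed for an app build and an advisory feed of affected versions. The component list and advisory data in the sketch below are invented for illustration; Appknox derives the component inventory through binary analysis rather than source code.

```python
# Minimal SBOM check: compare components extracted from a binary against a
# list of known-vulnerable versions. All names and versions are hypothetical.
sbom = [  # components an SBOM lists for one app build
    {"name": "commerce-sdk", "version": "2.4.1"},
    {"name": "analytics-lib", "version": "1.0.9"},
    {"name": "crypto-utils", "version": "3.2.0"},
]

known_vulnerable = {  # hypothetical advisory feed: component -> affected versions
    "commerce-sdk": {"2.4.0", "2.4.1"},
    "crypto-utils": {"3.1.0"},
}

for component in sbom:
    affected = known_vulnerable.get(component["name"], set())
    if component["version"] in affected:
        print(f"UPDATE NEEDED: {component['name']} {component['version']} is affected")
```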
How do you view the current threat scenario?
The threat scenario for mobile applications has evolved significantly, driven by the increased penetration of mobile devices and applications. Alongside, the mobile application security market is seeing significant growth as an outcome of this rise in threats. Appknox is a ten-year-old company, and ten years ago when we started, we were seeing the advent of mobile apps; now it is the era of Super Apps.
The difference with these Super Apps is that they hold a lot of your personal information and data, from logins to passwords. The mobile device is far more personal than even the laptop, and more precious because of the personal data it holds.
So, ensuring the security of the apps that hold your trusted information, from biometric credentials to financial data, is paramount.
Which industries could face the most threats?
Heavily regulated industries, banks, and the entire BFSI sector face the most significant threats. That is why they must secure themselves and their mobile applications.
Discuss your focus in the region
We cover the whole GCC region and have a good presence here. We work with many banks and count a lot of government entities, and entities from regulated industries among our customers. In addition to a cloud offering, we provide an on-premises solution preferred by the regulated industries because they are concerned about data sovereignty. And we offer that flexibility of deployment to our customers.
How big is the team at Appknox? Discuss the focus on R&D
We have over 65 employees, of whom 15 are in R&D. R&D is a huge focus. Since we don’t work on source code but rather on binary code analysis, that requires a lot of research and development effort. We also release a lot of white papers, contributing to the industry’s growth. We contribute to open source, help researchers, and offer tools in mobile application security. For us, it’s not only about business but also about contributing to the community and industry. This is something that makes me proud.
What are some of the new areas of focus?
We are now focusing on utilizing AI to detect cloned applications or phishing applications on the internet. If you are an Android user, you should be aware that there are many app stores or Android app releases that are not official versions. Yet people download those applications, which are not from the real developers but possibly from criminals. We are trying to help organizations whose apps may be cloned by cybercriminals to bring them down from the app stores or anywhere else on the internet with the help of AI.
Can you elaborate on the growth in demand for mobile application security?
Right now, the market is mature. Customers are looking for mobile app security vendors. This was not so even five years ago. The market demand then was primarily for web app security and API scans. The market was not ready, and we were there to educate it. I participated in many conferences to explain the difference between web security and mobile app security and how we specialize in the latter. The market has grown since then, and RFPs are coming up because organizations want to fix their mobile app security. Breaches are happening via mobile applications and the APIs they interact with. The market understands this growing threat, and customers are reaching out to companies such as ours.
How has the growth been?
We are seeing around 70% year-on-year growth. Since the lockdown days, growth has taken off as people became more digital in their interactions and transactions. Banks have also gone increasingly digital, and that too has driven the growth.
BREACHING NEW AI FRONTIERS
Ramprakash Ramamoorthy, Director of AI Research at ManageEngine discusses ManageEngine’s focus on enabling digital maturity and the focus on developing LLMs
Ramprakash Ramamoorthy, Director of AI Research, ManageEngine
What should be the current focus of IT or AI strategies at organizations and how is ManageEngine helping address this?
We’re focusing on enabling AI maturity, and that ultimately comes down to the ROI from AI. In many organizations over the last three to four years, AI has been heavily hyped and driven. Last year, we saw consumer-focused generative AI take the world by storm. Since then, much of the discussion has been around whether AI is going to take jobs away or replace people, but the real question is where the applicability lies in enterprises, especially within IT. While there’s a lot of hype about using AI in the product, in the company, and in the processes, many organizations are still unable to reap the expected benefits of AI. Realizing them requires digital maturity, which is a rolling target.
How do you attain Digital or AI maturity?
The first step is process maturity: you document everything and digitize everything. Even today, say an incident happens in an IT department; even though you have digital tools, the story of how you roll back a change or how you fix the issue may not be documented, because it’s not part of your process. So do a process study and ensure your processes are completely digitized. The second step is making your data ready, because AI is very data focused. If you get your processes right, your data will automatically be AI-ready. Very simple things like using consistent date formats and ensuring your data stack is analytics- and AI-ready are very important. The third aspect is how you use your current automation: do you have all the alerts in place? Do you have a way to monitor these technologies?
So process maturity, data maturity, and finally, your typical alerts and analytics are the three aspects to take care of and that is when your AI will become very mature, and it will start delivering ROI.
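To make the data-readiness point concrete, here is a minimal sketch of the kind of date normalization mentioned above; the formats and values are hypothetical examples for illustration, not ManageEngine’s tooling.

```python
from datetime import datetime

# Hypothetical mix of date formats found across different IT tools' logs.
KNOWN_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y", "%d-%b-%y"]

def normalize_date(raw: str) -> str:
    """Parse a date string in any known format and return ISO 8601 (YYYY-MM-DD)."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")

# Example: the same incident date recorded three ways by three tools.
print([normalize_date(d) for d in ["2024-05-14", "14/05/2024", "May 14, 2024"]])
# -> ['2024-05-14', '2024-05-14', '2024-05-14']
```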
Today, the goal is very simple: find out what the potential solution could be and, at every given point in time, what the next best course of action is, so you can decide whether to go ahead. It is highly contextual and very connected. That is where the enterprise will see value in AI.
Discuss your focus on LLMs?
At ManageEngine, we are building our own large language models and fine-tuning foundational models to assist at these different touchpoints. It won’t be a general-purpose model; there is a lot of context and a lot of first-party data access built into it. We recognize that large language models are very capable and have emergent capabilities that narrow models lack. We want to use these emergent capabilities where they would make an impact, but at the same time, they are expensive. You need lots of GPUs, and we don’t want to pass the GPU tax on to our customers. The idea is to use them contextually. I see these large language models becoming a commodity: there are so many open-source foundational models that anybody can access, but the value lies in contextually combining these models to give very relevant, contextual suggestions.
If you look at all the foundation models available, they are possibly trained on the same training data. While you can give a prompt to generate a video, you end up creating similar content because we are all reliant on the same data. You cannot differentiate on the models alone. Rather, the value is in context: combining these models with your first-party data access. We are fine-tuning these foundational models with specific contexts. We call it contextual intelligence, and it leads to decision intelligence as well.
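As a rough illustration of the pattern described here, the sketch below retrieves first-party records and folds them into a prompt before calling a model. Everything in it, including the `call_llm` stub, the sample records, and the naive keyword retrieval, is an assumption for illustration, not ManageEngine’s implementation.

```python
# Illustrative sketch: combining first-party data with a foundation model.

def call_llm(prompt: str) -> str:
    # Stand-in for whatever hosted or self-hosted model would be called.
    return "(model output would appear here)"

# Hypothetical first-party records, e.g. recent incidents for one customer.
RECORDS = [
    {"id": "INC-101", "text": "Mail service latency traced to full disk on node 7."},
    {"id": "INC-102", "text": "Login failures after certificate expiry on SSO proxy."},
]

def retrieve(query: str, records, k: int = 2):
    # Naive keyword overlap stands in for a real vector search.
    q = set(query.lower().split())
    return sorted(records,
                  key=lambda r: len(q & set(r["text"].lower().split())),
                  reverse=True)[:k]

def contextual_answer(question: str) -> str:
    context = "\n".join(r["text"] for r in retrieve(question, RECORDS))
    prompt = f"Context from our own systems:\n{context}\n\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)

print(contextual_answer("Why are logins failing?"))
```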
How can the top management be convinced of the ROI of AI?
When you invest in process maturity, data maturity, and your analytics and typical automation, you will see a baseline improvement in metrics like mean time to resolve, the number of incidents caught, or average downtime, and a better way to put it is to quantify it in terms of revenue. For instance, instead of saying we experienced downtime for two minutes a day, give the Board the business impact in terms of potential revenue lost because of the outage caused by the incident. Inform them that if we were to invest in the newer technologies available today, we could potentially avoid such instances. One way is to link it with dollars and show the impact, because IT has moved from the back office to the boardroom. Nobody wants an IT failure and reputational loss. So how do you help with reporting that helps negotiate such challenges? That’s why we have built an analytics manager, a very comprehensive analytics platform where you get data from services, from your security platform, and from your monitoring platform, with all of it converging into analytics. And that data becomes available via natural language.
You can run simulations and what-if analyses, ask questions in natural language, find the highlights, and so on. You can combine insights from your business, your IT, and your operations, because all three have become very agile with the cloud. We see all these possibilities for businesses converging into one big analytics pool.
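A worked version of the downtime-to-revenue translation suggested above, with invented figures purely for illustration:

```python
# Hypothetical figures: translate daily downtime into revenue at risk.
annual_revenue = 365_000_000          # $/year, assumed
revenue_per_minute = annual_revenue / (365 * 24 * 60)
downtime_minutes_per_day = 2

daily_loss = downtime_minutes_per_day * revenue_per_minute
print(f"~${daily_loss:,.0f} of revenue at risk per day, "
      f"~${daily_loss * 365:,.0f} per year")
# -> ~$1,389 per day, ~$506,944 per year
```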
How have you improved Zia?
Zia is now bringing in the power of large language models. Last year, all our models were either small or medium models; in fact, I would say we didn’t have medium models, only narrow models and small models. Narrow models could do just one thing at a time, like anomaly detection or forecasting. Then came smaller language models, which are multimodal and can contextually extract relevant information. While these are not groundbreaking on their own, they enable subtle fine-tuning that enhances employee productivity.
How do you see your AI focus from the perspective of go-to-market engagement and sales opportunities?
Today, every company is looking at its AI strategy. When they engage us as a vendor, they want to see what we can offer them in terms of AI. There are more sales conversations because of this maturity. And with the emergence of ChatGPT, we are also seeing increased adoption in terms of AI usage. Previously, it was more of a purchase checklist: do we have AI? Yes, we have AI. But now we are seeing the usage graphs go up.
How has the challenge been on the R&D front?
It’s been a very bumpy ride. Because things change so fast, it’s very difficult to stay up to date. For example, once these larger language models started kicking in, we could see the value, because they have an emergent behavior that narrow models didn’t have. Now we are building our own large language models, and that requires a lot of computing, which involves working with Nvidia and AMD to get their cutting-edge hardware into our data centers. It is expensive.
We have expertise in training AI models and in collecting data sets built over the last 12-13 years, but the newer challenge is hardware. How do you get all these GPUs? How do you ensure they work in tandem? How do you ensure the switches in your data center can share data at the speed at which the GPUs can execute the computing? We have had to upgrade. High-performance computing has become an integral part of the development process. It’s no longer a game of just data scientists: it involves data scientists, application teams, and customer-facing product management roles deciding what to do next and how to contextually integrate, and, very importantly, hardware that has to work very hard in the background.
UNLOCKING VALUE
Vijay Jaswal, Chief Technology Officer of APJ&MEA, IFS discusses the company’s commitment to helping organizations resolve their productivity, predictability and agility issues with their solutions
Elaborate on how the theme of ‘unlock business value, productivity, predictability, and agility’ at the recent IFS Connect Middle East underlines your customer focus.
The theme underlines the importance we attach to providing value to our customers. IFS is committed to helping organizations resolve their productivity, predictability, and agility issues and ‘unlock business value with Cloud and AI’. We provide that value by giving our customers tools that make their lives easier and their operations smoother.
We only sell to key industries: aviation, defense, manufacturing, telcos, energy, oil and gas, utilities, engineering and construction, and any services-rich company. Companies in all these verticals have one thing in common; they all have physical assets, be it an engine, a piece of machinery, or a pump in an oil and gas facility. Our technology ensures that all these assets continue to function smoothly, adding value and keeping our customers happy. If they’re not working, they add costs unnecessarily and customers are unhappy. We remove all wasteful activities from the processes and automate efficient processes that help the user.
Elaborate on the new AI capabilities of your platform.
In our upcoming release, 24R1, we’re releasing a host of AI capabilities, not just at the front end and not just copilots, but all the way through. We have various layers around data and data orchestration, bringing different data sources together. Data governance is a key focus area of AI, ensuring data is relevant, true to fact, and not copyrighted or owned by somebody else. The copilot capability is something we’re releasing later this month. If users want to find out some information about the process they are in, or the functionality of the system, historically they had to find the correct document, the correct manual, or the correct PDF, or go on the web, which consumes a lot of time. Now all they must do is ask the copilot for an explanation of, say, the manufacturing processes, and the copilot will access multiple sources, user manuals, technical manuals, and even our online community portal, and provide that information within seconds. User productivity is thus enhanced. While that is from the user’s point of view, at the backend we have been using AI for several years now in scheduling and optimization capabilities. For instance, it could be the ability to ensure that when you have many engineers, they turn up at the right place at the right time with the right tools to fix the problem, whether at a customer site, on a production line, or in an oil and gas facility. So, we have had this AI capability for several years; we have now enhanced it even more and added extra capabilities around simulating possible outcomes.
What’s the feedback from your users around the scheduling optimization features?
The feedback has been amazing when it comes to scheduling and optimization, because much of our competition doesn’t do this optimization in real time. For example, suppose a telco engineer has confirmed an appointment to be at your house the next day between 9 am and noon to install your new ADSL router, you have taken half a day off, and no one turns up, which is quite frustrating. If our customer were the telco provider and you were the telco’s customer, and the engineer was delayed by traffic, held up at the previous customer site, or hit by any other issue, this can be fixed with the IFS Scheduling Optimization engine. It can deduce in real time that an engineer is going to be late to visit you and then find another engineer close by. It won’t be a three-hour time window for the scheduled visit; it will be more like 15 minutes or half an hour, and the engineer will come between 10 and 10:15 am. If you can track where the engineer is, you can even go out for a quick cup of coffee, since you know when the engineer is going to arrive.
How does this take care of cost management concerns, and enhance sustainability?
To answer this, let me elaborate on my earlier example of the
"Companies in all these verticals have one thing in common; they all have physical assets, be it an engine, a piece of machinery, or a pump in an oil and gas facility. Our technology ensures that all these assets continue to function smoothly, adding value and keeping our customers happy. If they’re not working, they add costs unnecessarily and customers are unhappy."
telco engineer. There are a lot of variables in that example. If an engineer turns up without the right skill set, or with the wrong parts or the wrong ADSL router, it is a wasted journey. We’re big believers in increasing the first-time-fix element of an engineer’s work, so the engineer needs to go only once; that’s one side of sustainability. The other side is one of our capabilities called remote assistance. Here, before an engineer turns up at a job, the customer can take a picture of the device that’s not working with their phone, and the engineer can advise the customer on how to fix it. Or if an engineer does turn up but isn’t trained enough, rather than going back and bringing in a more experienced engineer, that engineer can simply call the experienced one, and with remote assistance, which works like augmented reality, the experienced engineer can step in and instruct what to do. That goes a long way from a sustainability perspective. In our new release, we’ve also made allowances for electric vehicles, because optimizing and scheduling electric vehicles versus petrol vehicles is different. While refuelling a petrol or diesel car takes one to three minutes, recharging an electric vehicle takes half an hour, if not longer, depending on the range. We build all these allowances into our system. And then we can also measure the carbon footprint, measure emissions, and more, conforming to ESG objectives.
How important are the ethical considerations factored into AI models? What determines the precision of AI forecasting?
There are many facets to ethics in AI. The quality of data is one such aspect. The ownership of data is another: who owns the data, are we allowed to use it, or is it copyrighted? There have been articles about people complaining that some AI engine was stealing the thesis they wrote at university, and so on. There are several angles and ethical concerns in terms of AI learning models. For instance, a previous company I worked with offered facial recognition software that we showcased at GITEX, but it wasn’t accurate because the model had been trained on European faces and wasn’t ready for the multicultural environment of GITEX. This underscores that the training of the model is also very important. On the data side, there are several layers, and one of them is the governance layer, which goes a long way to ensuring that the data that goes in is relevant, correct, and not copyrighted.
What is further on the immediate roadmap of IFS?
There will be many more vertical AI use cases from a copilot perspective. As I mentioned, we’re releasing IFS copilot, and we want to extend that capability to third-party copilots. So, for instance, in future releases, if users need to find out what the production schedule in the factories is today, they can do that from within Teams or PowerPoint, rather than having to go into the IFS environment. We’re also looking at enhancing our anomaly detection capabilities around asset management. We already have IoT- and sensor-enabled assets and predictive maintenance capabilities, but we want to push them further and improve them even more. You can expect a lot of enhancements on the AI front and also on the vertical side of our industrial AI.
STEADY EXPANSION
Aigerim Baktybekkyzy, Channel Manager at May Cyber Technology discusses the company’s focus on international expansion and developing next-generation SIEM and UABA solutions
Aigerim Baktybekkyzy, Channel Manager, May Cyber Technology
What are the solutions of focus for the company?
We are focusing on developing cybersecurity solutions. A network access control (NAC) solution and a security information and event management (SIEM) solution are the primary solutions of focus in our portfolio.
When did the company commence operations?
We started our journey as a company in 2005 in Turkey; in the Middle East, we have been operational for two years. We began by developing a network access solution, and we are the leaders in Turkey as far as NAC solutions are concerned. For SIEM, we do have a couple of local competitors. We have 80% coverage of the Turkish market and good penetration across most customer verticals, including government entities, the public sector, healthcare, banking, education, etc.
Discuss your focus on international expansion
Having firmly established ourselves as a leader in the Turkish market, we wanted to look at international markets. We wanted to target Middle East markets such as the UAE, Saudi Arabia, Qatar, Kuwait, and Oman, so we partnered with Bulwark Technologies to build our footprint across this region. We already have some customer references here, such as in the oil and gas sector. We are now starting to develop our business in the CIS region, including Azerbaijan and Kazakhstan.
Besides looking to go global, we are expanding our solutions range and, in this direction, have introduced a new solution called Captive Portal, which helps manage guest access in sectors such as hospitality and tourism. It can be used not only as an additional module to the NAC solution but also as a standalone WiFi hotspot solution.
By the end of the year, we also plan to launch a UEBA (user and entity behavior analytics) solution that will use machine learning to analyze behavior patterns and detect anything unusual. While we plan to be the first to launch such a solution in Turkey, we will also take it global and hope to compete with the current leaders. We plan to make the solution cost-effective to penetrate new markets and displace our competitors there.
Is R&D a major focus for the company?
Until 2023, we focused on direct sales, but since then we have focused on building a channel network through partners such as Bulwark Technologies. We want to concentrate on our products and work with partners to penetrate the markets. We currently have 40 engineers in our R&D team, and that is our strength.
Discuss your focus on SOC monitoring and SIEM
We offer an integrated platform that helps organizations effectively manage their security operations centers. We offer features like log reports, alerts, and analysis as part of the SIEM solution, and it also includes asset management. We are now redeveloping our SIEM solution for Turkey and other markets using machine learning, to ensure customers get the latest SIEM capabilities.
SECURING IDENTITIES
Tarun Srivastava, Technical Account Manager - India & South East Asia, Nexus discusses the range of PKI solutions from Nexus
Elaborate about your focus on identity issuance solutions
We are into identity management solutions, issuing identities to people and devices. For organizations of any size, we issue, manage, and utilize trusted identities for their workforce, workplace, and the Internet of Things (IoT). We can issue identities to all devices that accept public key-based or certificate-based identities. The uniqueness of our solution is that we can deal with all three types of identities. One is the visual identity that we show to other people. Then there is the digital identity that we use to access systems and services. And then there are physical access cards, like the PAN cards issued in India. We can issue all three types of identities from the same platform.
How’s the growth in this region for your solutions? What are the different opportunities you see?
It has been good. There have been many customer wins. We have clients in sectors including telecom, banking, government etc. Telecom companies, government entities, utilities, manufacturing, IoT service providers or equipment manufacturers would be our target clients.
For instance, PKI solutions from Nexus help secure communications between machines enabled with IoT sensors in a manufacturing setup and ensure security compliance. Nexus Smart ID IoT platform provides and manages the trusted identities required to secure Energy IoT systems (Smart Grid) applications and ensures security regulation compliance. On the automotive front, we can also secure vehicle-to-vehicle communication for connected cars. On the enterprise front, we also give PKI certificates for both employees and workplace devices. Nexus Smart ID helps issue digital and physical identities to the workforce. Our Smart ID Workplace helps automate enterprise certificate provisioning for both domain endpoints, such as machines and servers, and non-domain endpoints, such as dev ops servers, mobile devices, and networking devices.
How does your solution work in issuing identities?
Our solution builds a public key infrastructure. Cryptographic keys work in pairs: once you encrypt data with a public key, only the holder of the associated private key can decrypt it. You can’t even decrypt it using the same public key that encrypted it.
So, based on the use case, we use the keys in such a way that when you have to prove your identity, you sign something with your private key, and when someone tries to verify your identity, they verify it using the public key. Encryption works the opposite way: when someone has to encrypt something for you, they simply use your public key, and because you hold the corresponding private key, you can decrypt it. This is the basis of our solution. Using this technology, we assign key pairs to users and bind the public key to the user via their credentials; a certificate is essentially a user’s credentials along with a public key. Using that, the identity is established: wherever a user asserts an identity, he signs it with his private key, and it can then be verified with the public key. For telecom customers, a possible use case is device authentication. Whenever a device comes onto the network, it signs a random number (a challenge) with its private key, and the server verifies the signature against the challenge it issued. If it matches, device authentication is complete. Then a TLS (Transport Layer Security) session is established between the end device and the central server, using a key exchange built on the public key infrastructure. So it provides encryption, identity, and authentication.
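The challenge-response flow described above can be sketched with Python’s widely used cryptography package. This is a generic illustration of the mechanism, not Nexus’s implementation:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Device side: a key pair generated once; in a real PKI, the public key
# would be bound to the device identity in an issued certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Server side: issue a random challenge.
challenge = os.urandom(32)

# Device side: prove identity by signing the challenge with the private key.
signature = private_key.sign(
    challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Server side: verify with the public key; raises InvalidSignature on failure.
try:
    public_key.verify(
        signature,
        challenge,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Device authenticated; a TLS session can now be established.")
except InvalidSignature:
    print("Authentication failed.")
```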
TOWARDS EFFECTIVE THREAT MITIGATION
Jonathan Trull, CISO at Qualys discusses the evolving threat landscape and approaches to cybersecurity
Based on what you are seeing with customers, how do you think the cybersecurity threat landscape has evolved in the last 12-24 months?
The Qualys Threat Research Unit (TRU) recently released research delving into some of the critical vulnerabilities in 2023 and their impact on organizations. Some key takeaways from the research include:
• The mean time to exploit vulnerabilities in 2023 stood at 44 days (about one and a half months). However, this average masks the urgency of the situation due to the long-tail distribution of exploitation: 25% of vulnerabilities had exploits available on the very day they were published. This immediacy represents a shift in the modus operandi of attackers, highlighting their growing efficiency and the ever-decreasing window for response by defenders.
• A substantial 32.5% of the 206 identified vulnerabilities resided within the networking infrastructure or web application domains, sectors traditionally difficult to safeguard through conventional means.
• Of the 206 high-risk vulnerabilities Qualys tracked, more than 50 percent were leveraged by threat actors, ransomware, or malware to compromise systems: 115 were exploited by named threat actors, 20 by ransomware, and 15 by malware and botnets.
Discuss the impact and potential of AI from a defender standpoint
Between the exponential increase in the internet attack surface, adoption of the Internet of Things (IoT), Operational Technology (OT) and connected devices and the major shortage of skilled cybersecurity workers, the need for AI and automation in cybersecurity will be essential.
AI and machine learning are much needed in:
• Vulnerability management, to identify CVEs (Common Vulnerabilities and Exposures) in software and systems, enabling organizations to assess cyber risk and prioritize vulnerability remediation before cyber attackers take advantage of them.
• User behaviour analysis, detecting anomalous user behaviour through analysis of user activity logs and identifying patterns that could indicate a potential security threat (see the sketch after this list). This can help prevent insider threats and other types of cyber-attacks.
• Threat detection, using AI algorithms able to detect threats by analyzing vast amounts of data from various sources, such as log files, authentication events, user identity, and cloud and network traffic. These algorithms can identify patterns and anomalies that could indicate a security threat.
• Incident response automation by accelerating the response to an incident and enabling security teams to reduce investigation time and remediate security incidents more quickly. Furthermore, automated responses can be triggered when a threat is detected and take action to quarantine a compromised device and/or block malicious or suspicious traffic.
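As a deliberately simplified illustration of the user behaviour analysis point above, this sketch flags anomalous login sessions with scikit-learn’s IsolationForest; the features, data, and contamination rate are assumptions made for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [login hour, MB downloaded, failed logins]
sessions = np.array([
    [9, 12, 0], [10, 8, 0], [9, 15, 1], [11, 10, 0],
    [10, 9, 0], [9, 11, 0], [10, 14, 0],
    [3, 900, 7],   # 3 a.m. session with a huge download and many failures
])

model = IsolationForest(contamination=0.1, random_state=0).fit(sessions)
labels = model.predict(sessions)   # 1 = normal, -1 = anomaly

for row, label in zip(sessions, labels):
    if label == -1:
        print("Suspicious session:", row)
```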
Overall, AI and automation can support cybersecurity solutions in detecting threats and remediating incidents more quickly and more accurately than traditional tools. However, these technologies are not a “silver bullet” and should be utilized in conjunction with other cybersecurity measures, such as employee training and awareness and well-defined processes, to ensure a comprehensive and effective cybersecurity strategy.
How important is a Zero trust approach and security automation as part of an overall cybersecurity strategy?
The adage, “attackers only need to be right once, while defenders must be right all the time,” suggests there’s room for cybersecurity posture improvement. Simply put, adopting a Zero Trust approach and security automation will help any organization strengthen their cybersecurity posture through more robust authentication methods, micro-segmentation, and least privilege access controls. To satisfy Zero Trust mandates, companies need comprehensive visibility across all internal- and external-facing assets, a view of the complete business context such as asset criticality and dependencies, and an accurate cyber risk assessment of associated vulnerabilities and misconfigurations.
Moreover, the 2023 Qualys TruRisk Research Report found that automation could mean the difference between success and failure. Data-guided patch automation eliminates previously manual tasks and keeps an organization more secure by reducing the time in which threats roam free within the modern hybrid environment. The report found that patches that can be automated are deployed 45% more often and 36% faster than those that must be done manually. Automation isn’t just convenient; it is a substantial defensive security feature for getting critical assets quickly into the safety zone.
Overall, adopting a Zero Trust framework and security automation, as an integral part of your cybersecurity strategy, will allow you to be more effective in mitigating data breaches and insider threats.
Ransomware is obviously a hot topic. Can you elaborate on the best practices that are necessary to combat the threat?
Prevention is key to protection from ransomware. Many ransomware attacks start with attackers exploiting a known vulnerability to get a foothold in the environment. When organizations do not patch those vulnerabilities, even after they have been exploited, ransomware groups will exploit the same vulnerability against the same organization time and time again.
Research from the Qualys Threat Research Unit (TRU) found that less than 1% of weaponized vulnerabilities are actively used in ransomware campaigns. However, even though the number of ransomware vulnerabilities is low, they are highly targeted and repeatedly reused by ransomware groups. This means organizations need a more targeted remediation strategy that identifies which vulnerabilities to patch first to lower their risk of a ransomware attack.
As such, here are three actions organizations can take for an efficient ransomware prevention program:
1. Prioritize ransomware vulnerabilities
2. Focus on patching vulnerable external-facing assets
3. Automate browser and document reader patches
What are some of the biggest threats and challenges from an OT standpoint, and what should organizations be looking at in order to take a holistic approach to IT/OT security?
OT technology poses a unique challenge for those in charge of protecting it. OT technologies are historically older, sometimes no longer supported by the original vendor, and security tools are rarely certified to embed within the technology as they are with traditional software. OT technologies are also a prime target for nation-state actors, especially in a conflict, as they control key physical processes that an adversary would want to degrade. To take a holistic approach across IT/OT, the first step is to ensure you have a solid inventory of assets. Next, Qualys can detect vulnerabilities and misconfigurations passively, so that OT systems won’t be inadvertently taken offline. It is then a matter of prioritizing the riskiest vulnerabilities and misconfigurations for remediation. Network isolation is also a key strategy for protecting OT technologies.
What are the top two or three things organizations should be doing in 2024 in order to improve their cybersecurity posture?
To accurately assess the genuine risk presented by open vulnerabilities within their organization, it’s essential that businesses employ a comprehensive set of sensors, ranging from agents to network scanners to external scanners. In addition, it is imperative to thoroughly inventory all public-facing applications and remote services to ensure they are not exposed to high-risk vulnerabilities. And finally, I’d advise organizations to employ a multifaceted approach to the prioritization of vulnerabilities: focus on those known to be exploited in the wild (start with the CISA KEV), those with a high likelihood of exploitation (indicated by a high EPSS score), and those with weaponized exploit code available.
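A sketch of that multifaceted prioritization, combining CISA’s KEV catalog with FIRST’s EPSS scores. The feed URLs shown are the publicly documented endpoints at the time of writing and should be verified before relying on them; the second CVE is a hypothetical placeholder:

```python
import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
EPSS_URL = "https://api.first.org/data/v1/epss"

def prioritize(cves):
    # Set of CVE IDs known to be exploited in the wild (CISA KEV).
    kev = {v["cveID"] for v in requests.get(KEV_URL, timeout=30).json()["vulnerabilities"]}
    # EPSS exploitation-likelihood scores for the requested CVEs.
    data = requests.get(EPSS_URL, params={"cve": ",".join(cves)}, timeout=30).json()["data"]
    epss = {row["cve"]: float(row["epss"]) for row in data}
    # Patch KEV entries first, then rank by descending EPSS score.
    return sorted(cves, key=lambda c: (c not in kev, -epss.get(c, 0.0)))

print(prioritize(["CVE-2021-44228", "CVE-2023-12345"]))  # second CVE is illustrative
```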
How do you foresee cybersecurity budgets and investments changing in response to evolving threats and priorities in 2024?
In 2024, I think we will see security teams continue to closely evaluate their security investments and look for opportunities to consolidate. We are seeing more regulatory pressures, and this will drive more investments in governance, risk, and compliance, especially as it relates to risk quantification. Boards and executives are pushing CISOs to justify their security programs using common business language, with a focus on risk and risk mitigation. Equally important will be increased investments in resiliency controls.
AUTOMATION: THE KEY TO DATA PROTECTION IN THE CLOUD
Maher Jadallah, Senior Director Middle East & North Africa at Tenable writes that automation holds the key to data protection in cloud environments
Maher Jadallah, Senior Director Middle East & North Africa, Tenable
One of the core challenges of cloud computing is its complex security needs. As organizations traverse the digital landscape, the convergence of systems, applications, and users blurs traditional boundaries, requiring a rethink of traditional data and identity silos. Under the shared responsibility model for the public cloud, protecting identities and data is always the responsibility of the enterprise rather than the cloud service provider. Even in Software as a Service (SaaS), customers are still required to protect their own data, identities and application configurations.
Identity and access management (IAM) systems usually serve as the core method for defining access rights and permissions, enabling organizations to centrally manage authentication, Single Sign-On (SSO), and authorization across multiple systems and applications. Yet legacy IAM systems struggle to adapt to the dynamic nature of cloud environments, where access requirements change frequently. To address these challenges, the security industry has responded with innovative solutions designed to operate at cloud scale, such as cloud extensions to Identity Governance and Administration (IGA) offerings and the adoption of attribute-based or policy-based access control (ABAC or PBAC). The IAM landscape is further complicated by the introduction of new entities like containers, serverless architectures, and IoT devices. These entities present unique access challenges, necessitating innovative approaches to identity management. By leveraging data awareness and automation, organizations can streamline their IAM processes and mitigate security risks in cloud environments.
Data protection solutions have evolved on a similar path to IAM systems. Data Loss Prevention (DLP) solutions have enabled organizations to discover, classify, and monitor the movement of sensitive data within their networks, and apply adequate policies. As with identity and access management, new data-centric solutions were introduced to tackle the challenge of protecting data in the cloud. Cloud access security broker (CASB) systems, for example, can be used to identify unsanctioned cloud applications, monitor and control data, and encrypt traffic to the cloud through a centralized platform.
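To make the ABAC idea mentioned above concrete, here is a minimal, illustrative policy evaluation; real policy engines and their policy languages are far richer than this sketch suggests:

```python
# Minimal attribute-based access control (ABAC) sketch.
# A policy here is a set of attribute conditions that must all hold.
POLICY = {
    "action": "read",
    "resource.classification": "confidential",
    "user.department": "finance",
    "context.network": "corporate",
}

def is_allowed(request: dict) -> bool:
    """Grant access only if every policy condition matches the request attributes."""
    return all(request.get(key) == value for key, value in POLICY.items())

request = {
    "action": "read",
    "resource.classification": "confidential",
    "user.department": "finance",
    "context.network": "corporate",
}
print(is_allowed(request))                                    # True
print(is_allowed({**request, "context.network": "public"}))  # False: wrong network
```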
However, the effectiveness of DLP solutions hinges on constant data classification and policy refinement, which can be challenging in dynamic cloud environments. While CASBs can be used to restrict user access to an application, they don’t address visibility and management of identities and permissions in the cloud at the user, application and resource level. Fortunately, Cloud security posture management (CSPM) systems can help partially fill this gap, enabling organizations to continuously monitor cloud platforms and alert on misconfigurations and potential compliance issues. For example, CSPMs can detect misconfigured Amazon Simple Storage Service (Amazon S3) buckets that may expose organizations to the leaking or loss of sensitive data.
Much as identity-centric security lacks awareness of the data side, neither DLP, CASB, nor CSPM, nor any combination of these products, can provide integrated insight into identities. Similarly, making decisions based solely on the sensitivity of data, with no insight into user behavior and no contextual understanding of their actions, may result in misidentification of major risks, multiple false positives and disruption to business.
As organizations are required to constantly adapt their policies and controls, IT and human resources (and consequently, budgets) are pushed to their limits. Many of these organizations are approaching a tipping point where the scale and flexibility of cloud environments may be too much to deal with, resulting in increased exposure to risk. The key to addressing the challenge of managing identities and permissions in the cloud at the user, application and resource level is to introduce automation, thereby reducing the level of required human resources. Bringing down identity and data silos is essential for achieving this goal. By effectively leveraging data-awareness, we can establish a decision-making framework that distinguishes between legitimate and excessive permissions based on contextual understanding of the risk they pose to critical data or resources, and enforce least privilege policies accordingly.
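A simplified sketch of that data-aware, least-privilege logic: permissions that were granted but never observed in access logs are ranked by the sensitivity of the data they expose. All names, weights, and data here are illustrative assumptions:

```python
# Illustrative: flag unused permissions, ranked by data sensitivity.
SENSITIVITY = {"public": 1, "internal": 2, "confidential": 3}

granted = {  # permission -> sensitivity of the data it exposes (assumed inventory)
    "read:marketing-assets": "public",
    "read:hr-records": "confidential",
    "write:billing-db": "confidential",
}
used_last_90_days = {"read:marketing-assets"}   # derived from access logs (assumed)

excessive = sorted(
    (perm for perm in granted if perm not in used_last_90_days),
    key=lambda p: SENSITIVITY[granted[p]],
    reverse=True,
)
for perm in excessive:
    print(f"Candidate for revocation: {perm} ({granted[perm]})")
```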
The disconnect between identity-centric and user-centric security is deeply rooted in existing cybersecurity paradigms. To create the necessary paradigm shift, a new security model should introduce capabilities based on several key principles. Firstly, policies should ensure that users, applications, machines and services can access only the data and resources that are necessary for their legitimate purposes, per their current needs and status. The incorporation of data-awareness into an access management framework could significantly improve its least privilege posture through a more accurate, ongoing assessment of risk.
Next, as said before, automation is the ultimate prescription for scale issues. The process of creating and enforcing least-privilege policies (at least in the most common cases) should be done rapidly, at scale, and with minimal involvement of Dev or Ops teams. This way, organizations can gradually achieve least privilege while allocating other resources to resolving complicated permissions and investigating unknown access events. It is also important to remember that not all access permissions are equal. Given the number of access policies in modern cloud environments, you must be able to differentiate between how you handle each of them. The level of risk can be attributed to the sensitivity of the data, where it resides, the entity that holds the permissions, and so forth.
It is also crucial to identify the myriad of entity types that are accessing cloud resources. From human users to applications and bots, similar principles and logic should be applied to all entity types to ensure comprehensive security across any cloud environment, without impacting application continuity or speed to market. Most importantly, security systems should be able to identify and mitigate access-related risks with minimum disruption to normal business operations.
While security challenges in cloud environments will continue to evolve, a growing number of tools and measures can be implemented to mitigate risks and ensure a robust security framework is in place at all times. By leaning into automation to ease the burden of managing data policies, organizations will face fewer issues scaling their cloud environments and will be able to free up critical business resources.
HOW GENERATIVE AI ACCELERATES DIGITAL TRANSFORMATION
Lori MacVittie, F5 Distinguished Engineer, says the catalytic nature of generative AI generates significant impact, usually when it accelerates an existing trend
The disruption of a global pandemic and its initial impact on enterprise digital transformation momentum is behind us. But the rapid acceleration, the sudden shift to remote work, and a reliance on digital services should remind us of the impact of disruptive events as we eye up the arrival of generative AI on the scene.
The acceleration of enterprise digital transformation can be seen everywhere from DoorDash to the dominance of streaming entertainment to the establishment of a hybrid workforce, something that was unthinkable before 2020.
And we’re seeing it happen again with generative AI, sort of.
It’s not that generative AI isn’t cool—it is. And it isn’t that generative AI isn’t going to change a lot of things—from the way we work to the way we learn to the way we live life—it is. But on its own, generative AI isn’t any more useful than analytics. Both fail to produce value without a question in need of an answer. Its real impact is seen when it intersects with existing technologies.
The catalytic nature of generative AI generates significant impact, usually when it accelerates an existing trend.
Modern Applications
For example, modern applications were already set to overtake traditional applications in the next few years. But AI threw fuel on that fire, and modern apps are already approaching dominance of the enterprise portfolio, because AI is a modern app, and so are the applications being built to take advantage of it.
APIs
APIs were already racing toward the top of the priority stack for delivery and security. AI has made everything about APIs a critical priority, one likely to push general security off the throne, because most folks are building modern apps, which rely on APIs, and integrating AI services using, you guessed it, APIs.
Hybrid and Multicloud
Generative AI relies on significant compute, storage, and network resources, the kind of resources that will amplify the existing hybrid IT operating model and exacerbate the challenges of multicloud estates. The brains behind generative AI, the LLMs, are likely to live in a public cloud, but some will stay on-premises. And the apps being built to use those LLMs will be multicloud too. If you weren’t certain hybrid IT was here to stay, the reality of the resources required for training and inferencing, along with a healthy requirement to maintain the privacy of private data, is going to solidify the normalcy of the hybrid IT operating model.
AIOps
Generative AI is accelerating the shift to AIOps as well. It’s the tool AIOps was waiting for, and there is no dearth of solutions already finding ways to take advantage of this technology’s ability to generate content, code, and queries. In fact, generative AI will take us beyond today’s most mature method, automated scripts, to a state in which the system can not only execute the scripts but generate them, along with the correct policies to boot. It moves the needle for automation from “automated” to “autonomous.” The impact on operations will be profound; it won’t be fully felt for years, but it’s coming.
All of these factors will accelerate rapid changes to accommodate the needs of apps that leverage AI as well as the organizations that build and operate them. Privacy, security, and responsibility will drive innovation across every enterprise domain, but especially data, app delivery, and security.
But all of these—modern apps, APIs, multicloud, hybrid IT, and AIOps—were already trending upwards before OpenAI introduced ChatGPT. Generative AI simply accelerated the rate at which they were already heading. Which is pretty much what COVID did to enterprise digital transformation, except with AI we’re going to see a lot more change.
AI’s biggest impact is not going to come from its mere existence, but from how it impacts people, processes, and products.
THE ECONOMICS OF CYBERSECURITY INVESTMENT IN DIGITAL WORLD
Bharat Raigangar Board Advisor, 1CxO, vCISO CyberSecurist & Mentor at Finesse discusses the importance of carrying out a high-level analysis of crucial elements within the cybersecurity economy
In today's digital landscape, the significance of cybersecurity and cyber resilience cannot be overstated. With businesses worldwide contending with rising cybersecurity costs, it's crucial to navigate the complexities of establishing efficient cybersecurity ecosystems. This article aims to offer insights gleaned from over 28 years of experience and discussions with industry leaders, guiding organizations in maximizing return on investment (ROI) through strategic cybersecurity initiatives.
As governments and businesses increasingly rely on digital infrastructure, they face a dual challenge: while the digital realm presents opportunities, it also exposes them to growing cyber threats. These threats can manifest in various forms, including financial losses, intellectual property theft, operational disruptions, and reputational harm. Consequently, companies are investing heavily in advanced security solutions, underscoring the critical importance of prioritizing cyber resilience in today’s digital age.
Navigating the realm of cybersecurity economics entails a comprehensive cost-benefit analysis of implementing cybersecurity measures. This encompasses various cost elements, such as personnel, processes, technology, research, governance, compliance, intelligence, defense, training, insurance, and vendor management. Effectively managing these costs requires organizations to conduct thorough analyses and prioritize investments based on their unique risk profiles, business objectives, and budget constraints.
It's essential to recognize that there's no one-size-fits-all approach to cybersecurity. The optimal investment level can vary significantly depending on factors such as organization size, industry, regulatory environment, and risk profile. Hence, a strategic approach is necessary, aligning security demands with operational efficiency, risk appetite, and business growth.
It is important to carry out a high-level analysis of crucial elements within the cybersecurity economy, including market trends, investment patterns, the impact of digital transformation, the cost of cyber incidents, regulatory compliance expenses, and the role of cyber insurance. One also has to understand the importance of adopting a holistic approach to cybersecurity, covering risk assessment, management, awareness training, crisis response, investment in advanced solutions, and compliance.
Moreover, such an analysis explores various operating models and practices, such as outsourcing, insourcing, cloudification, hybrid models, and third-party vendor risk assessments. These approaches aim to optimize cost-effectiveness while enhancing security posture. Return on Security Investment (ROSI) emerges as a vital metric for evaluating cybersecurity effectiveness. By leveraging cyber risk quantification (CRQ) modeling, organizations can make data-driven decisions to optimize their cybersecurity budgets, achieving significant cost savings and risk reduction.
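One common way ROSI is computed, shown here with invented numbers purely for illustration: the reduction in annualized loss expectancy (ALE) attributable to a control, net of the control’s cost, divided by that cost:

```python
# Illustrative ROSI calculation with assumed figures.
ale_before = 1_000_000    # annualized loss expectancy without the control ($)
mitigation_ratio = 0.60   # fraction of that loss the control is expected to prevent
control_cost = 200_000    # annual cost of the control ($)

risk_reduction = ale_before * mitigation_ratio
rosi = (risk_reduction - control_cost) / control_cost
print(f"ROSI = {rosi:.0%}")   # -> ROSI = 200%
```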
In conclusion, organizations must invest wisely in cybersecurity, considering their unique risk profiles and strategic objectives. By doing so, they can build resilient cybersecurity postures that not only protect against threats but also contribute to long-term business success.
DELL POWEREDGE T160
The Dell PowerEdge T160 brings compact computing to small businesses and remote offices looking for powerful, dense configurations. With almost half the physical footprint of its predecessor (42% smaller), the stackable T160 also offers a lower carbon footprint via the increased use of sustainable materials, including an unpainted metal chassis. The server is up to 23% more power efficient compared to the previous generation.
It features Intel Xeon E-2400 processors, offering double the performance compared to the previous generation. The T160 is ideal for organisations looking to do real-time data processing at near-edge installations. For those working in harsh environments, the T160 is equipped with filter bezels, shielding the inner hardware from dust and grease particles and helping ensure unobstructed airflow for better performance and acoustics.
Dell’s PowerEdge T160 is a compact, efficient entry-level server designed with sustainability in mind. The T160 has an unpainted chassis designed to support recycled materials, reducing chemical usage and contributing to sustainability through several features.
The appearance of unpainted steel is utilitarian yet attractive and industrial. Embracing the durable zinc coating, the product has been left unpainted to reduce material and waste while leaving the steel protected. Careful choices were made to design parts that don’t require secondary processes like welding, grinding, and painting. This gives the metal aesthetic Dell typically strives for without the need for metallic paints. The T160 supports the use of recycled steel, and Dell will continue increasing the percentage of recycled steel used in this server.
Highlights:
• Up to 23% more power efficient compared to the previous generation.
• Features Intel Xeon E-2400 processors, offering double the performance compared to the previous generation.
• Equipped with filter bezels, shielding the inner hardware from dust and grease particles.
THINKPAD P1 GEN 7
The ThinkPad P1 Gen 7 is Lenovo’s ultraportable and high-performance mobile workstation designed for intensive machine learning tasks, providing the flexibility to work from any location. The premium aluminum construction of the device complements its powerful internals, showcasing cutting-edge AI technology.
Equipped with the latest Intel vPro, Evo Edition featuring Intel Core Ultra processors, an integrated NPU, and up to the NVIDIA RTX 3000 Ada Generation laptop GPU, the ThinkPad P1 Gen 7 delivers incredible power and performance. By harnessing the collective strength of the CPU, NPU, and GPU, this Lenovo workstation is optimized to meet AI processing requirements effectively.
The new laptop features a liquid metal thermal design in select configurations, which enhances cooling performance and long-term reliability, catering to critical workflows when complex tasks require maximum performance for extended periods. Equipped with a 16-inch display boasting a 16:10 aspect ratio, narrow bezels providing a 91.7% screen-to-body ratio, and the option to configure an OLED touch screen, the ISV-certified ThinkPad P1 Gen 7 is the ideal device for content creators, data scientists, game developers, and CAD application specialists. Tailored to meet the needs of modern hybrid workers, the P1 Gen 7 includes features such as low blue light displays, Dolby Vision, and color calibration for sharp, vivid images.
Highlights:
• The world’s first mobile workstation to include LPDDR5x LPCAMM2 memory, up to 64GB; LPCAMM2 delivers one of the fastest energy-efficient modular memory solutions for PCs.
• Optimized to meet AI processing requirements effectively.
• Equipped with a 16-inch display boasting a 16:10 aspect ratio and narrow bezels providing a 91.7% screen-to-body ratio.
AXIS Q9307-LV DOME CAMERA
Axis Communications announces a multipurpose dome camera combining sharp video, two-way audio, actionable analytics, and LED indicators to help improve safety, security, and operational efficiency. This all-in-one device makes it possible to optimize staff resources with proactive surveillance. For instance, it can be used for tele-sitting to observe patients in healthcare environments, or to remotely detect and respond to loitering in retail environments.
AXIS Q9307-LV Dome Camera comes with coughing-fit and stressed-voice analytics, adding an extra audible dimension to active incident management. It offers a great solution for remote monitoring and communication. With AXIS Live Privacy Shield, it’s possible to remotely monitor activities while safeguarding privacy in real time. In addition, it includes both LED indicators and an audio LED, so it is clear to see when the camera is recording or when audio is being used.
Including Lightfinder 2.0, Forensic WDR, and OptimizedIR, it delivers sharp 5 MP image quality under any light conditions. With four built-in microphones and a built-in speaker with echo cancellation, this all-in-one audio-visual device offers clear two-way audio communication with great noise suppression. This makes it easy to transmit and receive audio even from remote locations.
This multipurpose dome camera ensures a cost-efficient solution, and with just one device to install, it offers one-drop installation. Additionally, video and audio analytics are included at no extra cost. It’s vandal-resistant and can withstand daily wipe-downs with chemical detergents without deteriorating the plastic. Furthermore, Axis Edge Vault, a hardware-based cybersecurity platform, safeguards the device and protects sensitive information from unauthorized access.
Highlights:
• 5 MP video with two-way audio
• Preinstalled audio and video analytics
• Remote monitoring while safeguarding privacy
• Withstands chemical wipe-downs
• Cost-efficient all-in-one device
MORE ORGANIZATIONS EVOLVING THEIR D&A OPERATING MODEL BECAUSE OF AI
38% of CDAOs said that their D&A architecture will be overhauled over the next 12-18 months
Sixty-one percent of organizations are forced to evolve or rethink their data and analytics (D&A) operating model because of the impact of disruptive artificial intelligence (AI) technologies, according to a new Gartner, Inc. survey.
The annual Gartner Chief Data & Analytics Officer (CDAO) survey was conducted from September through November 2023 among 479 chief data and analytics officers, chief data officers (CDOs) and chief analytics officers (CAOs) across the world. “Responding to the rapid evolution of D&A and AI technologies, CDAOs are wasting no time in making changes to their operating model,” said Alan D. Duncan, Distinguished VP Analyst at Gartner. CDAOs are doing so to support data-driven innovation and accelerate organizational agility, with data governance at the core.
When asked about changes CDAOs need to make to their D&A operating model to be fit for current and future purpose, 38% of CDAOs said that their D&A architecture will be overhauled over the next 12-18 months. Twenty-nine percent of respondents said they will revamp how they manage data assets and adopt and apply governance policies, practices and standards.
CDAOs Are Expanding Their Responsibilities
“While the management of their organization’s D&A operating model is increasing year over year, no role other than the CDAO has responsibility for so many of the key enablers of AI, which include data governance, D&A ethics, and data and AI literacy,” said Duncan. “The scope of responsibilities of the CDAO role has also expanded as budget and resource constraints become even more of a problem.” Among the CDAO’s key responsibilities are managing the D&A strategy (74%) and D&A governance (68%). Being accountable for AI is also high on the CDAO’s agenda: the survey found that 49% of CDAOs said generative AI (GenAI) is within their scope of primary responsibilities, and AI is within scope for 58% of CDAOs, an increase from 34% in 2023.
CDAOs to Negotiate the Way D&A Is Funded
The expansion of responsibilities entails a significant cost for CDAOs. Among CDAOs who report a year-over-year increase in their function’s funding, 46% still report budget constraints as a challenge. “CDAOs who present better business cases to CFOs receive better and quicker funding for their D&A initiatives. They also gain higher executive buy-in,” said Duncan.
CDAOs must explain to the CFO how any change in D&A funding models aligns to the ratio of D&A value propositions as a utility, enabler or driver of the organization. “However, only 49% of surveyed CDAOs have established business outcome-driven metrics that allow stakeholders to track D&A value. In addition, 34% have not established business outcome metrics for D&A,” said Duncan.
CDAOs need to grow their power and influence to make things happen. They must also understand the organization’s value levers and pain points end to end to showcase their value to the board. “If not, by 2026, 75% of CDAOs who fail to make organization-wide influence and measurable impact their top priority will be assimilated into technology functions,” said Duncan.
CREATING A BOLD FUTURE FOR AFRICA
Vibin Shaju, General Manager – UAE, Trellix