A COMPLETE GUIDE TO THE BIGGEST TRENDS, CHALLENGES AND OPPORTUNITIES TO HELP SECURITY PROS COMBAT CYBERTHREATS IN 2025.
Our expertise, your Future advantage
Future B2B merges decades of expertise with the nimbleness of a startup. Our established brands, like SmartBrief, ActualTech and ITPro, deliver expert-led niche newsletters, cutting-edge advertising solutions, pipeline-enhancing lead generation, and unforgettable live and virtual events.
Delivering valuable and reliable content is the smartest way to engage and inform your audience
INFLUENTIAL READERSHIP
Future B2B’s qualified audience spans 16 industries, 200+ newsletters and nearly 10 million leaders.
SOLUTION-DRIVEN PRODUCTS
Gain access to a secure portal to view campaign results, including company- and persona-level engagement.
PERSONALIZED EXPERIENCE
Optimize your campaign with direct access to your Future B2B account management team.
ENGAGED AUDIENCE
Benefit from Future B2B’s proprietary email platform to reach a targeted audience in a brand-safe, contextually relevant environment.
Cybersecurity: Insights and strategies for 2025
DISCOVER THE LATEST TRENDS IN CYBERSECURITY AND LEARN HOW TO COMBAT INCREASINGLY SOPHISTICATED CYBERTHREATS.
In terms of cybersecurity, 2024 has been a busy year. Several cyber-related stories made headlines, including breaches at United Healthcare, Ticketmaster and Dell. The CrowdStrike incident swept across industries and continents, and even though it was not a cyberattack, it was a good reminder that security should be at the forefront for all businesses and their employees.
According to IBM and Ponemon Institute, the average cost of a data breach was up 10% year-over-year in 2024 to $4.88 million, which, according to the companies, is “the highest total ever.”
This ebook explores compliance, data governance, AI and the hybrid cloud through the security lens. It also delves into how software engineers are approaching AI tools and looks at ways to outsmart hackers without breaking the bank. It pulls the curtain back on what’s ahead for passwords and smart devices, examines workers’ stress and dissects different aspects of machine learning.
We need to arm our cyber pros with as much knowledge as possible because if we thought 2024 was busy, 2025 looks like it will be even busier as criminals look for new ways to attack and security professionals create new ways to defend.
On a brighter note, global information security end-user spending is expected to reach $212 billion in 2025, a 15.1% increase over 2024. Spending on generative AI tools is expected to be a big part of this boost in investments.
Check out this ebook to educate yourself on the latest trends and technologies for combating and recovering from attacks as well as how to prepare for what’s next in cybersecurity.
Susan Rush, Director of Content
CONTENT
Director of Content
Susan Rush susan.rush@futurenet.com
Global Content Director, B2B IT Maggie Holland maggie.holland@futurenet.com
VP of Content, SmartBrief
Melissa Turner melissa.turner@futurenet.com
Senior Design Director
Rosie Webber
SALES
Managing VP of Sales
Dena Malouf dena.malouf@futurenet.com
Associate Publisher, Information Technology and Cybersecurity
Brenna Smith brenna.smith@futurenet.com
Cybersecurity spending is going to surge in 2025; AI threats are a key factor
ENTERPRISES GLOBALLY HAVE ALREADY RAMPED UP CYBERSECURITY SPENDING TO CONTEND WITH NEW THREATS – AND THERE’S MORE TO COME IN THE YEAR AHEAD.
By Emma Woollacott
Cybersecurity spending will continue to rise in 2025 amid heightened concerns about the threat of AI, according to new research from Gartner.
Global information security end-user spending was on pace to reach $183.9 billion in 2024, according to the Gartner report, which predicts a 15.1% increase to $212 billion in 2025.
A key factor in this spending growth, Gartner said, is the continued adoption of generative AI tools that are boosting investments in security software markets. Application security, data security and infrastructure protection efforts are fueling this investment spree, the study noted, which will trigger a further spike in spending in 2025.
There’s already been a surge in 2024 in the use of large language models and associated tools to carry out large-scale social engineering attacks, Gartner said. Furthermore, the research firm predicts that 17% of cyberattacks or data leaks will involve generative AI by 2027. Meanwhile, as organizations continue to move to the cloud, Gartner expects an increase in cloud security solutions. The market for cloud-native solutions is also predicted to grow.
The combined cloud access security brokers, or CASB, and cloud workload protection platforms, or CWPP, market is estimated to reach $8.7 billion in 2025, up from a forecasted $6.7 billion in 2024.
“The continued heightened threat environment, cloud movement and talent crunch are pushing security to the top of the priorities list and pressing chief information security officers to increase their organization’s security spend,” said Shailendra Upadhyay, senior research principal at Gartner.
“Furthermore, organizations are currently assessing their endpoint protection platform and endpoint detection and response needs and making adjustments to boost their operational resilience and incident response following the CrowdStrike outage.”
CYBERSECURITY SPENDING SOARS ACROSS THE BOARD
Investment in the security services market – covering security consulting services, security professional services, and managed security services – is expected to grow faster than other security segments. This, Gartner said, is largely driven by the global skills shortage in the cybersecurity industry.
Gartner’s research aligns with previous analysis of cybersecurity spending published earlier in 2024. A study from IDC showed the threat from AI is expected to fuel a 12.3% rise in security spending across Europe.
“The increasingly sophisticated tools available to cybercriminals — now including generative AI — are transforming security from a technical requirement to a key strategic factor for companies across all industries to stay competitive on the market,” said Stefano Perini, research manager for European data and analytics at IDC.
Perini noted that spending is heightened in several key industry verticals, such as banking, defense, telecommunications and government, all of which are industries where cyberattacks can have “dire consequences on both business operations and the organization’s reputation.”
SPONSORED
Cybersecurity compliance evolves with AI and data governance
ADVISEUP GUIDES FIRMS THROUGH A COMPLEX CYBERSECURITY LANDSCAPE.
By Susan Rush
With data breaches on the rise, cybersecurity compliance is becoming increasingly complex. By 2025, organizations will need to address AI-related disruptions and stricter data management to meet regulatory demands. SmartBrief dug into the world of cybersecurity and compliance with Dorina Hamzo, founder and CEO of AdviseUp Consulting. Hamzo advises against check-the-box compliance, advocating for robust security programs and AI-driven tools.
As we look to 2025, what emerging technologies or practices are expected to significantly impact cybersecurity compliance requirements, and how should organizations adapt their strategies to stay compliant?
Dorina: To prepare for 2025, we first need to reflect on recent trends. In 2023 and 2024, we saw a staggering 72% increase in data breaches, according to the IBM 2024 Breach Report, with US companies facing an average breach cost of $9.36 million. This alarming rise in recovery costs is largely attributed to a growing shortage of cybersecurity expertise and increasing third-party breaches.
By 2024, AI adoption surged to 72% (McKinsey), presenting both opportunities and threats to cybersecurity. Regulators are responding to these challenges with more stringent inspections, while new regulations like the EU AI Act and the SEC Cybersecurity Risk Management rule impose stricter compliance requirements on organizations. Given these dynamics, by 2025, we can expect the following key impacts on compliance requirements. Keep in mind this is not an exhaustive list.
1. AI technology disruptions: Organizations will need to demonstrate responsible AI use, including transparency and bias mitigation, to meet compliance expectations. Additionally, AI security tool adoption will become a great mitigation to the resource shortage and will provide greater detection and prevention capabilities for security threats.
2. Stricter data management: As data breaches continue to escalate, implementing robust data governance practices is essential for reducing incidents and ensuring compliance with privacy regulations. Notably, breaches involving shadow data take an average of 26.2% longer to identify and 20.2% longer to contain compared to those without such data. This highlights the critical need for organizations to maintain visibility and control over all data assets.
3. The need for third-party risk management: As third-party breaches contribute significantly to data incidents, organizations will need to enhance their vendor management processes to ensure compliance and security.
4. Shortage of cyber-resources: The ongoing cyberskills shortage, which has affected over half of breached organizations, highlights the need for increased investment in cybersecurity training and personnel.
5. Adoption of more controls: Companies will likely need to adopt additional security controls and frameworks to meet regulatory demands and bolster their defenses.
In what ways can organizations improve their preparedness to avoid the need for rescue missions in compliance areas like data privacy, cybersecurity and internal controls?
Dorina: Avoid check-the-box security and compliance activities, which traditionally involve implementing ineffective controls, engaging ineffective auditors and buying tools in the hope they will solve the problem. Taking the time to set up robust programs from the beginning will prevent constantly being in firefighter mode. Consider the following priorities:
• Invest in establishing security and governance processes and controls. This needs to be done before investing in process automation. Some examples include proper risk management, vendor oversight, incident response, access controls and vulnerability management.
• Invest in AI-driven compliance tools to enhance monitoring and response capabilities. As mentioned above, make sure the existing processes and controls are mature before investing in tools.
• Develop comprehensive data governance frameworks to manage data and ensure it remains private and secure.
• Prioritize ongoing training for the organization and security resources to stay current with emerging risks and respond quickly to threats. A high level of employee training reduces the cost of data breaches by about 18%.
• Adopt a framework and obtain external certifications such as ISO 27001, ISO 42001 or HITRUST to obtain and maintain assurance over compliance.
How can organizations ensure that compliance does not hinder their long-term strategic goals?
Dorina: Compliance impacts both short-term and long-term strategic goals. Security incidents and regulatory violations will erode the value of the company and cause employees to be distracted from achieving long-term goals. Therefore, it is important to integrate security and compliance objectives into the overall company strategy. To balance all of the competing priorities, companies should invest in understanding the current state of the security and compliance posture and develop a road map of security and compliance improvements that align with the company’s goals and risk profile.
Why are companies investing more and more in cybersecurity while the instances of data breaches and regulatory penalties are not decreasing?
Dorina: Many organizations struggle to build effective security programs due to a lack of understanding about best practices and ineffective resource selection.
Additionally, inadequate data management contributes to vulnerabilities that are exploited by hackers. Lastly, some companies do not view controls as a way to become more secure. Instead, they view them as a compliance check-thebox activity. That leads to wasted spend and gaps in protection at a time when attacks are getting more sophisticated.
What distinguishes AdviseUp from its competitors in the cybersecurity and compliance space?
Dorina: At AdviseUp, we are more than just consultants—we are your partners in navigating the complex worlds of audit, risk and compliance.
We have decades of practical, on-the-ground experience building and implementing programs with a diverse range of organizations, from Fortune 300 companies to nonprofit entities. We have built and managed over 20 programs from the ground up; remediated several material weaknesses and significant deficiencies; had zero data breaches at companies where we managed the risk and internal control programs; and received certifications in record time for companies where we performed their readiness assessment and managed their ongoing compliance with SOC, ISO, HITRUST, PCI and NCQA.
Our approach:
• Building programs from the ground up
• Accountability and integrity are everything
• We see ourselves as a part of your business
Dorina Hamzo is the founder and CEO of AdviseUp Consulting LLC, which offers audit, risk and compliance services. Before that, she held VP and chief audit and risk officer roles at several Fortune 500 companies with a global presence. She has authored publications in Internal Auditor magazine and for ISACA.
Security and compliance concerns are driving the shift to hybrid cloud
ENTERPRISES ARE FLOCKING TO HYBRID CLOUD DUE TO RAPIDLY CHANGING REGULATORY REQUIREMENTS.
By Emma Woollacott
UK organizations are increasingly moving away from on-premises environments in favor of managed hosting services.
According to technology research and advisory firm Information Services Group, large and mid-market firms are shifting their cloud environments away from the public cloud and back to the private cloud, or to a hybrid model.
The 2024 ISG Provider Lens Private/Hybrid Cloud—Data Center Services report found that change is being driven largely by the growing need for strengthened security and compliance with changing regulations.
“Service providers can assist mid-market enterprises in developing these hybrid cloud strategies,” said Anthony Drake, ISG partner, North Europe.
“They offer reduced deployment costs, disaster recovery solutions and access to modern computing technologies such as serverless architecture, database as a service and DevOps practices,” Drake said.
Many UK enterprises are gradually shifting their focus from on-prem environments to outsourced services such as managed hosting for data storage and business continuity purposes.
This, ISG said, reduces the burden of operating a private data center while maintaining control over the hosted data. It also provides enterprises with extra flexibility and features, such as multi-cloud connectivity, low-latency network connectivity, bare metal services, platform-agnostic operating systems, and database support.
Meanwhile, the report noted that data centers in the UK are positioning themselves as connectivity hubs, offering access to advanced communication infrastructure such as fiber, dark fiber, internet exchange points, or IXPs, and subsea cables.
“Large enterprises in the UK are looking for providers who can offer reliable uptime, secure data storage and high-performance network connectivity,” said Jan Erik Aase, partner and global leader, ISG Provider Lens Research.
“The priorities for midsize enterprises lean toward comprehensive technical support, including 24/7 monitoring, rapid incident response and proactive maintenance that will ensure maximum uptime and availability.”
With rising energy costs and general economic pressures, ISG found many UK companies are struggling to achieve the sort of financial stability they need to adopt new technologies such as AI.
The firm advised focusing on cloud optimization, with FinOps offering the best solution. Organizations should also resist the “London-first” approach and consider data centers in other regions such as Manchester, Slough and Birmingham, which may be more cost-effective.
Similarly, rather than hiring new staff, they should train their existing workforce, perhaps with help from their cloud service provider. They should also keep a careful eye on potential security gaps in their infrastructure.
“Enterprises that grow complacent about security may face a rather rude awakening,” the researchers warned.
Turn today’s trends into secure strategies.
Your partner in developing actionable, year-round solutions.
Technical expertise
Proven track record
Long-term relationships
Software engineers are in for a rough ride as AI adoption ramps up
GARTNER PREDICTS THE TECHNOLOGY WILL CREATE NEW ROLES THROUGHOUT SOFTWARE ENGINEERING AND OPERATIONS BUT UPSKILLING WILL BE REQUIRED.
By George Fitzmaurice
AI is changing the software engineering profession. To keep pace with the growing demands of generative AI, 80% of the software engineering workforce will need to upskill by 2027, Gartner predicts. This need to upskill can be broken down into the short, medium and long term, as Gartner predicts that AI will play an increasingly large role in engineering tasks.
In the short term, AI tools will create modest increases in productivity by supporting a developer’s existing work, the consultancy said, with this process already underway. The most significant benefit in this time frame will be for senior developers.
After this stage, AI agents will start to have an impact on engineering. By allowing for the full automation of certain tasks, this evolutionary period for both the technology and the profession will see the emergence of “AI-native software engineering,” in which most code is AI-generated.
“In the AI-native era, software engineers will adopt an ‘AI-first’ mindset, where they primarily focus on steering AI agents toward the most relevant context and constraints for a given task,” said Philip Walsh, senior principal analyst at Gartner.
From a skills perspective, Walsh added that natural language prompt engineering and retrieval augmented generation, or RAG, skills will become essential for software engineers at this point.
In the long term, the industry will see a rise in AI engineering as the technology becomes more powerful. Organizations will need an increasing number of skilled developers to meet the demand for AI software, the consultancy suggested, as enterprise adoption rates continue rising.
“Building AI-empowered software will demand a new breed of software professional, the AI engineer,” Walsh said.
“The AI engineer possesses a unique combination of skills in software engineering, data science and AI/machine learning, skills that are sought after,” he added.
Gartner advised that organizations invest in AI developer platforms, as these could help to more effectively build AI capabilities. Walsh added that such an investment would require the upskilling of data engineering and platform engineering teams.
Despite speculation that AI may reduce the demand for human engineers, Walsh said the technology will not replace humans outright even as it changes the role of software engineers.
“Human expertise and creativity will always be essential to delivering complex, innovative software,” Walsh said.
While Gartner suggests the inevitability of AI upskilling, big firms are increasingly seeking engineers who can take on the challenge sooner rather than later.
In early 2024, for example, research from AWS revealed that some employers were willing to pay a 31% premium on salaries for tech workers with AI skills.
Other research from job firm Indeed revealed that AI skills could secure workers some of the most financially rewarding jobs, with half of the highest-paid skills related to AI and offering an average salary of $174,000.
Cyber professionals are stressed
ISACA report shows stress is up in cybersecurity roles
ISACA’s recent survey of over 1,800 cybersecurity professionals highlights a rise in stress levels due to a more complex threat landscape in 2024. The report points out challenges such as understaffed teams and difficulties in hiring qualified candidates, underscoring the growing pressures faced by those in cybersecurity roles.
[Infographic callouts: the percentage of cybersecurity professionals who say their jobs are more stressful than five years ago, and the percentage who leave the profession due to high stress at work.]
Top reasons for increased stress
1. Threat landscape is increasingly complex (81%)
2. Budget is too low (45%)
3. Hiring/retention challenges have worsened (45%)
4. Staff are not sufficiently trained/skilled (45%)
5. Cybersecurity risks are not prioritized (34%)
Source: ISACA State of Cybersecurity 2024
Outsmart AI-savvy hackers without breaking the bank
THREE PRACTICAL, BUDGET-FRIENDLY SOLUTIONS TO ADAPT TO EVOLVING THREATS.
By Michael Domingo
IT pros and cybersecurity experts are tasked with fighting the ongoing battle against tech-savvy threat actors, and AI has only made it more difficult to keep fighting the good fight. If the last few years are any indication, hackers will find more innovative ways to infiltrate our systems into 2025 and beyond.
Cybersecurity teams can turn to several AI-enabled solutions, but those with limited budgets may need to face their hackers with minimal resources and a lot of due diligence.
THREATS FROM INSIDE AND OUT
It’s no secret that hackers are getting better at cyberattacks and infiltrating businesses and computer systems. Before we talk solutions, let’s look at some notable attacks on companies from the last few years in which the attackers leveraged AI.
French users were the targets of malware used in an email hacking campaign, and that malware may have been written with the assistance of generative AI. One possible giveaway was that the code was thoroughly commented, which is uncommon in human-written code.
AI was also suspected of being used in an Activision Blizzard breach in December 2022. The breach targeted several employees through an SMS-based phishing campaign that tricked them into giving up their two-factor authentication code. The incident, which resulted in a hacker gaining access to internal systems, could have derailed Microsoft’s eventual acquisition of the company at the end of 2023.
T-Mobile can be considered a dart board for hackers. The company was reported to have been breached by an AI-powered API, which resulted in 37 million records being stolen, and that was just one incident.
More recently, the company agreed to pay an FCC fine for four breaches, the first of which occurred in 2021.
The year 2024 alone has been a record year for breaches that have affected all types of businesses, from telecom to health care to education to local governments. In many of those incidents, AI is likely to have played a part, especially in making attacks seem real through deepfakes, cloned voices, videos and seemingly legitimate, well-written business memos.
The ease with which hackers are gaining access to data is a call to cybersecurity professionals to practice the utmost due diligence, and that means companies should copycat AI-using fraudsters by using AI to view their attacks from the hacker’s perspective.
“For the strongest defenses, the future lies in the ability to adopt the perspective of attackers, who will continue to rely more heavily on AI,” writes Dilip Bachwani, chief technology officer at Qualys, in an opinion on Dark Reading. “By analyzing internal data alongside external threat intelligence, AI can essentially map out our digital landscape from an attacker’s point of view,” Bachwani says.
SECURE COMPUTING STARTS WITH THESE 3 SOLUTIONS
Due diligence to stay on top of threats is key, and so is prioritizing the right tools for the job. The good news is there are ways to bulletproof your systems without breaking the bank. Experts agree that the three solutions outlined below are among the most effective for keeping the wolves at bay on a budget. They are practical and can adapt to evolving threats:
Threat detection and response systems: AI has provided a means for modern threat detection systems to process vast amounts of data and detect patterns, functioning as a cybersecurity army combing that data. Modern TDRs can respond in real time as soon as threats have been detected. AI-based solutions that use the power of large language models can learn from incidents to improve and evolve as hackers enhance their capabilities.
You’ll want a TDR that mounts a proactive defense, one that can respond immediately to zero-day exploits, ransomware and other attacks and one that gives AI the task of sifting through humongous data sets to flag any odd activity.
It’s a tried-and-true solution, and one shining example is the Qatar World Cup 2022, where a TDR was used to flag a hacker planting several hacking tools on its networks.
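To make the idea concrete, here is a minimal sketch of the kind of anomaly flagging an AI-assisted TDR performs, using scikit-learn’s IsolationForest on synthetic login-event data. The features, figures and thresholds are invented for illustration and are not drawn from any particular product:

```python
# Hypothetical sketch: the kind of anomaly flagging an AI-assisted TDR performs.
# Features per event (all synthetic): login hour, MB transferred, failed attempts.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate a month of "normal" activity: business-hours logins, modest transfers.
normal = np.column_stack([
    rng.normal(13, 2, 1000),   # login hour centered on early afternoon
    rng.normal(50, 15, 1000),  # MB transferred per session
    rng.poisson(0.2, 1000),    # failed login attempts
])

# A few suspicious events: 3 a.m. logins moving large volumes after many failures.
suspicious = np.array([
    [3.0, 900.0, 6],
    [2.5, 1200.0, 8],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

for event in suspicious:
    verdict = model.predict(event.reshape(1, -1))[0]  # -1 = anomaly, 1 = normal
    print(event, "-> FLAG" if verdict == -1 else "-> ok")
```

A real TDR streams far richer telemetry, but the pattern is the same: train on normal behavior, then flag outliers in real time.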
Zero-trust architecture with AI-driven access control: Here’s a cybersecurity model that allows organizations to deal with workforces, remote devices and cloud environments that take remote computing into account. With zero trust, no one inside or outside of the network is trusted by default. AI takes it one step further by continuously monitoring and authenticating users and devices by analyzing user behaviors as they gain entry, while using the system and working with data, and even the methods by which they might exit. AI-driven access control also takes into account how users might move laterally across a network, which is a common method of disguising one’s actions, and can even restrict such access.
A former Amazon Web Services engineer working for Capital One took advantage of a misconfigured web application firewall to access Capital One’s AWS servers, stealing data on 100 million people. Fortunately, the company had a TDR that helped to thwart the threat before data could be leaked publicly, but an AI-aided zero-trust solution could have flagged the threat earlier in the process once it detected the engineer’s seemingly normal meanderings around the servers.
Insider threats like this one are difficult to catch early on and can be especially difficult if some of the information exfiltrated has been manually processed, like stealing credit card information by simply making copies on a copy machine. This, unfortunately, is untraceable if the copy machine used is air-gapped. Even so, insiders who use a computing device to perform a hack will often leave some sort of electronic trail. That’s where AI-aided zero-trust solutions can help with your cybersecurity due diligence.
Automated patch management and vulnerability detection: Companies with rudimentary cybersecurity best practices know about this one – or should – and even companies armed to the teeth with cybersecurity solutions have safeguards like this in place by default. Hackers will jump on system vulnerabilities once they’re discovered, and AI is likely to help them develop a number of exploits for those flaws.
AI-enabled patch management systems can rapidly scan systems, prioritize critical vulnerabilities and ensure patches are applied to keep exploits at bay. In addition, AI can automate the process in accordance with company policies. If implemented with a good corporate policy in place, it’s a proactive approach with minimal effort and one less security issue to worry about.
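As a rough illustration of the prioritization step such tools automate, consider the sketch below. The hosts, CVE identifiers, scores and the doubling rule for internet-facing systems are all hypothetical stand-ins for whatever a company’s actual policy dictates:

```python
# Hypothetical sketch of the prioritization step an automated patch-management
# tool performs: rank findings so critical, internet-facing flaws are patched first.
findings = [  # invented example data
    {"host": "web-01", "cve": "CVE-2025-0001", "cvss": 9.8, "internet_facing": True},
    {"host": "db-02",  "cve": "CVE-2025-0002", "cvss": 7.5, "internet_facing": False},
    {"host": "web-03", "cve": "CVE-2025-0003", "cvss": 5.3, "internet_facing": True},
]

def priority(f):
    # Exposure doubles the urgency; real thresholds would live in policy config.
    return f["cvss"] * (2.0 if f["internet_facing"] else 1.0)

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f['host']}: {f['cve']} (score {priority(f):.1f})")
```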
WHY THESE 3?
Of course, there are other AI-enabled solutions up and down the cybersecurity tools ecosystem, but not many companies have unlimited budgets. That is why AI-powered threat detection, zero-trust architecture and automated patch management are the bare minimum organizations need to maintain cybersecurity resilience. It is also critical to stay ahead of AI-savvy hackers and put them on notice if they dare try to infiltrate your systems.
Password trends: Should you feel safe with that complicated password?
SECURITY EXPERTS SAY PASSWORD COMPLEXITY CAN INADVERTENTLY FOSTER COMPLACENCY AND UNSAFE PRACTICES.
By George Fitzmaurice
Overly complex passwords are not just ineffective but dangerously insecure, according to the latest National Institute of Standards and Technology (NIST) guidelines.
Humans often choose easily guessed passwords for the sake of memorability, NIST said, meaning many online services have introduced rules that demand a certain level of complexity.
For example, many services require users to create passwords that contain a mix of character types, such as numbers, uppercase letters and symbols.
“However, analyses of breached password databases reveal that the benefit of such rules is less significant than initially thought, and the impacts on usability and memorability are severe,” NIST said.
NIST cites research showing that users respond in predictable ways to password composition requirements, therefore undermining the intended security payoff.
“For example, a user who might have chosen ‘password’ as their password would be relatively likely to choose ‘Password1’ if required to include an uppercase letter and a number or ‘Password1!’ if a symbol is also required,” NIST said. Complex passwords also introduce a new vulnerability: as they are far less memorable, NIST warned, users are more likely to write them down or store them electronically in an unsafe way.
NIST instead recommends an approach based primarily on password length, though it was keen to emphasize that many attacks associated with passwords are affected by neither complexity nor length, such as phishing or social engineering.
“The complexity of user-chosen passwords has often been characterized using the information theory concept of entropy. While entropy can be readily calculated for data with deterministic distribution functions, estimating the entropy for user-chosen passwords is challenging, and past efforts to do so have not been particularly accurate,” NIST said.
“For this reason, a different and somewhat more straightforward approach based primarily on password length is presented herein,” it added.
As the size of a hashed password is independent of its length, there is no reason to prohibit the use of lengthy passwords, NIST said, though extremely long passwords could take longer to hash. Users should make their passwords as lengthy as they want, it added.
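A quick way to see NIST’s point about hashed passwords is the sketch below: whatever the password’s length, the stored digest is the same size. This uses raw SHA-256 purely for illustration; production systems should use a slow, salted password-hashing function instead:

```python
# Sketch of NIST's point: the stored digest is the same size no matter how
# long the password is, so there's no storage reason to cap password length.
import hashlib

short_pw = "Tr0ub4dor!"
long_pw = "correct horse battery staple " * 4  # a long passphrase

for pw in (short_pw, long_pw):
    digest = hashlib.sha256(pw.encode()).hexdigest()
    print(f"{len(pw):3d} chars -> {len(digest)} hex digits: {digest[:16]}...")

# Both print 64 hex digits. (Real systems should use a slow, salted
# password-hashing function such as bcrypt, scrypt or Argon2, not raw SHA-256.)
```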
NIST concluded that length and complexity requirements beyond those it recommends only serve to increase user frustration and act counterproductively.
ARE PASSWORDS ON THEIR WAY OUT?
This latest move places yet more pressure on users with regard to password security. Some industry stakeholders insist passwords are an increasingly antiquated security measure that can be cracked in a “matter of minutes.”
Research from Kaspersky pointed to this in a 2024 report, which found 45% of passwords could be guessed in under 60 seconds, based on a sample size of 193 million compromised passwords.
Little wonder then that some big names are calling to get rid of them altogether.
A host of big tech firms including Microsoft, Apple and Google have been moving toward a passwordless future for several years now. In particular, these companies have been exploring potential alternatives such as passkeys.
Oracle Chief Technology Officer Larry Ellison claimed staff at the cloud computing giant won’t be using passwords a year from now, owing to their fundamental insecurity.
“The idea that we use passwords is a ridiculous idea. It’s obsolete. It’s very dangerous,” Ellison said.
DevSecOps teams are ramping up the use of AI coding tools, but they’ve got concerns
WITH THE VOLUME OF AI-GENERATED CODE GROWING, DEVSECOPS TEAMS ARE QUESTIONING THEIR TESTING POLICIES AND PROCESSES.
By Emma Woollacott
While a plethora of organizations globally are now using AI in their software development processes, new research shows that DevSecOps teams are worried about the growing array of security risks.
In a recent survey by Black Duck Software, 9-in-10 developers reported using AI coding tools in their daily workflow and highlighted the marked benefits of integrating AI within the development lifecycle.
The sectors most enthusiastic about AI-generated code development are technology, cybersecurity, fintech, education and banking/financial, the study found.
Even in the nonprofit sector, traditionally less of an early adopter, at least half of organizations surveyed reported that they were using AI.
Yet despite the excitement surrounding AI coding tools, developers and software engineers have reported serious issues. Two-thirds of respondents said they’re growing increasingly concerned about the security and safety of AI-generated code.
“AI is a technology enabler that should be invested in, not feared, so long as the proper guardrails are being prioritized,” said Jason Schmitt, CEO of Black Duck.
“For DevSecOps teams, that means finding sensible uses to implement AI into the software development process and layering the proper governance strategy on top of it to protect the heart and soul of an organization – its data.”
When it comes to security testing, about one-third of DevSecOps teams cited the sensitivity of the information being handled, industry best practices, and easing the complexity of testing configuration through automation as main priorities.
Most survey respondents (85%) said they had at least some measures in place to address the challenges posed by AI-generated code, such as potential IP, copyright, and license issues that an AI tool may introduce into proprietary software.
However, fewer than a quarter said they were “very confident” in their policies and processes for testing this code.
DEVSECOPS TEAMS ARE BEING HAMPERED BY TESTING HURDLES
The big conflict here appears to be security versus speed considerations, with around 6-in-10 reporting that security testing significantly slows development. Half of the respondents also said that most projects are still being added manually.
Another major hurdle for teams is the dizzying number of security tools in use, the study noted. More than 8-in-10 organizations said they’re using between 6 and 20 different security testing tools.
Respondents noted that this growing array of tools makes it harder to integrate and correlate results across platforms and pipelines and to distinguish between genuine issues and false positives.
Indeed, 6-in-10 reported that between 21% and 60% of their security test results are “noise” – false positives, duplicates or conflicts – which can lead to alert fatigue and inefficient resource allocation.
“While there’s a clear trend toward automation and integration of security into development processes, many organizations are still grappling with noise in security results and the persistence of manual processes that could be streamlined through automation,” wrote Black Duck’s Fred Bals in a blog post.
“Moving forward, the most successful organizations will likely be those that can effectively streamline their tool stacks, leverage AI responsibly, reduce noise in security testing, and foster closer collaboration between security, development and operations teams.”
Through the power of well-known brands, Future B2B delivers an unparalleled client and audience experience across newsletters, advertising, lead generation, content creation, webinars and live events.
Future B2B is a global platform for specialist media with scalable, diversified brands. We connect people to their passions through the high-quality content we create, the innovative technology we pioneer and the engaging experiences we deliver.
Our Services
Our established brands, like SmartBrief, ActualTech and ITPro, deliver expert-led niche newsletters, cutting-edge advertising solutions, pipeline-enhancing lead generation, and unforgettable live and virtual events.
Our turnkey services are crafted to expand your market reach, supercharge your lead nurturing efforts, and captivate your clients. Future B2B’s hyper-focused brands such as Mix, Twice, Radio World and others offer uniquely authoritative advertising opportunities to engage niche audiences with specialized content.
Find out how we can take your business to the next level. Learn more at: https://www.futureb2b.com/#get-in-touch
How hackers are using legitimate tools to distribute phishing links
A NEW ERA OF PHISHING EXISTS WHERE THREAT ACTORS HAVE BECOME INCREASINGLY ADEPT AT CONCEALING THEIR MALICIOUS LINKS.
By Solomon Klappholz
As both security tools and employees have become more astute at detecting traditional phishing attacks, threat actors have turned to manipulating trusted platforms to distribute phishing links hidden in seemingly legitimate URLs.
In one example of this approach, a report from Barracuda Networks published in September 2024 detailed a rise in phishing attacks leveraging trusted content creation and collaboration platforms.
These platforms are particularly popular in the education sector, which is a growing target for threat actors, and are also commonly used by businesses and creative professionals.
Threat analysts at Barracuda identified several phishing attacks using one online collaboration tool “widely used in educational settings” that allows students to create and share virtual boards where they can organize school content.
Hackers manipulated the platform’s post-wall function to send emails with embedded phishing links. Since the messages came from a trusted platform, recipients were less likely to scrutinize these emails as closely as they might when receiving a message from an unknown, external entity, the report noted.
The platform was also used to host voicemail phishing links, where users are taken to a separate link and then redirected to a spoofed Microsoft login page designed to harvest users’ login credentials.
Threat actors were also found leveraging a popular graphic design platform. The email sent from the service appeared to be identical to a legitimate file-sharing invitation from Microsoft 365. Researchers at Barracuda offered a third example of a file sharing and tracking platform mainly used by business professionals, finding several fake “file share” notifications within emails.
Saravanan Govindarajan, manager of threat analysis at Barracuda Networks, concluded that the rise in volume of these attacks reveals a growing trend away from traditional phishing tactics.
“The increase in phishing attacks leveraging trusted content creation and collaboration platforms highlights a shift in cybercriminal tactics towards the misuse of popular, reputable online communities to implement attacks, evade detection and exploit the confidence that targets will have in such platforms.”
HACKERS ARE LEVERAGING SECURITY TOOLS AGAINST ORGANIZATIONS
Research shows the obfuscation of phishing links has also involved leveraging security tools for nefarious purposes, with attackers found turning URL protection on itself to covertly distribute malicious links.
Saravanan Mohankumar, threat analyst at Barracuda Networks, detailed how from mid-May onwards he and his team had observed threat actors using three different URL protection services to mask their phishing links. URL protection tools copy and rewrite hyperlinks used in emails, embedding the original URL in the new, rewritten version.
“When the email recipient clicks on the “wrapped” link, it triggers an email security scan of the original URL. If the scan is clear, the user is redirected to the URL,” Mohankumar explained.
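A toy sketch of that wrapping mechanism is shown below. The scanner domain and parameter name are invented for illustration, not any vendor’s real format:

```python
# Toy illustration of URL "wrapping" -- the domain and parameter name here are
# invented, not any vendor's real format. A protection service rewrites each
# outbound link so clicks route through its scanner first.
from urllib.parse import quote, urlparse, parse_qs

def wrap(original_url: str) -> str:
    # The original destination is embedded in the rewritten link...
    return "https://urlscan.example.com/v1?u=" + quote(original_url, safe="")

def unwrap(wrapped_url: str) -> str:
    # ...so the scanner can recover it, check it, then redirect the user.
    return parse_qs(urlparse(wrapped_url).query)["u"][0]

link = "https://malicious.example.net/login"
wrapped = wrap(link)
print(wrapped)          # what the recipient sees in the email
print(unwrap(wrapped))  # what the scanner evaluates before redirecting

# The abuse: an attacker sends themselves a mail through a compromised account,
# harvests the trusted-looking wrapped link, and reuses it in phishing emails.
```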
The report suggests attackers may have gained access to these tools after compromising the accounts of legitimate users; once inside an account, they can identify which URL protection service the victim uses.
To activate the URL wrapping function, the attacker then uses an outbound email sent to themselves using the compromised account, and the victim’s security tool will automatically rewrite the URL using their own URL protection link.
The threat actor can then use that link to conceal malicious URLs in their ongoing social engineering campaign, the report concludes.
Mohankumar warned these services, which are provided by trusted brands, have been used to target hundreds of companies, and speculated that the real figure is likely far higher.
Speaking to ITPro, Neal Bradbury, chief product officer at Barracuda Networks,
outlined why novel attack vectors like these are difficult for security vendors to protect against, and why mitigation will rely on better security awareness training for staff.
“It’s really difficult to detect a lot of these [attacks] as they continually evolve. What’s happened [here] is that the links themselves are valid, and so what they [attackers] have figured out how to do is host something malicious on a OneDrive link or an Evernote link, for example,” he explained.
“What we really need to do is follow the link and if we can’t do that as a security vendor, what we’re basically doing is projecting it to the user and saying ‘look, we have gone as far as we possibly can down this path or link. Caution, you may want to find out if it’s actually from a user that you trust’, and a lot of that comes down to training.”
Smart devices, smarter hackers?
HOW BUSINESSES CAN MANAGE SECURITY FOR AI AND THE IOT.
By Isabel Kunkle
What if your house wanted to kill you?
It’s a pretty classic horror movie question, one that video games like System Shock and Portal and movies from “Demon Seed” to “2001: A Space Odyssey” have given a technological edge. When we give machines control of our environment, we make ourselves vulnerable, and what might an inhuman intelligence do with that?
As we turn to the internet of things to help with manufacturing and utilities, to let us monitor our homes and to help us age in place, some of our concerns haven’t seemed so far-fetched. The rise of AI, especially generative AI, has produced more worry – and not all of it is unjustified. When AI and the IoT intersect, major problems could and do arise.
HAL 9000 probably won’t be the culprit, though. Hal from down the block – or across the Atlantic – is the real threat.
AI AND THE IOT: AN EXPLOSIVE COMBINATION
The internet of things makes a tempting target for hackers by itself: The data that lets your house know what temperature you like or keeps power grids humming smoothly is also extremely valuable. On a larger scale, hostile actors could seize control of power plants, transportation systems or hospitals, potentially threatening people’s lives. Private citizens might demand ransoms, while state actors could take down infrastructure as Russia allegedly did in Ukraine. Now add AI.
Not only does the technology generally make hacking more efficient and give social engineering a boost, but it also opens entirely new pathways for IoT attacks. Malicious actors could “poison” a model’s training data to tilt behavior in a particular direction, enter inputs that break the AI behind a device and grab potentially sensitive information by accessing the training data in a model inversion attack
In industrial IoT, complex systems and a broader attack surface mean that a single breach could cause a cascade that affects multiple elements of a business or community. One such attack shut down operations at Taiwan Semiconductor Manufacturing Co., costing it an estimated $255 million in revenue.
TWO GREAT TASTES
Not all combinations of AI and the IoT lead to crashing plants and stolen passwords. AI can actually help improve cybersecurity in general, and particularly the security of IoT systems, with a constant presence and an “eye” for patterns that no human can match.
A program could process vast amounts of data, analyzing it for potential threats like unauthorized access attempts. In case a breach does succeed, AI could shift encryption in real time, responding to sensitivity and traffic to ensure malicious actors don’t get what they’re looking for even if they can grab the files themselves.
On a more basic level, automatic updates could keep homes, businesses and other organizations equipped with the latest defenses against everchanging vulnerabilities and tactics.
BUSINESS RESPONSIBILITIES FOR AI AND THE IOT
As is so often the case when dealing with technology, the real horror comes from people – both those who willfully misuse AI and the IoT for their gain or national interests and those who fail to implement necessary security. Businesses have to develop comprehensive, proactive strategies that address potential threats before they emerge and keep reevaluating them as the security landscape evolves and new software or hardware appears. Countries need to enact regulations like the EU’s AI Act, which focuses on data protection, accountability and other crucial elements of AI safety.
Finally, individuals must practice good cyberhygiene, knowing and following best practices to ensure they’re not the weak link.
CONCLUSION
We’ve given machines a lot of power over our lives, and that’s not likely to stop any time soon. The IoT really is convenient for individuals and organizations, AI really can make certain processes more efficient and less costly, and we don’t tend to give up on new technology once we’ve started using it.
The real issue is how we can make AI and the IoT safe and reliable, and that requires a lot of human ingenuity and cooperation.
Machine learning 101: What is it and why is it important
NO LONGER CONFINED TO THE WORLD OF SCIENCE FICTION, MACHINE LEARNING REPRESENTS A NEW FRONTIER IN TECHNOLOGY.
By Bobby Hellard
Machine learning is a buzzword we heard all throughout 2024 and will continue to hear in 2025. That said, some are still asking what machine learning is and why it is important. At its most basic, machine learning is the process of teaching a computer system to make predictions based on a set of data. By feeding a system a series of trial-and-error scenarios, machine learning researchers strive to create AI systems that can analyze data, answer questions and make decisions on their own.
Machine learning often uses algorithms based on test data, which assist with inference and pattern recognition in future decisions. This removes the need for explicit instructions from humans that traditional computer software requires.
WHAT IS MACHINE LEARNING?
Machine learning relies on a large amount of data, which is fed into algorithms in order to produce a model on which the system bases its future predictions. For example, if the data you’re using is a list of fruits you’ve eaten for lunch every day for a year, you would be able to use a prediction algorithm to build a model of which fruits you were likely to eat, and when, in the following year.
The process is based on trial and error scenarios, usually using more than one algorithm. These algorithms are classed as linear models, nonlinear models or even neural networks. They will be ultimately dependent on the set of data you’re working with and the question you’re trying to answer.
WHAT ARE THE TYPES OF MACHINE LEARNING ALGORITHMS?
Machine learning algorithms learn and improve over time using data, and do not require human instruction. The algorithms are split into three types: supervised, unsupervised and reinforcement learning. Each type of learning has a different purpose and enables data to be used in different ways.
SUPERVISED LEARNING
Supervised learning involves labeled training data, which is used by an algorithm to learn the mapping function that turns input variables into an output variable. Within this are two types of supervised learning: classification, which is used to predict the outcome of a given sample when the output is in the form of a category, and regression, which is used to predict the outcome of a given sample when the output variable is a real value, such as a ‘salary’ or a ‘weight’.
An example of a supervised learning model is the K-Nearest Neighbors, or KNN, algorithm, which is a method of pattern recognition. KNN essentially involves using a chart to make an educated guess about an object’s classification based on the spread of similar objects nearby.
Picture a chart in which a green circle represents an as-yet unclassified object, which can only belong to one of two possible categories: blue squares or red triangles. To identify what category it belongs to, the algorithm will analyze which objects are nearest to it on the chart – in this case, the algorithm will reasonably assume that the green circle should belong to the red triangle category.
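For readers who want to see this in practice, here is a minimal sketch of the same setup using scikit-learn’s KNeighborsClassifier; the coordinates are made up to mirror the chart just described:

```python
# Minimal sketch of the chart described above: classify a new point (the
# "green circle") by its k nearest labeled neighbors. Coordinates are made up.
from sklearn.neighbors import KNeighborsClassifier

# Training data: x/y positions labeled "blue square" or "red triangle".
points = [[1, 1], [1, 2], [2, 1],          # blue squares, lower-left
          [6, 6], [6, 7], [7, 6], [7, 7]]  # red triangles, upper-right
labels = ["blue square"] * 3 + ["red triangle"] * 4

knn = KNeighborsClassifier(n_neighbors=3).fit(points, labels)

green_circle = [[6, 5]]              # the unclassified point
print(knn.predict(green_circle)[0])  # -> 'red triangle'
```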
UNSUPERVISED LEARNING
Unsupervised learning models are used when there are only input variables and no corresponding output variables. It uses unlabeled training data to model the underlying structure of the data.
There are three types of unsupervised learning algorithms: association, which is extensively used in market-basket analysis; clustering, which is used to group samples so that objects within the same cluster are more similar to each other than to objects in other clusters; and dimensionality reduction, which is used to trim the number of variables within a data set while keeping its important information intact.
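As a brief illustration of clustering, the following sketch feeds unlabeled, synthetic shopping-basket data to k-means, which discovers the two groups on its own; all the numbers are invented:

```python
# Sketch of unsupervised clustering: no labels are given, yet k-means still
# discovers the two groups hiding in the data. Values are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
spenders = rng.normal([200, 5], [30, 1], size=(50, 2))   # basket value, items
browsers = rng.normal([20, 1], [5, 0.5], size=(50, 2))
baskets = np.vstack([spenders, browsers])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(baskets)
print(kmeans.labels_[:5], kmeans.labels_[-5:])  # two distinct cluster IDs
print(kmeans.cluster_centers_.round(1))         # centers near the true means
```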
REINFORCEMENT LEARNING
Reinforcement learning allows an agent to decide its next action based on its current state by learning behaviors that will maximize a reward. It’s often used in gaming environments where an algorithm is provided with the rules and tasked with solving the challenge in the most efficient way possible. The model will start out randomly at first, but over time, through trial and error, it will learn where and when it needs to move in the game to maximize points.
In this type of training, the reward is simply a state associated with a positive outcome. For example, an algorithm will be “rewarded” with a task completion if it is able to keep a car on a road without hitting obstacles.
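The following toy example shows that reward loop in miniature: a tabular Q-learning agent on a one-dimensional “road” learns to reach the goal and avoid the pothole. The environment and hyperparameters are invented purely for illustration:

```python
# Toy tabular Q-learning: an agent on a 1-D road learns to reach the goal
# (reward +1) while avoiding the pothole (reward -1). Purely illustrative.
import random

N_STATES = 6        # positions 0..5; state 0 is a pothole, state 5 the goal
ACTIONS = [-1, +1]  # step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    s = 2  # start mid-road
    while s not in (0, N_STATES - 1):
        # Epsilon-greedy: mostly exploit, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = max(0, min(N_STATES - 1, s + a))
        r = 1.0 if s2 == N_STATES - 1 else (-1.0 if s2 == 0 else 0.0)
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, the learned policy points right (toward the goal) everywhere.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(1, N_STATES - 1)])
```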
WHAT IS MACHINE LEARNING USED FOR?
Organizations can pull in information from multiple sources. Whether that is from changing customer behaviors or even the actions of their staff members, the volume of data available is almost limitless. However, the sizes of these potential datasets pose a problem when it comes to analyzing and, ultimately, using the information.
This is where machine learning comes into play; it can pull that data together and find the patterns or the information it needs to make predictions. An easy example of this is in medical analysis, where human eyes would need years and years to find all the necessary patterns across a thousand MRI scans. A device with ML software can be trained on that data and discover the key details in a matter of seconds – though only if the information has been correctly labeled.
EVERYDAY USES OF MACHINE LEARNING
One of the most famous examples of machine learning is a service most people use every day: Google Search. Web searching with Google harnesses the power of many different ML algorithms, some of which can also analyze and read the text that is typed into it.
More algorithms are then used to tailor the search results to suit the user’s previous searches – and hopefully make it more personalized for them.
For example, the term “Java” could simply drag up results for the programming language; however, if the user has a search history full of coffee products, those are more likely to be suggested to them.
For businesses, ML use cases range from employee monitoring software to endpoint security platforms. Security software is often built with ML that can analyze attack patterns and use the information to detect potential threats early on.
MACHINE LEARNING DATA BIAS
Bias in data has been a hot topic of debate for many years, and may become more problematic as we expand the use of ML technology into public-facing systems and services.
Bias is not always easy to spot, and can exist in the data itself. If an organization seeks to employ more diversely, for example, but only uses resumes belonging to its present workers as the test data, then the machine learning application may inadvertently favor candidates that look like existing staff. Some governments have been spooked by this form of machine learning, and it has caused a number to implement regulations that aim to limit its use. In the UK, the Cabinet Office’s Race Disparity Unit and the Centre for Data Ethics and Innovation teamed up to research potential bias in algorithmic decision-making. The US government also decided to pilot diversity regulations for AI research that minimize the risk of racial or sexual bias in computer systems.
WHAT MACHINE LEARNING CAN AND CANNOT DO
First, to understand what machine learning can and can’t do, we must discard most of our pop culture references and largely ignore what film and TV may have suggested. Some mindfulness is also needed because you might be both disappointed at what it can just about do right now and amazed by how it is being used.
WHAT CAN MACHINE LEARNING BE USED FOR?
• Voice recognition
• Text-to-speech transcription
• Provide recommendations depending on the search term