![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/6c1f76e818089e6292bd7c6368c485f0.jpeg)
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/6c1f76e818089e6292bd7c6368c485f0.jpeg)
SECURING AI: A COLLECTIVE RESPONSIBILITY
EXECUTIVE SUMMARY
As the adoption of AI continues to accelerate, there have been growing concerns about the security risks of AI. In this paper, we discuss what the security of AI entails, and how AI security relates to ongoing international conversations about the safety and trustworthiness of AI. We also explore how AI security differs from traditional IT security, and the additional risks and vulnerabilities that arise due to the inherently adaptive nature of AI.
We then explore how these risks can be addressed through a multi-stakeholder effort – at both the organisational and ecosystem level – and discuss the roles that different players in the AI ecosystem should play to secure the adoption of AI. To move forward on AI security, we further propose four key pillars, or lines of effort, that governments, industry, experts, and academia can work on: (a) building robust and practical AI governance frameworks; (b) developing technical tools and capabilities that can support secure adoption of AI; (c) investing in research and development to identify emerging threats to the security of AI, and ways we can address them; and (d) supporting talent, innovation, and growth (TIG) programmes that can support the growth and development of the secure AI ecosystem.
ABOUT THIS PAPER
Many of us are familiar with the potential benefits for our economy and society that Artificial Intelligence (AI) systems will bring. AI has massive potential to drive efficiency and innovation – from healthcare to e-commerce and the delivery of government services. For cybersecurity, we also expect it to lighten the workload of analysts and operators, and to help combat emerging threats.
However, we must be clear eyed that the adoption of AI, including both traditional and generative AI, will exacerbate existing security risks (e.g. widened attack surface), and introduce new ones (e.g. harmful outcomes from manipulated models). While there has been international focus on the development and deployment of AI, its security and vulnerabilities are less well-understood. Similar to existing digital systems, malicious attacks on AI can cause users to lose confidence in the technology, and this may limit the value that they can extract from it.
This is why Singapore’s National AI Strategy (NAIS) 2.01 includes our plans to foster a Trusted Environment that protects users and facilitates innovation. We want AI to be developed and deployed in a safe, trustworthy and responsible manner, so that people have the confidence that their interests are protected when interacting with AI. To do this, we must ensure that we are well-equipped to mitigate AI’s risks. This includes preventing AI systems from being used in malicious or harmful ways, and securing them against adversarial attacks.
Upholding the security of AI systems is a vital component in building up our Trusted Environment. This paper discusses the security of AI, and highlights that it can only be effectively addressed as a community effort.
WHAT IS “SAFE” VS. “SECURE” AI?
For AI to be trusted by users, it needs to be both safe and secure. These terms are often used to describe what AI should be, and refer to different objectives.
AI Safety refers to the development and deployment of AI that minimises harm or negative consequences. AI systems should “not under defined conditions, lead to a state in which human life, health, property, or the environment is endangered”2. AI safety involves broader considerations such as reliability, accountability and ethics, to promote beneficial outcomes to human well-being and society. Risks typically discussed include bias in AI systems which can lead to unfair treatment in areas like healthcare, job recruitment and financial lending, the use of AI to generate deepfakes, or to contribute to the spread of mis/disinformation.3
AI Security refers to ensuring the confidentiality, integrity, and availability (CIA) of AI systems. It can be seen as a subset of AI safety, or an overlapping concept. The CIA of AI systems can be threatened by adversarial attacks that seek to exfiltrate data, manipulate system behaviour, or cause damage or disruption to its operations. Relatedly, we are also concerned about the resilience of AI, as the widespread adoption of a few AI models and systems could lead to disruptions on a broad scale across various sectors. Secure AI is hence about the organisational and technical guardrails to prevent and counteract attacks, including “protection mechanisms that prevent unauthorised access and use.”4
This paper focuses on advancing discussions on AI security, which is still in the relatively early stages of maturity and awareness. The considerations laid out here would nonetheless contribute to AI safety and preserve and promote users’ trust in AI.
AI systems should “not under defined conditions, lead to a state in which human life, health, property, or the environment is endangered”.
2 https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf 3 https://assets.publishing.service.gov.uk/media/6655982fdc15efdddf1a842f/international_scientific_report_on_the_safety_of_ advanced_ai_interim_report.pdf 4 https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
5
WHAT DOES AI SECURITY COMPRISE,
AND HOW DOES IT DIFFER FROM CLASSICAL CYBERSECURITY?
As the field of AI security continues to develop, we expect our understanding of this space (including the attendant risks, threats and vulnerabilities) to evolve. Ongoing international research efforts continue to uncover new vulnerabilities and mitigation methods. As such, this section provides a discussion on key risks (non-exhaustive), and organisations should continue to keep abreast on the latest developments in AI security.
There is consensus that AI is fundamentally a type of software, supported by hardware, software, and data components. This means that AI is vulnerable to classical cybersecurity threats, including supply chain attacks, cyber-attacks, or outages (e.g. electrical) in the underlying infrastructure. Widespread adoption of AI, especially shadow AI5, has in turn expanded the attack surface for such threats. As such, cybersecurity principles such as secure by design6 and secure by default7 must continue to apply and form the baseline for AI security
Nonetheless, AI has unique characteristics that present new and unique challenges for security. For example:
Some AI systems, specifically Generative AI models, are adaptive, probabilistic, and generative by nature. They can take in user prompts and provide contextualised responses. This is unlike classical software, which generally have predefined data flows and processes, as well as set algorithms and rules. These features make generative AI attractive because they enhance usability. However, it also introduces complexity and vulnerabilities – for example, adversarial attacks may seek to poison training data or leverage prompt injection to induce model poisoning, which leads to undesired and potentially harmful outcomes. Given the innate challenges in the explainability of AI models, as well as their adaptive nature, it can be difficult to detect and identify such attacks or triage the cause
AI models are also often trained on large volumes of data to ensure that they work well. Depending on the use case, there may be a risk that malicious actors may be able to gain privileged access to confidential and/or sensitive data through an AI model, such as by “reverse engineering” the AI model to expose its training data. As such, if data security and privacy controls are not well managed upfront, we expect increased risk of data breaches or leakage when user prompts (adversarial or otherwise) cause the model to output sensitive or private information.
As such, the use of AI requires new considerations that stakeholders should familiarise themselves with (see Table 1 for key examples).
Table 1: New considerations to address for AI systems, as compared to classical digital systems 8 https://developer.ibm.com/articles/cc-machine-learning-deep-learning-architectures/
Design
High barrier to adoption and use. Traditionally, only developers with technical skills and knowledge (e.g. coding) will develop and deploy new software/ systems.
Lowered barrier to adoption and use. Increased backend automation has made it easier for non-technical experts to design and build their own AI applications. However, these users may not fully understand or appreciate the underlying complexity of AI/ML techniques and may not be aware of the potential security risks involved.
Development
Established supply chain risk management frameworks. There are relatively mature frameworks to aid users with understanding and managing supply chain risks of classical software.
Structured/controlled datasets. System data that affect the behaviour of a digital system are typically structured, relatively static and limited to internal/ controlled datasets.
Limited understanding of AI supply chain risks as the adoption of AI is still nascent. For AI systems, many thirdparty and/or open-source AI solutions may contain hidden vulnerabilities that may be difficult to detect or mitigate with existing security controls.
Large and varied training datasets that impact system behaviour. Training data can come from a wide variety of sources, including external sources (e.g. web-scraping of user-generated content, through APIs) in significant volumes, and it is challenging to verify/sanitise this comprehensively.
Structured and more explainable rule-based systems. Structured programming with more well-defined logic and limited parameter space, as well as modular architectures with distinct functionalities and more clearly defined interactions. More straightforward to test and verify that a system is working as expected, and that no illegal modifications has occurred.
Complex system
architecture Multi-layered architectures8 housing models with billions to trillions of parameters, creates a larger attack surface and more difficulty in detecting and pinpointing specific vulnerabilities within model and system components.
Need to raise awareness and competency on security risks. Staff working with/on AI systems should be provided with proper training and guidance so that they are aware of the security risks involved with AI systems.
As unknown/hidden vulnerabilities may only emerge after an AI system has been deployed, additional due diligence and emphasis must be placed on securing and monitoring thirdparty AI models and system components throughout the life cycle of an AI system.
The veracity and reliability of training datasets is crucial as illegal tampering may affect the behaviour and output of AI models. There needs to be processes and controls in place to protect against the malicious acts to tamper with training data, such as using verified data sources where possible, and setting stringent input data validation rules and conditions.
The complexity of AI systems means no single security validation or verification method would be able to conclusively verify that an AI system is behaving as expected. Stringent monitoring and anomaly detection at multiple layers of the solution is therefore required. Organisations should also consider ongoing evaluation, including threat scenario modelling, red-teaming and penetration testing, to uncover new vulnerabilities and potential areas of adversarial attacks.
Classical digital systems New considerations needed for AI systems
Deployment
Systems behave in a deterministic manner, and can be comprehensively validated and verified, given the reliance on explicit code and rulebased systems. Anomalies can be detected, and comprehensive testing of potential failure modes can be accounted for.
Systems present more of a black box There are attendant concerns about the reproducibility and explainability of outcomes. Given the wide spectrum of potential AI outcomes (especially emergent behaviour for generative AI), it can be difficult to identify failure modes9 or which outcomes are anomalous/ undesired.
What this means for the security of AI systems
The black box nature of AI models means that it is not possible for traditional security testing methods to be able to exhaustively and comprehensively validate and verify that an AI system is behaving as expected.
More robust methods of testing are essential to ensure the integrity of AI systems. In addition, organisations should consider incorporating monitoring processes, and further protect against unknown or unintended output with safety guardrails and human-in-the-loop processes.
Operations and Maintenance
Static, fixed systems
Classical digital systems which require structured data and operates on fixed instructions.
Dynamic systems AI can adapt, learn, and apply new information – retraining itself in the process. In many AI workflows, datasets and models are continuously updated throughout its lifecycle.
As AI models may be continuously retrained, greater emphasis is needed on continued monitoring and secure update processes, to prevent malicious alterations of model behaviour during its life cycle.
Organisations will also need to adopt a secure-by-design approach to updates and continuous learning.
Figure 1 highlights five key risks of concern in AI security, as well as examples of the security threats that can lead to them. This is not an exhaustive list, and more information on the potential threats to AI across its lifecycle can be found in resources such as the OWASP Top 10 for AI/ML and LLM, as well as the MITRE ATLAS database.
Our starting point is that these AI security threats can only be addressed when all stakeholders across the AI value chain do their part. For example, while important, guidelines and controls at the development stage will be less effective if system owners do not take the necessary steps to secure their data or supply chain. Without the effective market incentives and policy guidance, it would also be challenging to influence consumer behaviour. This paper therefore seeks to discuss how the wider AI and technology ecosystem can be mobilised to take action to secure AI.
Data loss/leakage
Loss/leakage of model parameters
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/40023ac63f6a20cbb02a6232f7ed32fc.jpeg)
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/40023ac63f6a20cbb02a6232f7ed32fc.jpeg)
Undesired model behaviour
Unauthorised access to model, backend, enterprise environment
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/40023ac63f6a20cbb02a6232f7ed32fc.jpeg)
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/40023ac63f6a20cbb02a6232f7ed32fc.jpeg)
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/40023ac63f6a20cbb02a6232f7ed32fc.jpeg)
Loss/slow down of service
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/75dcbdcd77a18ad489ef9cebfef58f20.jpeg)
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/08164accb8eb2aead04142928427beb9.jpeg)
Overriding system restriction; ‘jailbreaking’
Obfuscating input/output (e.g. concatenating untrusted input onto a trusted prompt)
Bypassing “business logic”
Overloading context to reduce protection
Figure 1: Overview of security risks to AI systems and examples of threat vectors
Example: Data poisoning –spam campaign on a search engine’s generative AI feature
In May 2023, Google launched a Search Generative Experience (SGE) feature, which provides AI-generated quick summaries for search queries, including recommendations for other related sites. However, the following year, researchers reported that the SGE feature appeared to be recommending malicious and spam sites within its conversational responses. According to cybersecurity news outlet BleepingComputer, several listed scam sites promoted by the SGE feature could have been part of a single Search Engine Optimisation (SEO) poisoning campaign. Some sites appeared to collect users’ personal information or push unwanted browser extensions that could perform malicious behaviour (e.g. search hijacking). While Google explained that they “continuously update their systems and ranking algorithms to protect against spam”, this was a “game of cat and mouse”, as spammers could constantly evolve their tactics to evade detection.
This incident highlights how data poisoning attacks against generative AI systems could enable phishing and other malicious cyber activities (e.g. spam). Users of generative AI tools should be vigilant – they should not blindly trust the output of AI tools, and should verify recommended sites before visiting them.
Example: Prompt injection attacks – a “zeroclick worm” targeting GenAIpowered email assistants
In March 2024, researchers demonstrated a proof-of-concept malware, Morris II – a “zero-click worm” targeting generative AI-powered email assistants. The worm uses adversarial self-replicating prompts (embedded within text or images of an email), which prompt the generative AI email assistant to engage in malicious activities (e.g. spamming/exfiltrating personal data), and spread the malware to other generative AI models within the ecosystem. The researchers predicted that such generative AI-proliferated worms are likely to become more prevalent in the next two to three years, as more tech products become integrated with generative AI capabilities.
This incident highlights how generative AI tools may be vulnerable to prompt injection attacks. Malicious actors may target these generative AI tools through malicious prompts in order to infiltrate an organisation’s systems, and even worm its way across multiple systems on a network.
Example: PromptWare – “zero-click malware attacks” on GenAI models
In August 2024, researchers demonstrated the PromptWare and Advanced PromptWare attacks, which could force generative AI models to perform malicious activities beyond just providing misinformation and returning offensive content. These “zero-click malware attacks” do not require a threat actor to have compromised the generative AI model prior to executing the attack itself. Instead, they work by providing jailbreaking commands as user inputs to trigger malicious activities.
In a PromptWare attack, where the attacker has prior knowledge of the model’s logic, a threat actor may provide malicious input that forces the execution of an application to enter an infinite loop, triggering infinite API calls to the generative AI engine, which over-consumes computational resources, resulting in a denial-of-service (DoS) attack.
A more sophisticated version of the attack, known as Advanced PromptWare, can be executed even when the generative AI model’s logic is unknown, through adversarial self-replicating prompts to reveal the underlying context and assets of a generative AI application. For example, an attacker could use user prompts to trigger the modification of SQL tables, to potentially change the pricing of items being sold to a user via a generative AI-powered shopping app.
Although a jailbroken AI model in itself might not pose a significant threat to users of conversational AI, it could cause substantial harm to generative AI-powered applications (in which the model outputs are used to determine the flow of the application) as a “jailbroken” model could change the execution flow of the application and trigger malicious activity, including DoS attacks, data breaches or financial fraud.
AI SECURITY STAKEHOLDERS, AND THEIR ROLES
Roles of Stakeholders
Addressing the security risks of AI is a multi-stakeholder effort, both within an organisation, as well as at the ecosystem level. These key actors play complementary roles to ensure AI systems are trustworthy and secure. Relatedly, this means a critical and necessary inter-dependence between these actors in advancing best practices and measures for the security of AI.
Across the AI supply chain, foundation model developers and AI vendors that develop and release AI products should build secure-by-design and secure-by-default products. They are in the best position to understand and address vulnerabilities upstream, before models and products are released to organisations and users.
System owners and development teams should ensure that they have a strong understanding of the benefits, risks and potential trade-offs involved in the use and deployment of these AI models and products. This can involve security risk assessments, ensuring that involved personnel are trained with the necessary expertise. This work should be supported by information security teams that can help to establish the necessary environments and infrastructure for the integration of AI, and ensure compliance with the relevant governance. This will allow the secure development, adaptation or integration of AI models/components into services, business environments and workflows. However, similar to classical software cybersecurity, enterprise buyers and builders remain accountable for the security of their AI systems, and users also need to continue to check and verify that the outcomes of AI deployment are safe and secure (e.g. preventing over-reliance).
At the ecosystem level, stakeholders in the AI supply chain are supported by third party AI assurance providers, including providers of AI testing products, tools, solutions and services of AI security community; as well as by security researchers that help to develop and contribute to growing the body of knowledge on AI security risks and best practices.
Finally, regulators, policymakers and standards bodies should help to put in place the underlying frameworks, structures and governance measures that help to raise the baseline for the security and resilience of AI, while facilitating continued growth and innovation in this space.
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/694f4bc5142b8a28d20701dbd0e6af6c.jpeg)
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/694f4bc5142b8a28d20701dbd0e6af6c.jpeg)
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/694f4bc5142b8a28d20701dbd0e6af6c.jpeg)
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/694f4bc5142b8a28d20701dbd0e6af6c.jpeg)
Table 2: Roles and Impact of Stakeholders in the AI Security Ecosystem
Stakeholders Role in enabling AI security Contribution to the ecosystem
Foundation Model Developers Secure training data and code, implement sufficient defences to improve model robustness and maintain rigorous monitoring and compliance practices.
AI Vendors Develop and sell AI systems that meet AI security best practices and standards. Conduct comprehensive risk assessments to ensure security capabilities in their offerings are robust.
Enterprise AI Buyers
Procure and deploy third-party AI systems that are trustworthy and secure.
Enterprise Inhouse Developers Build internal AI systems that are trustworthy and secure.
End Users Be equipped to interact with AI systems within and/or outside the enterprise environment (e.g. internal knowledge retrieval LLM, customer service chatbot) in a responsible manner.
Academic Researchers / Think Tanks
Conduct research on new attack and defence mechanisms for AI security.
Cybersecurity Solutions Providers Augment enterprise solution stacks with additional AI-powered security tools and services, improve integrations among security solutions and prepare enterprises for incident management and response.10
Collaborate with stakeholders in establishing shared standards and integrate best practice security measures across platforms.
Being in an ideal position to address risks upstream, collaborate with the ecosystem to increase security posture and address new vulnerabilities.
Set rigorous AI security requirements and procurement evaluations that are fair to vendors and enable clear accountability for AI security outcomes.
Comply and collaborate with the Information Security teams in designing and building in rigorous AI security requirements into their products and solutions.
Shape enterprise demand for user-safety requirements by seeking information on AI applications and its risks, and reporting security incidents and concerns to relevant authorities.
Advance the field of AI security, raise awareness of novel risks and mitigation techniques and partner stakeholders to translate research into policy and industry best practices.
Identify novel AI-powered security risks and threats. Harden enterprise AI systems acquired along the supply chain. Contribute to the cybersecurity community with new insights and capabilities. Critically, AI security providers can support other actors in the ecosystem with specific technical tools/services and recommend the best practices on governance.11
Third-Party AI Assurance Provider
Independently assess and test AI systems throughout their life-cycles for model vulnerabilities and threats. Implement safeguards to manage risks across various safety-critical scenarios.
Information Security Teams Identify cyber, governance, risk and compliance risk vectors within the Enterprise Buyer/Developer teams.
Provide objective, independent evaluation, build trust in the ecosystem, train enterprise leaders and policymakers, enhance safeguards for enterprises, and contribute to AI security standards and open-source community.
Implement mitigation strategies to safeguard internal AI systems, data and infrastructure by implementing and maintaining security measures. Foster trust in the usage of AI systems among users and leaders by ensuring a robust, comprehensive implementation of security risk mitigation strategies and tactics.
Standards Bodies Develop standards for AI security practices.
Regulators Create and enforce best practices and regulations for trustworthy and secure AI systems development and deployment.
Policymakers Collaborate with the AI security ecosystem stakeholders to develop policies, platforms, and funding mechanisms to protect the public and institutions from cybersecurity harms.
Seek alignment across a diverse group of stakeholders to develop comprehensive and relevant standards.
Ensure legal compliance, collaborate with industry, standards bodies and academia to comprehend AI risk vectors, develop appropriate regulations.
Facilitate the growth in knowledge, capabilities and tools to achieve trustworthy and secure AI.
Securing AI systems is a collective responsibility, and governments, academics, as well as industry players must all do our part.
Moving forward on AI security as a community
Policies and thinking around AI security are still in its early days. We are still building our understanding of adversarial attacks, model testing and hardening methods, as well as what appropriate and effective security controls can be made. It will take concerted efforts across the AI security ecosystem to ensure that we can create frameworks and solutions that are practical, effective and useful. Ultimately, this will allow us to work towards our goal towards AI that is secure, safe, and trustworthy.
We discuss four key pillars, or lines of effort below that governments, industry, experts, and academia can work on together: (a) building robust and practical AI governance frameworks; (b) developing technical tools and capabilities that can support secure adoption of AI; (c) investing in research and development to identify emerging threats to the security of AI, and ways we can address them; and (d) supporting TIG programmes that can support the growth and development of the secure AI ecosystem.
Underpinning these four lines of effort is the foundational enabler of cooperation – both across the AI ecosystem, as well as internationally. Securing AI systems is a collective responsibility, and governments, academics, as well as industry players must all do our part. Furthermore, as AI systems operate across borders, we will also need to collaborate internationally to address AI security risks, such as through the sharing of knowledge, resources and best practices, and in the development and harmonisation of standards.
![](https://assets.isu.pub/document-structure/241015020515-7c44ae45711de2922797ada8c1e68dc5/v1/e936e15d68bbc2a34955a8e263eab64d.jpeg)
Pillar 1
: Governance
AI governance must account for security, in addition to addressing other risks such as safety and ethics. Governance has taken many forms internationally. For example, we have seen some jurisdictions introduce AI-specific legislation (e.g. EU’s AI Act). We have also seen countries issue voluntary guidelines for users and companies to reference when developing or deploying AI (e.g. US, UK, Australia). It is unlikely that there will be global consensus on what the best approach is, especially when the technology is still rapidly evolving and advancing.
In Singapore, we take a proactive, pragmatic and balanced approach to address AI risks and to support the responsible development and deployment of safe and secure AI. This means that all AI developers, deployers and users – whether individuals, or organisations – need to be informed about the potential risks, and empowered to address them.
To support this, IMDA has implemented governance frameworks and tools for data and AI since 2012. Most recently, in May 2024, IMDA launched the Model Governance Framework for Generative AI, which serves as a roadmap for how we can address AI risks while facilitating innovation. This framework highlights security as one of nine dimensions that are needed for a trusted AI ecosystem, and calls for development and deployment to include steps that address the security of AI. IMDA has also released tools to test traditional AI and generative AI against internationally recognised AI governance principles, including safety, security and robustness – AI.Verify and Project Moonshot respectively – which allow more AI users, developers and service providers to identify and address potential risks.
We are also developing practical guidelines which practitioners can reference as they develop AI models and applications, and integrate AI with their existing systems. In Jul 2024, IMDA announced that it would introduce a set of safety guidelines for generative AI model developers and app deployers, under the AI Verify framework. CSA also released draft Guidelines and a Companion Guide on Securing AI Systems for public consultation in July 2024, to help system owners secure AI throughout its lifecycle. This builds on the “Guidelines for secure AI system development”, which CSA co-sealed with the UK, US, and other partners in Nov 2023. These resources can provide a strong foundation for organisations that want to ensure the safety and security of their models, before they are widely deployed.
CSA is also planning to revise and expand the scope of its current cybersecurity certification for organisations, Cyber Essentials and Cyber Trust mark to incorporate AI security. Cyber Essentials mark is targeted at smaller/ less digitalised organisations, and the standards will focus on the “security essentials” for AI users in an organisation. Cyber Trust is targeted at larger/ more digitalised organisations, and will provide the risk assessment framework for cybersecurity and AI security.
Singapore’s approach
What
Pillar 2 : Ecosystem Capabilities
What is it and why is it important?
To foster the secure adoption of AI, we need to empower users with tools and options. This includes both upstream and downstream capabilities to ensure robust implementation of security measures across the entire ecosystem.
Upstream, AI developers and vendors will need to build up their capabilities in developing AI products that are secure by design and secure by default. They are in the best position to address security risks within AI products upfront, by integrating security into every step of the systems development lifecycle. This includes incorporating security specifications into the design of AI products, and ensuring there is continuous security evaluation at each phase and adherence to best practices.12 As with traditional IT companies, AI vendors can use security to differentiate and set their products apart from their competitors.
At the same time, as AI becomes increasingly integrated into IT environments, security will need to be a shared responsibility between AI vendors (who need to build secure-by-design AI products), and Enterprise buyers (who need to ensure that AI products are securely and appropriately configured and integrated). Similar to cloud security, there may be a need for AI vendors to provide more clarity on the roles and responsibilities of different users in the security of AI systems (such as through shared responsibility frameworks)
Downstream, enterprise buyers will need to learn to become more informed consumers of AI. They need to be able to ask the right questions of AI vendors, to verify and validate that third party AI products meet their minimum security requirements. Governments can help to provide relevant resources, such as frameworks and guidelines, to support organisations with understanding and mitigating the security risks of AI adoption.
Larger organisations and those with higher-risk AI use cases, such as Critical Information Infrastructure (CII) owners, can also consider building up in-house AI security capabilities, to ensure that their cybersecurity defences continue to evolve to keep up with new developments in the AI landscape.
There is also a growing role for third-party AI security assurance providers to provide AI security testing and assurance services. As AI products become increasingly adopted in higher-risk use cases, such as in CII sectors, the demand for security assurance services for AI systems is likely to grow. At the same time, this is likely to remain an evolving space, as AI security is a nascent field with no established international standards, and as AI technologies continue to evolve. As new security standards are introduced, third-party AI security assurance providers will need to develop ways to test and evaluate the security of AI products according to these standards, even while being ready to continually update and refine their testing methodologies as the AI landscape evolves and new risks emerge.
In Singapore, the Government supports the buildup of capabilities in AI security, for example through the Cybersecurity Industry Call for Innovation, or CyberCall, which seeks to catalyse the development of innovative cybersecurity solutions by enabling cybersecurity companies to work with large, trusted end-users, including CII owners, on innovative solutions that address cybersecurity needs, including in AI security. We will also help the growth of the industry through Government led initiatives for export and growth to help companies scale.
The Government has also started to build up its own in-house AI security capabilities. For example, the Singapore Government Technology Agency (GovTech) has set up an adversarial AI red teaming team, to find and test vulnerabilities in AI products, and develop a better understanding of the security risks.
Pillar 3 : Research & Development
What is it and why is it important?
Research & Development (R&D) in AI security is paramount due to the nascent and rapidly evolving nature of AI. As AI advances, it is inevitable that new attack vectors and vulnerabilities continually emerge, necessitating ongoing innovation and research into investigating and defending against them. R&D will help to drive the development of robust security defences and mechanisms that can keep pace with the sophisticated threats targeting AI systems, and inform policy makers, developers, adopters on the next areas of focus. Overall, R&D helps foster deeper understanding of AI capabilities and limitations, enabling the creation of safer, more secure, and trustworthy systems that will support adopters in realising their goals.
In Singapore, AI Singapore, the Digital Trust Centre and the TrustworthyAI Centre conduct R&D in AI safety and security. Within the framework of NAIS 2.0, Singapore strategically prioritises AI research to optimise limited resources and align efforts with existing national initiatives. This targeted approach aims to maximise the impact of research outcomes, fostering a sustainable and secure AI environment.
Collaboration between industry and academia plays a crucial role in Singapore’s effort to enhance security in AI. By nurturing partnerships and fostering joint R&D initiatives, Singapore capitalises on the diverse expertise and innovation capabilities across both sectors. This collaboration ecosystem not only accelerates technological advancements but also cultivates a robust pipeline of talent equipped to tackle complex challenges in securing AI.
Furthermore, Singapore actively promotes global collaboration to strengthen its position in AI security on the international stage through initiatives such as international grant calls and PhD training programmes. With the support of the National AI Group (NAIG) and CSA, AI Singapore (AISG) and CyberSG R&D Programme Office (CRPO) launched the Grand Challenge for Secure LLMs in July 2024 to bring together the AI and cyber community to drive understanding of AI exploits and defences, Singapore will continue to build networks with global thought leaders and institutions to facilitate knowledge exchange, drive innovation, and contribute to global AI security and governance discussions.
Singapore’s approach
Singapore’s approach
Pillar 4 : Talent and Workforce Development
What is it and why is it important?
Managing AI or cybersecurity already requires in-depth domain knowledge and skills, Security of AI is the intersection of both AI and cybersecurity domains, and requires professionals and a trained workforce that can understand the intricate components and processes involved in developing and operating AI, in addition to knowing how to apply the requisite security safeguards to mitigate risks to confidentiality, integrity, and availability across the lifecycle.
Singapore’s approach
Given the prevalent adoption of AI across organisations, Singapore’s NAIS 2.0 sets out strategies to boost the AI practitioner pool in Singapore, through scaling up AI-specific training programmes (such as the AI Apprenticeship Programme) to increase the number of people exposed to AI product development, scaling up technology and AI talent pipelines through preemployment training, and attract global AI talents. This is complemented by CSA’s Cyber TIG Plan, which serves as a comprehensive approach to cybersecurity ecosystem development. Under the plan, the CyberSG TIG Collaboration Centre was launched in Jul 2024 to implement a range of programmes including the following under the talent development pillar: nurturing cybersecurity talent from young, converting individuals in adjacent professions to take up mid-level cybersecurity roles, and equipping noncybersecurity professionals with foundational cybersecurity knowledge.
Beyond these programmes, we will work with academia, industry and government agencies to cater more AI security training and development programmes, to build a robust talent pipeline that can support organisations in their secure AI journey.
CONCLUSION
As AI becomes increasingly adopted and integrated into critical business and operational processes, the risk of AI being exploited by malicious actors will only continue to grow. Left unchecked, this may threaten user trust in the technology. To ensure that the secure adoption and use of AI, all stakeholders in the AI ecosystem – including governments, industry, academia and experts – will need to play a part.
The challenges of AI security are not unique to Singapore or any one country. We are therefore keen to work with the international community, including both governmental and non-governmental experts, to collectively address AI security issues, such as in supporting norms setting for secure AI, and developing/ contributing to international standards.
As the security of AI remains a nascent and rapidly developing space, new threats and vulnerabilities will likely continue to emerge over time. We welcome feedback and ideas from the AI community and ecosystem on how we can collectively work towards addressing these emerging threats.
www.csa.gov.sg resaro.ai