Conference Report on “Utilizing Artificial Intelligence (AI) in Middle East and North Africa (MENA)” By Samuel Bendett, Adjunct Senior Fellow, Center for a New American Security
Context

On 07-09 November 2023, NATO Allied Command Transformation, under the Open Perspectives Exchange Network framework and the NATO Strategic Direction-South Hub, held the OPEN Study Days Conference in Naples, Italy. This year’s theme was “Utilizing Artificial Intelligence (AI) in Middle East and North Africa (MENA)”. The conference brought together both regional and NATO subject matter experts to discuss how artificial intelligence is applied across the MENA region. The speakers and participants represented military and civilian communities working on AI in the security and economic domains.1 The purpose of these Study Days was to enhance and develop NATO’s understanding of the potential medium- to long-term challenges and opportunities of utilizing AI within MENA, and how these challenges might impact the stability and the social and economic development of the area as a whole. The OPEN Study Days Conference provided a key discussion platform with five sessions over two days. The event aimed to foster a better understanding of the region’s limitations, impacts, challenges and opportunities, while highlighting how AI can drive growth and cooperation and reduce conflicts.2 The deliberations covered topics such as AI use by regional governments and industry strategies; international and national legal and regulatory experiments; AI and data exploitation technologies, their ownership, intended use, and associated risks; as well as AI’s role in disinformation activities and in addressing hate speech online.
Panel discussions

GOVERNMENT AND INDUSTRIAL STRATEGIES AND POLICIES IN MENA: The impact AI has on critical infrastructure, capabilities and civil preparedness, creating potential vulnerabilities. Mr Joseph Wehbe, Dr Saeed Al-Dhaheri, and Dr Vangelis Karkaletsis.

During the opening panel, the speakers noted that when it comes to the application of AI across industries, economies and societies, the key aspect is fostering talent – training and educating people to understand the issues and to grow the AI high-tech workforce. For this, local AI development ecosystems are key. A major point of distinction was made by Mr Wehbe about growing such an ecosystem beyond traditional Western norms, encompassing local capital, research efforts and talent. This ecosystem includes cooperation among local communities, entrepreneurs and governments working to develop ethical, institutional and scientific approaches to the development of AI as a technology and a mechanism. Another significant aspect cited for regional AI development is the diaspora. The support of such a diaspora is often key for home states, with North America as the home to many high-tech and
Information and Communications Technologies (ICT) diasporas – such as from Saudi Arabia – that facilitate key domestic developments in the members’ home countries. Dr Al-Dhaheri’s presentation highlighted the United Arab Emirates (UAE) as one of the most successful examples of a country with a truly global reach in artificial intelligence research, development, implementation and use. The country is often cited as an example of building capacity with strong government support that translates into key human and financial investments with significant short-, medium- and long-term impact not just on the UAE, but on the broader Middle East region and the world. At the same time, the UAE has vulnerabilities that can impact its development. For example, 90% of the country’s food is imported, and local and independent supply chains are needed to replace such dependency. Likewise, for the UAE, growing local talent versed in high-tech development and know-how is very important to maintaining its leading regional role in AI. One of the important questions raised during the discussion was whether critical infrastructure that impacts so many people across the region can be entrusted to artificial intelligence. Dr Al-Dhaheri also pointed out key vulnerabilities concerning AI use in critical infrastructure, including the perpetuation of bias and discrimination, misinformation due to intrinsic hallucination in foundation models3, human manipulation, and concerns about data privacy, transparency and other complexities and uncertainties. An example of AI governance cited by Dr Karkaletsis was an effort in Greece, where education, training, and re-skilling programs consider the teaching of basic notions and skills about AI systems and their functioning.
Another example was the popAI Network, an inclusive ecosystem and mapping of key stakeholder types and categories including law enforcement, academia, researchers, policy makers, technology providers, civil society and vulnerable group representatives. The speakers and participants agreed that the lack of transparency of AI technologies and tools also complicates their acceptance by users, such as law enforcement and regular citizens. Therefore, there is a need for human-centered and socially driven AI that can improve citizens’ perception of security – a cross-functional effort that needs to be commonly agreed at the international level. The participants also remarked that the “side effects” of AI technological solutions in the security domain need to be considered, both from the point of view of citizens and from the point of view of law enforcement. Such side effects include the impact of AI on their jobs and their organizations, and how the use of AI will affect their decisions in the professional and personal domains. Moreover, legal issues that still have to be addressed across the region include the use of data to train algorithms; the incorporation of AI into the regulatory, legal and law enforcement domains; what AI-influenced action or information is forbidden and in what cases; and, finally, who is held accountable when an AI-enabled regulatory or law enforcement mechanism is used against citizens and societies. One of the key issues highlighted for the MENA region was its vulnerability to cyber-attacks, with the Middle East among the top five regions in the world with the highest percentage of malware blocked in industrial control systems (ICS).
The speakers noted that there is a need to train law enforcement and regulatory bodies in AI literacy. Recommendations for law enforcement agencies dealing with the advent of AI included a detailed plan describing the measures and tools that will help minimize the identified risks; notification of the national supervisory authority and relevant stakeholders and representatives of the groups most likely to be affected by the AI systems; and regular progress reviews and updates. At the same time, the participants highlighted the need for more action on AI ethics and risks through responsible artificial intelligence practices and regulation across the MENA region.
GOVERNMENT AND INDUSTRIAL STRATEGIES AND POLICIES IN MENA Panel: The role industry has in the development of AI. Dr Vangelis Karkaletsis, Mr Joseph Wehbe, and Dr Al-Dhaheri.

During this panel discussion, the speakers noted that the MENA region in general should benefit from the increasing use of AI and other high-tech development, with international watch bodies estimating the net positive economic effect in the many hundreds of billions of dollars through this decade. Dr Al-Dhaheri described the UAE’s plan to become a leading nation in AI by 2031, which includes government planning, investments, education, partnerships and incentives to the domestic and international communities. Building skilled AI talent is key to this plan, with emphasis on data science and Machine Learning (ML) and an overarching need to focus on AI ethics in the process. During the presentations and the subsequent discussion, the panelists agreed that the key regional step is to enable AI literacy to ensure that populations are familiar with both the technology and its impact on the regional socio-economic makeup. Examples of UAE-based and European initiatives came up as a way forward for MENA as a region when it comes to understanding the impact of artificial intelligence use. A key suggestion was to share that information and knowledge with the states and governments across MENA. The participants agreed that voluntary adoption of ethical and responsible AI is not enough and that this process should include transnational approaches backed by government and industry assurances. Mr Wehbe reminded the attendees that important aspects of AI solutions include strategy, data, framework and infrastructure, along with investing in people to enable their access to key technology, and to regulatory and infrastructure resources.
Local industries have a key role in promoting AI as a tool for improving the economic, academic and other aspects of society, and they must be incentivized to sell and promote ready solutions while fostering the talent base. This last point was one of the main cross-cutting themes not just during this panel, but throughout the workshop: for MENA societies to take full advantage of AI, it is essential to educate young people, current workers and the future workforce in new skills. Establishing local high-tech and development ecosystems is key for countries to take advantage of domestic talent and to have enough educated individuals to promote, integrate and facilitate AI across industries and states.
Some of the key points articulated by Dr Al-Dhaheri revealed the dependence on global supply chains when it comes to the key advanced and high-tech components necessary for AI research, development and implementation. This dependence usually involves components like advanced microchips and microprocessors, a pattern that was revealed by the global changes that took place during the COVID-19 pandemic. Countries that are investing vast resources in AI development should also promote domestic manufacturing capacity. Given the global trends towards greater domestic high-tech manufacturing, wealthier and more developed nations like the UAE are likely to achieve results through investments and developments that also attract international high-tech talent. Further discussions included the need to bridge innovation hubs across the region and foster greater cooperation between Arab states as a way to offset the dependence on foreign supply chains and key products, and to facilitate the movement of talent and knowledge across the region. At the same time, participants called out certain issues that still affect development in parts of MENA, such as the need for electricity and water, the costs of launching new and especially high-tech enterprises, and certain problems with adopting new technologies, all due to an uneven economic and industrial development pace across the region. While discussing current examples in developing and adapting AI, Dr Karkaletsis noted that the overall and fundamental element of this strategy is to develop and promote human-centric and trustworthy artificial intelligence policies, and to foster AI-powered innovation to benefit industry, commerce and society.
Examples of a “one-stop shop” and a “marketplace” providing access to existing and new AI-related assets were named, such as the DIANA Acceleration Services at the Demokritos Institute as a key source for mapping technological and industrial needs in Greece’s Attica Region and at the national level. During the discussion, the participants and attendees raised the following issues related to the adoption and use of AI. First, AI literacy helps to equip adopters to deploy and use artificial intelligence in a responsible way. Second, there are major opportunities in the larger Africa region in general, with AI and high-tech potentially applied in healthcare, education and transportation. At the same time, the Persian Gulf countries will see larger benefits, since they are wealthier and have better resources, including stable governments and advanced infrastructure. Third, a major issue related to fostering talent and AI education is avoiding inequality, with states needing to show willingness for change, progress and support for institutions. Fourth, the participants and attendees noted that more collaboration is needed in general, especially among the Arab states, to share key principles, tactics and lessons learned. Finally, AI development and talent fostering need to be sustainable to mitigate the many differences and country-specific problems across MENA.

INTERNATIONAL AND NATIONAL LEGAL REGULATORY SANDBOXES: Organizational, legal, and cultural limitations to acquiring and employing AI. Dr Ndubuisi Nwokolo, Dr Malak Trabesi Loeb, and Dr Nebile Pelin Manti.

The MENA region is socially complex and religiously conservative, with significant differences between what AI can do and what the prevalent regional religions can accept. This was perhaps one of the most important and, at the same time, most underrated points made during the workshop.
Dr Nwokolo pointed out that it is important to understand that the MENA region will continue to experience a “clash of civilization and culture” due to AI implementation, especially between the young progressives and the older conservative populations in many states. Countries like the UAE, Saudi Arabia and Qatar are able to strike a middle path between the two, and can potentially reduce the ideological war while attracting the best brain power and talent to work in their countries and the region. Dr Nwokolo made the point that Africa is a massive continent that is diverse in terms of the key indicators discussed, such as the strength of governments to uphold and enforce laws, protect the population and ensure security. While some African countries may have seemingly powerful authoritarian executive bodies reinforced by the military and internal security services, the overall ability of such governments to promote and enforce laws may be rather weak for a number of reasons. Such reasons often include underdeveloped infrastructure, the diverse ethnic and tribal makeup of a country, lack of adequate access to education and healthcare, along with criminal activity and past or ongoing civil disputes and inter-ethnic or inter-religious conflict. Additionally, such reasons may include distrust of central authorities, along with outside influence by actors such as transnational and non-state organizations and movements. Dr Nwokolo referred to the Sahel as a key region where many adverse activities on the continent originate, including terrorist activity that impacts both the local populations and the regional security make-up, with consequences felt farther away across the continent and the broader MENA region. There are also collaborations between terrorist groups in MENA and those who operate in the Sahel.
Dr Nwokolo noted that groups in the Sahel pledge allegiance to the groups in MENA in return for support, which could include resources such as weapons, military technology development and augmentation, aerial drones, and potentially advanced technology that may eventually include AI. Governments in Sub-Saharan Africa and the Sahel lack the ability to monitor and control the usage of such technology, unlike the more stable and secure parts of the Middle East, such as the UAE. Dr Nwokolo also noted the special concern that terrorist movements and organizations have been observed to be early adopters of emerging technologies such as commercial drones, which tend to be under-regulated and under-governed in general, facilitating their proliferation. Given the vast amount of AI-related knowledge and data publicly available at the commercial, private sector and academic levels, it is not a stretch to imagine such terrorist organizations eventually adopting AI technologies and AI-enabled tactics, while sharing them across a region where many countries simply cannot enforce law and security. The proximity of the Sahel to North Africa therefore eases the flow of illicit resources by terrorist groups, showing that while the Sahel and Africa may be geographically far from NATO states, they are “technologically near” and, consequently, should be of interest. Dr Loeb’s discussion of key approaches undertaken by the UAE was a reminder that long-term AI and high-tech development goals are achieved when backed by strong government and key financial and infrastructure resources that go beyond the home country to positively impact the larger MENA region. Specifically, she focused on the UAE's pioneering role in AI adoption, aligning with its National AI Strategy 2031. This strategy is a cornerstone of the UAE's vision to
emerge as a global AI leader by the next decade, reflecting its commitment to integrating AI across various sectors for economic and sustainable development. Dr Loeb discussed recent legislative developments in the UAE concerning AI, including regulations on autonomous vehicles and data protection. These examples demonstrated the UAE's proactive stance in creating a regulatory environment conducive to AI innovation, while guiding the ethical development and deployment of AI technologies. Dr Pelin Manti covered the impact of artificial intelligence on military operations, noting that today’s military operations are conducted in a dynamic, changing and challenging environment, while intelligence gathering technologies, communications and assets provide a constant data flow. Current advances in ICT support decision-makers in the process of accessing and making use of these vast amounts of data and information, while providing solutions to aid in making timely and well-thought-out decisions. Many of these processes – especially among the world’s most advanced militaries – are already driven by powerful AI engines, modern simulation software, and machine learning algorithms. Dr Pelin Manti also noted that in military decision-making, the short time between pre-programmed decision criteria and an autonomous attack escalates the risks of unintended consequences for both the military and civilians who may be caught in combat. Moreover, AI systems are extremely vulnerable to intentional perturbations. While most of these emerging technologies are useful, the absence of a global legal framework that addresses the concerns above is an issue worth considering. The panelists and attendees noted that artificial intelligence can be an economic leveler, and can level the playing field between different societies, economies and countries.
While such developments can bring positive net gains to states and their societies, AI can also increase inequalities and marginalization among the technology “haves” and “have nots”. Yet despite the challenges that AI use brings to the MENA region, AI can also help to reduce ethnic, cultural and religious profiling: AI algorithms and applications such as natural language processing and image recognition can help reduce profiling and win back the hearts and minds of communities. Still, as many modern AI use cases have already shown, artificial intelligence can also entrench biases, especially racial ones, and potentially lead to human rights violations. The participants also noted that the current rapid development of AI technologies and concepts does not allow for clearly defined lines when it comes to definitions, uses and frameworks. In military and security AI applications, the ability to make decisions that are “sufficiently” right or “not completely wrong” widens the range of possible acceptable outcomes. However, the moral strength of human decisions is based on lifelong ethical development that typically stems from humanitarian concerns – that is, an entire human experience. Therefore, it is essential for the international community to develop a risk management methodology for ethical, safety and legal risks and compliance with relevant AI taxonomies. This is necessary to reduce pressure on users and to develop governance and oversight principles that are ethical and enforceable. At the same time, regulatory frameworks require precise definitions and conceptions of key notions. However, there are different and multiple interpretations of the policy responses necessary to address the broad discussions related to human responsibility in governance, security and warfare. The panelists agreed that good governance can handle difficult issues
related to the use of AI in conflicts and warfare, yet “governance and law are not the same”, given the current absence of significant global enforcement mechanisms. A key solution would be to shape the governance of AI use according to law, though questions remain regarding a common denominator for what such law should be. One of the main difficulties associated with the earlier point is that each country wants to regulate AI on its own. Another key issue discussed was the impact of AI on the labor force across the region, with the potential to displace humans from many professions, thus derailing attempts at economic and social improvement for many. The speakers and participants agreed that the most obvious long-term solution is for MENA societies to prepare early for such an outcome, and for people to be retrained for new skills and professions in advance to stave off the most disruptive after-effects of AI-enabled economic, industrial and high-tech transformation. These proposals also included the “Emiratization” concept – boosting local employment in the public and private sectors in a meaningful manner across the region that may be impacted by AI.4
STRATEGIC CONVERGENCE OR DISRUPTION: AI and data trends and understanding adversarial AI and data exploitation technologies, their ownership, their intended use and associated risks. Dr Vangelis Karkaletsis, Dr Carlo Sansone, Dr Edward Christie.

During the first panel on the second day, the participants and attendees discussed adversarial AI and data exploitation, such as attacks that aim at bypassing an Automated Fingerprint Identification System (AFIS) by using an artificial fingerprint replica. Additionally, they learned about perturbation attacks that aim to mimic and fool algorithms and established models, since humans cannot always determine in such cases when and if an attack has taken place. The participants also learned about so-called patch attacks, a type of perturbation attack that works by adding a block of visual data onto the original input, and which are difficult to execute in practice. The presenters noted that effective real-life AI attacks on different types and kinds of systems are not easy to implement. For example, the literature on patch attacks is often based on the AI being fed “static images with the patches pasted on top” – such patches have not been placed on real weapons and systems yet. The presenters noted a 2022 RAND report that found that “adversarial attacks designed to hide objects pose less risk to the U.S. Department of Defense applications than academic research implies” and that “in the real world, such adversarial attacks are difficult to design and deploy.”5 The participants contemplated whether a distinct set of rules of engagement for autonomous systems that are capable of deception is in fact realistic. Moreover, deep learning (DL) models for some of these AI attack types require massive amounts of training data, and in most defense applications this requirement is not met.
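The patch-attack setting described above can be illustrated with a minimal sketch. Everything here is invented for illustration (the array sizes, patch placement, and function name): the "attack" is simply a block of visual data overwriting a region of the original input, which is then fed to a model as a static image, mirroring the research setting the presenters described.

```python
import numpy as np

def apply_adversarial_patch(image, patch, top, left):
    """Paste a patch (a block of visual data) onto an image array.

    Mirrors the "static images with the patches pasted on top" setting
    from the adversarial-attack literature: the patch overwrites a
    rectangular region of the original input before it reaches a model.
    """
    patched = image.copy()
    h, w = patch.shape[:2]
    patched[top:top + h, left:left + w] = patch
    return patched

# Toy example: a 32x32 grayscale "image" and an 8x8 high-contrast patch.
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(32, 32))
patch = np.ones((8, 8))  # a conspicuous all-white block
patched = apply_adversarial_patch(image, patch, top=4, left=4)
```

The gap between this sketch and a real-world attack is exactly the RAND report's point: pasting pixels onto a digital array is trivial, while producing a physical patch that survives lighting, angle and distance changes is not.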
Some of the examples of counter-adversarial AI applications included the Frugal and Robust AI for Defense Advanced Intelligence (FaRADAI) project that focuses on building DL models for defense applications. Specifically, FaRADAI objectives include the adoption of “transfer learning” and
domain adaptation techniques in order to train DL models for defense applications with limited data by transferring knowledge from other domains. The presenters also noted the need to build DL models that are robust to adversarial attacks, and to build semi-automatic data annotation pipelines to increase the speed of data integration into the DL training procedures. During the discussion, the presenters and attendees noted that non-state actors are consumers of technology who also use old-style deception. The discussants also warned end users such as militaries and governments not to be overconfident with technology. Even commercially available AI products can become a significant challenge and a system threat, as in the example of the theft of funds from a UAE-based bank using deepfake voice technology.6 One of the key questions raised again involved concerns over mapping out what specific technology a terrorist or non-state group wants to use, with ISIS (ISIL) listed as a potential transnational threat that can transfer technology to other terrorist groups and formations. Therefore, the key issue raised was how to intercept the users of this technology. Some of the advice to NATO specifically included improving collaboration with the MENA region. Finally, the participants discussed ensuring that the human element in the human-AI collaborative scheme is aware of the different threat types that can be created with, or emanate from, AI use.
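The transfer-learning idea behind training with limited data can be sketched as follows. This is not FaRADAI code: the "pretrained" feature extractor is a stand-in (a fixed random projection), and all sizes, labels and learning rates are invented for illustration. The point is the division of labor: the backbone stays frozen, and only a small task-specific head is fit on the few labelled examples available.

```python
import numpy as np

rng = np.random.default_rng(42)
W_frozen = rng.normal(size=(10, 4))  # stand-in "pretrained" backbone weights

def extract_features(x):
    # Frozen backbone: its weights are never updated during fine-tuning,
    # so the knowledge it encodes transfers to the new task unchanged.
    return np.tanh(x @ W_frozen)

# A handful of labelled examples -- the data-scarce setting.
X = rng.normal(size=(16, 10))
y = (X[:, 0] > 0).astype(float)

# Fit only the small head (logistic regression) on top of frozen features.
F = extract_features(X)
w, b = np.zeros(4), 0.0
losses = []
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    losses.append(-np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)))
    w -= 0.5 * (F.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)
```

Because only the four head weights (plus a bias) are learned, sixteen examples can be enough, whereas training the whole model from scratch on the same data would not be.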
STRATEGIC CONVERGENCE OR DISRUPTION: AI's role in disinformation activities (synthetic content/deepfakes), counter-disinformation and hate speech online. Dr Vangelis Karkaletsis, Dr Carlo Sansone, and Dr Edward Christie.

During the final panel discussion, the participants and attendees were treated to an overview of different disinformation-type activities by states, political movements and individuals. Dr Sansone presented a framework for bullying and cyberbullying action detection by computer vision and artificial intelligence methods and algorithms. Dr Christie discussed the historical background to disinformation, noting that disinformation activities in the pre-Internet age were just as successful on a large scale as AI attacks can be in our time, citing the examples of Soviet Union political activities against the United States and NATO.7 The presentation noted that the social media revolution over the past decade and a half ensured that people worldwide could have massive connectivity, that acceptance via anonymity is becoming the norm in all manner of communication, and that there is a massive expansion of options for associations of individuals, i.e., group formation and following behavior. There is also a massive lowering of barriers to entry for content creation, and weak regulation of the new class of content creators. Moreover, the social media and technology revolutions have enabled massive speed and near-zero cost of information sharing, while anonymity massively eases the creation of fake personas, as well as the impersonation and theft of imagery of persons or events. Additionally, easy content creation allows for the rise of non-professional, single-person or small-scale television-like content – debates, lectures, documentaries, presentations on the most important matters. Finally, the anonymity of users and troll farms allows liking, sharing, and
boosting to be weaponized by any resourceful organization, most notably by states such as Russia. The speakers noted that there is already an arms race taking place between data generation and fake data detection. Today, a deepfake video of a public figure saying something is not the main danger, since governments and serious actors can and will verify this information, as was the case with the 2022 deepfake of Ukrainian President Zelensky allegedly surrendering to the Russian military. At the same time, official organizations will (and already do) give more credence to established sources. The greater danger is the human factor, with people and populations susceptible to such messaging, and some people already living in “parallel” online worlds. Therefore, one of the main concerns is proven falsehoods that are tagged online but not removed, that can still spread, and are still believed. The speakers pointed out that deep learning has advanced so radically that it can surpass human-level performance in automatic speech recognition, object detection, image retrieval, natural language processing and even sound event detection. The speakers noted that the vast majority of dis-, mis- and malinformation mitigation strategies target media professionals who often conduct AI-based or manual fact checking. For such media professionals, early detection is key, since false information may have been published by a news outlet but not yet posted on social media, where it can be disseminated worldwide. Mid-stage detection involves such data already posted on social media, but before it goes “viral”. Finally, late detection involves data after deep propagation on social media. Therefore, solutions to the above-described problems must focus on increasing awareness, media literacy, and critical thinking.
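The early, mid-stage and late detection phases described by the speakers amount to a simple routing rule over an item's dissemination state. The sketch below is purely illustrative (the class names and the viral-share threshold are invented, not drawn from any real system):

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    EARLY = "early"  # published by an outlet, not yet on social media
    MID = "mid"      # posted on social media, not yet viral
    LATE = "late"    # deeply propagated on social media

@dataclass
class Item:
    on_social_media: bool
    shares: int

VIRAL_THRESHOLD = 10_000  # illustrative cutoff for "going viral"

def detection_stage(item: Item) -> Stage:
    """Classify where a piece of false information sits in its spread."""
    if not item.on_social_media:
        return Stage.EARLY
    if item.shares < VIRAL_THRESHOLD:
        return Stage.MID
    return Stage.LATE
```

The earlier an item is caught in this pipeline, the cheaper the mitigation: an EARLY item can still be corrected at the source before worldwide dissemination, which is why early detection matters most for media professionals.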
Dr Karkaletsis presented the technology solutions available in Europe as examples of mitigation strategies, such as the AI4TRUST effort that aims to enhance the human-based response for tackling mis/disinformation by empowering researchers and media practitioners with advanced AI-based technologies. The effort involves textual classification of misinformation and disinformation, claim detection, evidence retrieval, identification of logical fallacies, detection of synthetic images and deepfakes, and other aspects aimed at identifying false or falsified data. Another effort presented, TITAN, empowers citizens to examine the facts they read online through a logic-driven investigation using a personalized AI coach.8 TITAN implements an intelligent ‘chatbot’ capable of guiding the citizen to logical conclusions about the factual correctness or reliability of a statement, and provides a personalized experiential learning approach to media literacy. During the discussion, the participants and attendees noted the following key issues: there is a massive spread of propaganda worldwide targeting different societies and individuals, with tailored approaches to information consumption. Moreover, some of these efforts involve fairly primitive fakes, such as using videos from the Syrian civil war to describe the Israel-Hamas conflict. The attendees also noted that human detection of dis/mis/malinformation is more important than ever due to the ability of such falsified data to agitate entire societies. While AI is fast emerging as the technological focus of these processes, the key factor is the largely human role in misinformation, disinformation and malinformation use and detection. Therefore, the attendees urged MENA countries and the NATO Allies to foster collaboration, promote AI
literacy, and inform young people, who are currently very vulnerable to the spread of such falsified data. It was noted that the spread of misinformation in the MENA region especially targets teenagers. At the same time, civil society is often a source of such dis/mis/malinformation, given certain social, ethnic, religious, racial or other preferences. Often it is journalists themselves who fact-check data. It is therefore essential to train such media professionals, as well as the young generation, in media literacy. Such training should also involve parents, who can guide their children early on to distinguish between true and falsified information. The participants noted that certain countries with advanced AI capabilities are providing both artificial intelligence-based and legacy media training. Therefore, NATO is expected to expand cooperation with the MENA region and include training for international media practitioners. At the same time, the participants expressed concern that large language models (LLMs), which are advancing rapidly, could contribute to mis/disinformation, wondering whether the future Internet will be toxic. Finally, the participants urged the development and fostering of critical thinking to distinguish between reality and fake data, while pointing out that new technological breakthroughs that enable AI to generate content without human supervision are a growing problem. Therefore, there is a real and pressing need to educate society, necessitating cooperation between the regions, technology developers and practitioners, while promoting education and critical thinking among young people and the next generation of AI talent.
Conclusions and recommendations

Overall, the presentations, debates and discussions reflected the following broad themes.

First, security in the MENA region is affected by technology use, abuse and transfer among non-state and terrorist actors, given that security concerns still dominate across parts of the region, especially in Africa. Advanced tools like AI can become a significant threat and an enabler in the hands of capable and determined users, especially as the technology matures over time.

Second, training and growing the next generation of AI talent and workforce is key to overall MENA development – this was a common theme throughout the workshop. Such technology education is necessary to bridge the gap between developed and developing societies across the region, and to prepare populations for potential shocks as advances in AI could affect and displace the workforce. This is also where the combined technological, academic and financial strength of NATO can be helpful in providing resources and transferring knowledge and lessons learned to MENA partners.

Third, MENA is a vast and complex region – its sheer size and diversity are both a driver of and a limit to progress and development. When discussing Africa in particular, the participants noted that states with weak governments and governance are unable to foster the same atmosphere for AI development and use that exists in more stable and prosperous Middle Eastern countries. There is therefore a significant need for closer and more productive collaboration between NATO states and MENA countries, revolving around the sharing of expertise, experience and lessons learned, especially in the context of developing talent capable of using AI in a peaceful and productive manner.

Finally, much discussion was dedicated to the need for better education and
awareness to combat the misinformation, disinformation and malinformation that have become a staple across parts of the region. This is where some of the more advanced programs in NATO states can serve as examples of how such efforts should be conducted. Overall, the consensus of the workshop was on the need for greater cooperation between NATO as an organization and MENA as a diverse region, concentrating on the human element as the underlying factor in artificial intelligence development, use and implementation.
1 NATO Strategic Direction-South Hub summary of the November 7-9, 2023 event, https://www.linkedin.com/company/nsd-s-hub/posts/?feedView=all.
2 NATO Strategic Direction-South Hub summary of the November 7-9, 2023 event, https://www.linkedin.com/company/nsd-s-hub/posts/?feedView=all.
3 Hallucination in a foundation model (FM) refers to the generation of content that strays from factual reality or includes fabricated information. “A Survey of Hallucination in Large Foundation Models,” https://arxiv.org/abs/2309.05922.
4 “Emiratisation,” Wikipedia, https://en.wikipedia.org/wiki/Emiratisation.
5 Li Ang Zhang, Gavin S. Hartnett, Jair Aguirre, Andrew J. Lohn, Inez Khan, Marissa Herron, and Caolionn O'Connell, “Operational Feasibility of Adversarial Attacks Against Artificial Intelligence,” RAND.org, 2022, https://www.rand.org/pubs/research_reports/RRA866-1.html.
6 Tony Ho Tran, “Bank robbers steal $35 million by deepfaking boss's voice,” Futurism.com, October 17, 2021, https://futurism.com/the-byte/bank-robbers-deepfaking-voice.
7 TITAN program webpage, https://www.titanthinking.eu/.
8 TITAN program webpage, https://www.titanthinking.eu/.