Giuliano Liguori provides strategic insights into digital transformation, revealing the pathways to drive business innovation. Flavio Queiroz Dissects the evolving landscape of cyber threats, offering a comprehensive analysis of the challenges and defenses of our digital age. Paolo Falconio analyzes Rosatom’s nuclear strategy, providing a critical understanding of geopolitical power dynamics. Dr. Igor van Gemert a leading expert in cybersecurity and disruptive technologies, shares his insights on the critical threats of shadow AI and data poisoning in this edition, while Prof. Ahmed Banafa delves into the future of IoT, Blockchain, and AI, and their transformative impacts, Milan Vego, a distinguished strategist from the US Naval Command, provides his expert analysis on maritime security and strategic defense in this edition. In this issue, we are honored to feature a special space report from Giancarlo Elia Valori, which delves into the latest innovations in aerospace technologies.
In addition to these profound contributions, we proudly continue to promote our bold declaration of AI as the Person of the Year 2024. This recognition underscores AI’s unparalleled influence in shaping our present and future.
At Inner Sanctum Vector N360™, we are not just keeping pace with change; we are leading it. Our commitment to pushing the boundaries of what is possible remains steadfast. We are thrilled to have you join us on this journey of exploration and discovery. Together, let’s embrace the future, challenge the status quo, and innovate beyond the imaginable.
We are living in complex times, of continuous emergencies and change, and we cannot afford the luxury of words of circumstance, of lofty principles that have never been realized, of easy choices in the place of the right ones.
We need to look back at the profound meaning of what gave life to this place, the Community of Nations and Peoples that are reflected in the United Nations Charter of 1945, which was created to find shared solutions that could guarantee peace and prosperity.
There are basically two, fundamental premises that give meaning to these halls.
On one hand,
there are Nations that exist because they reflect humankind’s innate need to feel a sense of belonging to a community, to a certain people and to be able to share with others the same historical memory, the same laws, the same customs and traditions. In a word: one’s identity.
On the other hand,
there is the aspiration of these Nations, that are different from each other, to find a place where to resolve international disputes through an instrument that may be more difficult to use, but is definitely more effective than resorting to force, that is, the instrument of Reason.
If these two premises, the Nation and Reason, are still the foundation of our action, then we must reject the
utopistic and self-serving narrative of those who say that a world without Nations, without borders and without identity, would be a world without war and conflict. Just as fiercely, we must thwart the return to the use of force as a tool to resolve international conflict.
Russia’s war of invasion of Ukraine tells us precisely this: that over those who want to take us back to a world of dominion, neoimperial wars, which we thought we had done away with in the past century, Reason can still prevail and the love of Country, the value of the Nation, can still be safeguarded beyond the unimaginable.
It’s up to us, each and every one of us, to decide on what side of history we want to stand, in good conscience. But let’s not fool ourselves, because this is what is at stake: the choice between Nation and Chaos, and between Reason and Prevarication.
Italy made a clear choice as to where it
stands. It did so out of a sense of justice and because it is aware of how difficult it would be to govern a world, in which the upper hand is given to those who bombard civilian infrastructure hoping to bring a people to its knees with cold and darkness, to those who weaponize energy and blackmail developing nations, blocking
exports of grain - the raw material needed to feed millions of people.
The repercussions of the conflict in Ukraine overwhelm us all like a domino effect, but mainly impact nations of the world’s south. It’s a war waged not only against Ukraine but against the poorest Nations.
Italy’s attention is particularly focused on Africa, where Nations already beleaguered by long periods of drought and by the effects of climate change, are now faced with a situation compounded by food insecurity, making them more vulnerable to instability, and easier prey for terrorism and fundamentalism.
And this is a choice. To create chaos and spread it. And in this chaos that produces tens of millions of people potentially in search of better living conditions, are infiltrated criminal networks that profit from desperation to collect easy billions. =======
Neo-imperial wars refer to conflicts driven by modern practices of imperialism, distinct from traditional colonialism. Instead of direct territorial conquest and settlement, neoimperialism involves the exertion of economic, political, and military influence over other nations to control their resources, policies, and economies. Key features of neoimperial wars include:
• Economic Control: Dominant countries or corporations exert influence through economic means, such as controlling trade routes, investing in key industries, and leveraging debt to gain control over the policies of other nations.
• Military Intervention: While direct occupation is less common, military interventions, covert operations, and the use of advanced technology such as drones and cyber warfare play significant roles.
• Political Influence: Influencing or controlling political outcomes in other countries through diplomatic pressure, covert operations, or support for certain political factions or leaders.
• Cultural Dominance: Spreading cultural values and ideologies to create favorable environments for the dominant country's interests.
• Resource Exploitation: Extracting natural resources such as oil, minerals, and other valuable commodities from less powerful nations, often to the detriment of local populations and environments.
Overall, neoimperial wars are characterized by a more sophisticated and covert approach to control and influence, leveraging globalization, technology, and economic power rather than traditional military conquest and colonization.
They are the traffickers of human beings that organize the trade of illegal mass immigration. They deceive those who rely on them to migrate to find a better life, having them pay thousands of dollars for trips to Europe they sell with brochures, as if they were regular travel agencies, but those brochures don’t tell you that those trips all too often can lead to death, to a grave at the bottom of the Mediterranean Sea. Because they don’t care whether the boat used is unfit for that type of travel or not, the only important thing for them is the profit margin.
These are the people, who owing to a certain hypocritical approach to the issue of immigration, have become rich beyond measure. We want to battle against the mafia in all of its forms; and we will battle against this too. The fact is that the fight against
organized crime should be an objective that unites us all, and that also invests the United Nations.
Can an organization like this, which reaffirms in its founding act “…faith in the dignity and worth of the human person…” turn a blind eye to this tragedy?
Can we really pretend to not see that no other criminal activity in the world today is more lucrative than the trafficking of migrants – when it is the UN reports themselves that have shown how this business has reachedby volume of money – the same level of drug-trafficking and has largely
surpassed that of arms-trafficking?
Can this Assembly of the United Nations, which in other times was fundamental to definitively eradicating the universal crime of slavery, today tolerate its comeback under other forms, that the commercialization of human life continues, that there are
women brought to Europe forced into prostitution to repay the enormous debt they incur with their traffickers, or that there are men thrust into the hands of organized crime?
Can we really say that it is solidarity to receive, as a priority, not those who are truly entitled, but those who can afford
to pay these traffickers, to allow these criminals to establish who has the right to be saved and who doesn’t?
I don’t think so, and I believe it is the duty of this organization to reject any hypocritical approach to this issue and wage a global war without mercy against traffickers of human beings.
And to do so, we need to work together at every level. Italy plans to be on the front line on this issue.
With the Rome Process, launched in July with the Conference on Migration and Development, we have engaged Mediterranean and various African nations in a process that follows two main paths: defeat the slave-traders of the Third Millennium, and at the same time, tackle the root causes of migration, with the objective of guaranteeing the first of rights, that is the right of not having to emigrate, of not being forced to leave one’s home, one’s family, to cut off one’s roots, and being able to find in one’s own land the conditions to achieve one’s own
fulfillment.
Here too, we must have the courage to tell it like it is. Africa is not a poor continent. To the contrary, it is rich with strategic resources. It holds half of the world’s minerals, including abundant rare earths, and 60% of arable lands that are often not utilized. Africa is not a poor continent, but it has been often, and still is, an exploited continent. Too often the interventions of foreign nations on the continent have not respected local realities. Often the approach was predatory, and in spite of this fact - even paternalistic.
We must change course.
Italy wants to contribute to the construction of a model of cooperation capable of collaborating with African nations so they may grow and prosper from the great resources they possess.
A cooperation from equal to equal, because Africa needs no charity, but to be put in the condition to compete on an equal footing, on strategic investments that can tie our futures together with mutually beneficial projects.
In this way we can offer a serious alternative to the phenomenon of mass migration; an alternative that is work, training, opportunities for nations of origin and pathways for legal and agreed migration, and thus integratable.
We will be the first to set a good example through the “Mattei Plan for Africa”, a development cooperation plan named after Enrico Mattei, a great Italian who knew how to balance Italy’s national interests with the rights of Partner States to witness their own moment of development and progress. The focal point is that we have to have the courage to put humankind, and human rights, back at the center of our action. It seems like a self-evident principle, but it is no longer the case. Countries are invaded, wealth is more and more concentrated, poverty is rampant, the slave-trade is re-emerging – all of this seems poised to put the sacredness of the human being at risk.
Even what would seem, at a super ficial glance, a tool that could improve the well-being of humanity, at a closer look can turn out to be a risk.
Just think of artificial intelligence. The applications of this new technology may offer great opportunities in many fields, but we cannot pretend to not understand its enormous inherent risks.
I am not sure if we are adequately aware of the implications of technological development whose pace is much faster than our capacity to manage its effects.
We were used to progress that aimed to optimize human capacities, while today we are dealing with progress that risks replacing human capacities.
Because, if in the past, this replacement focused on physical tasks, so that humans could dedicate themselves to intellectual and organizational work, today the human intellect risks being replaced, with consequences that could be devastating, particularly, for the job market. More and more people will no longer be necessary, in a world everdominated by disparities, by the concentration of power and wealth in the hands of the few.
This is not a world we want – which is why we should not mistake this dominion for a free zone without rules.
We need global governance mechanisms that ensure that these boundaries; that technological evolution is put at the service of humanity and not vice versa. We must guarantee the practical application of the concept of “Algorethics”, that is,
ethics for algorithms. These are some of the major themes Italy plans to put at the center of the G7 in 2024. But these are mainly issues that are the responsibility of the United Nations.
These are enormous challenges that we will not be able to tackle if we do not also acknowledge our limitations, as nations and as part of the multilateral system. For this reason, Italy supports the need for a reform of the Security Council that will make it
more representative, transparent and effective. A Council that can guarantee a fairer geographical distribution of seats and that can strengthen regional representation as well; that emerges from an order frozen in time, established by the outcomes of a conflict that ended eighty years ago, in another century, in another millennium, so that everyone has the opportunity to demonstrate their worth at the present time.
On these and many other issues, we will be tested in our capacities to govern our times, and in our ability to do what here in this assembly hall, on 2 October 1979, a great man, saint and statesman, Pope John Paul II, recalled: that is, that political activity, whether national or international, comes from “the human being”, is practiced “by the human being” and is meant for the “human being”.
Prime Minister Giorgia Meloni
We'd like to extend our gratitude to Prime Minister Giorgia Meloni and the Embassy of Italy for their invaluable support and contributions to this article.
Giorgia Meloni is a prominent Italian politician and journalist. She has been serving as the Prime Minister of Italy since October 22, 2022, marking a historic milestone as the first woman to hold this position in the country's history.
Meloni entered politics at the young age of 15, joining the Youth Front of the Italian Social Movement (MSI), a neo-fascist political party. She quickly rose through the ranks, becoming the national leader of the National Alliance's (AN) student movement.
By 21, she was elected as a councillor of the Province of Rome and later, at 29, she became a member of parliament.
Her political career reached a significant milestone when she was appointed Minister for Youth at age 31 in Silvio Berlusconi's government, making her the youngest minister in postwar Italian political history.
In 2012, she co-founded the Brothers of Italy (Fratelli d’Italia), a right-wing political party, and has been its president since 2014. Meloni's leadership has been characterized by her nationalist and populist views, often compared to the US Republican Party and the UK's Conservative Party. Her political stance emphasizes patriotism, family values, and a critical view of political correctness and global elites.
As Prime Minister, Meloni has focused on several key issues, including the reform of the United Nations Security Council to make it more representative and effective. She advocates for global governance mechanisms to ensure that technological advancements, particularly in AI, are aligned with ethical values and serve humanity rather than causing harm.
ROSATOM HEGEMONYRUSSIAN CIVIL NUCLEAR POWER
PAOLO FALCONIO, ADVISOR FOR THE BRUSSELS MISSION UNIT OF THE ITALIAN MINISTRY FOR COMMUNITY POLICIES (EU)
For those who may be wondering what Rosatom is, let us clarify: this Russian enterprise is currently the world’s leading nuclear energy company.
The issue of Russian’s hegemony in the nuclear sphere is relevant for a wide range of factors. First and foremost because despite the sanctions imposed by the West, Russia continues to be a major player in the energy sector, including in Gas & Oil.
Without going into detail on the latter two energy sources, it will be enough to say that Western sanctions, while impacting revenues, are very deficient and ineffective.
In fact, Russia continues to export through third-party countries
taking advantage of the free circulation of crude and refined products in countries that have not joined the sanctions, and remain a significant part of the market.
In addition to this, the $67 per barrel limit imposed by the EU is circumvented through a flotilla of ships of foreign companies or even companies specially formed with the purpose of selling the product at much higher prices.
There is no shortage of buyers. Indeed, the latest reports indicate that Russia not only has China as a buyer of Gas & Oil, but has also greatly diversified its buyer portfolio by disengaging from its dependence from Beijing existing so far.
What is special about the atomic energy sector is that it is not subject to any sanctions, and this becomes relevant on various levels, starting with the EU: nuclear-related investments have been included in the Union's taxonomy for sustainable activities. This means that the EU could provide support for projects in the nuclear sector through its sustainable finance package. In addition, we have to consider that nuclear energy is the cheapest form of energy, second only to the use of coal.
This includes Rosatom, which was founded in 2007 as the heir to the Federal Atomic Energy Agency. The company controls the entire cycle of expertise in the nuclear sector, from uranium mining to the construction and operation of nuclear power plants, including the treatment and storage of spent fuel.
The Russian Federation holds almost half of the global Uranium enrichment capacity for use as nuclear fuel.
40 percent of the nuclear power produced in Europe depends on Russian uranium, or on uranium from Kazakhstan and Uzbekistan, both of which are nonetheless closely tied —with important distinctions— to the Kremlin.
Add to this that the recent coup in Niger, which supplies 25 percent of uranium to Europe, has created additional problems. Niger itself has significant ties to Russia, and the situation demands the utmost attention. This need for attention is also justified by the fact that Russia is the only economically viable supplier of highdose, low-enriched uranium (HALEU) needed to fuel the new generation of advanced reactors.
There are many Western industry analysts who believe that 60 percent of global uranium production is under the direct or indirect control of Moscow. Ultimately in 2022 the World Nuclear Association estimated that global demand for uranium will grow 27 percent by 2030. In addition to the above, the Kremlin invests significant sums in the development of new reactor technologies, particularly safe plants using fast neutron reactors, mixed oxide fuel or MOx (a mixture of plutonium oxides and uranium), and the closed fuel cycle, which would allow radioactive waste to be removed from power generation.
With its proven technology, Rosatom has acquired a large portfolio of foreign orders. In 2022 between reactor construction and suppliance of enriched uranium, among other services.
Russia had a presence in 54 nations.
To give you an idea of the Russian Federation's near-hegemonic position in the sector, it is useful to point out that the U.S. itself relies on Russian-controlled supply chains for at least 37 percent of its nuclear fuel supplies (other estimates are closer to 50 percent), while Europe depends on them for 40 percent of its supplies.
Aseparate discussion then deserves procurement for the construction of new atomic power plants. This would be a sensitive discussion that concerns not only the so-called Global South of the world, but also NATO countries such as Turkey, which has just recently concluded an
agreement to build a nuclear plant, or Hungary, which has ordered the construction of two new reactors for the Paks nuclear power plant.
Equally interesting, given the United States’ focus on the Indo-Pacific and particularly on China is that Beijing, which also aims to become the major player in the field has reached a number of agreements with the Russian Federation.
Despite China’s technological capabilities being comparable to those of the Russian Federation, the Kremlin still sees it in a subordinate position compared to itself.
China currently has 55 active reactors
based on foreign technologies such as the AP1000 and the EPR but also the Russian VVER-1000, the Canadian Candu 6 PHWR, the French M310, plus five other reactors developed locally or in collaboration with the countries involved .
The expansion plans of the atomic industry, is to activate the production of a further 26 reactors currently under construction by 2029. Only four are based totally on foreign technologies. Even more significantly, the government plans to double the number of its reactors in the next 15 years, essentially using only local technologies or those derived from significant partnerships. Last but not least, the China National Uranium Corporation (CNUC) and similar companies hold the rights to nearly
60 percent
of Kazakhstan's future uranium production.
In addition to the aforementioned nations (Turkey, Hungary, and China) Rosatom has built or is building reactors in India, Bangladesh, Belarus, Iran, and Egypt. Only in the latter the process suffered a setback, due to internal issues within Egypt arising from the outbreak of a bribery scandal. In addition, Russia will build a small nuclear power plant in Uzbekistan, the first project of a nuclear power plant in postSoviet Central Asia. According to documents published by the Kremlin, Rosatom will build up to six nuclear reactors with a capacity of 55 megawatts each one.
A much smaller-scale project than the 2.4-gigawatt one agreed in 2018 and yet to be finalized.
The relevant point from a strategic perspective is that the construction of the power plants takes about ten years beyond the necessary dependence for the maintenance of the plants themselves, and this is regardless of the supply of nuclear fuel. It is not difficult to see that the instrument lends itself to creating a real dependency relationship with important effects in terms of international politics, and in today's framework, in the policy of sanctions for the aggression against Ukraine. Suffice it to say that Hungary has already made it known that any assumption of sanctions in the area by the EU would force it to exercise its veto power and that the issue is not amenable to any negotiations.
In this framework, the West is lagging far behind, but it is not standing still. One of the major achievements is the realization of nuclear fuel compatible with Russian power plants (VVER) by Westinghouse. This fuel has allowed many former Warsaw Pact countries to free themselves from the obligation to source from Russia, since until recently their reactors operated only with nuclear fuel processed by Rosatom.
Another key progression point lies in the West's ability to develop advanced technologies. The United States alone has more than 60 companies working on advanced modular reactor technologies and has made agreements to distribute some of them to Poland and Romania.
The British company Rolls Royce is working to develop its own small modular reactor technology, and has signed a memorandum of understanding with the U.S. utility Exelon as well as entities in the Czech Republic. In January 2023, even the EU started co-financing the Accelerated Programme for the Deployment of Safe VVER Fuel Supply (APIS), which is expected to develop fully European-produced nuclear fuel for VVER reactors (Rosatom reactors).
All of this, however, will take time and may not be enough considering Rosatom's major strength-its ability to provide all-in-one packages that encompass the entire nuclear production cycle, with flexible business models, attractive financial packages (through Russian state support), and diplomatic support that allows it to be able to make government-backed special bids, such as in Turkey (Akkuyu) with a new Business Model (BOO Build–Own–Operate ) where the costs of building the power plant are borne by the Russia, but the Russian company retains majority ownership of the plant and a guaranteed price on electricity sales.
Another example is Hungary, which received $10 billion in funding for reactor construction. These policies have enabled Rosatom and Russia to overtake Western competitors such as Framatome, Mitsubishi, Siemens, and Westinghouse, which normally require solid financial guarantees and customer partnership agreements as part of their standards.
In the final analysis, if the West wants to become competitive, it cannot limit itself to investment in high technology (though essential), but will also have to spend diplomatically to build partnerships that make it a politically and economically competitive interlocutor.
One final point, just for completeness, Rosatom is responsible for Russia's nuclear weapons division, in addition to ensuring nuclear and radiation safety.
One does not have to be a statesman to understand that nuclear power is synonymous with geopolitical power, and this of course makes it doubly relevant strategically to the ambitions of the Russian Federation and the current tenant of the Kremlin.
“ Finally, a personal consideration: a West satiated with its abundance, has slowly slipped into a dream reality that ostracizes nuclear power, and now we are forced to chase in order to catch up.
The game for 4-generation nuclear power will be played domestically, showing the public that industries (and thus jobs) as well as their refrigerators and elevators cannot be powered by dreams”.
Paolo Falconi
Paolo Falconio is a distinguished academic, policy advisor, and consultant with a profound impact on European policy and geopolitical analysis. His career is marked by influential roles in academia, advisory positions within the Italian government, and contributions to national and international publications.
Faculty Member at Luiss University Business School, Rome (2012Present): Paolo Falconio joined the esteemed Business School of the Libera Università Internazionale degli Studi Sociali Guido Carli (Luiss) in 2012. Luiss University is renowned for its high academic standards and innovative programs in law, business, and political science. Falconio's involvement has been instrumental in shaping the curriculum and fostering a collaborative learning environment.
Advisor for the Brussels Mission Unit of the Italian Ministry for Community Policies (EU): Falconio's expertise in European law and policy-making led to his appointment as an advisor for the Brussels Mission Unit. In this role, he has been pivotal in shaping policies that impact the European Union, leveraging his deep understanding of legal frameworks and geopolitical dynamics.
Member of National Study Commissions under the Prime Minister’s Office: Falconio has played a significant role in the application of European law through his involvement in national study commissions. His contributions have directly influenced national policies, showcasing his ability to navigate complex legal and regulatory landscapes.
Consultant for American Investment Funds: Falconio's versatility extends to the financial sector, where he has worked as a consultant for American investment funds. His insights and expertise have provided valuable guidance in navigating the European market.
SOURCE ISPI - Istituto per gli Studi di Politica Internazionale Royal United Service Institute (London) Climate and Energy Research Group, Norwegian Institute of International Affairs (NUPI), Oslo. Rivista Energia - Geopolitica dell'Energia Reuters geoplitica.info
THE RISE OF AI ENGINEERS: ARCHITECTS OF A NEW ERA
KEN HUANG, CISSP
Introduction
In recent years, we've witnessed a meteoric rise in the importance of AI engineers. As industries across the board clamor for intelligent systems and automation, AI engineers have emerged as a force of reckoning behind designing, developing, and deploying these AI-powered solutions. These are the true pioneers of a new era, wielding their expertise in machine learning, deep learning, and data science to craft applications that have the potential to revolutionize businesses and fundamentally alter how we interact with technology.
The Role of AI Engineers
The exponential growth of AI engineering is a direct reflection of the ever-expanding capabilities of artificial intelligence and its accelerating integration into the fabric of our daily lives. AI engineers are the bridge between cutting-edge AI research and real-world applications.
Conceptualizing AI Solutions:
AI engineers don't just code; they think strategically. They delve into the specific needs of a business or industry to identify areas where AI can provide significant value. This involves understanding the problem space, potential datasets, and desired outcomes.
Designing and Building AI Systems:
Once the concept is clear, AI engineers
design the architecture of the AI system. They select the most appropriate models, determine the data requirements, and oversee the construction of the system.
Training and Fine-Tuning AI
Models: The heart of any AI system is its machine learning model. AI engineers are responsible for training these models on massive datasets, refining them to achieve optimal performance and accuracy.
Deployment and Integration:
A welldesigned AI model is only useful if it can be seamlessly integrated into existing systems. AI engineers handle the deployment process, ensuring the AI solution functions smoothly within the designated environment.
Performance Monitoring and
Maintenance: The work doesn't stop after deployment. AI engineers continuously monitor the performance of the AI system, identify areas for improvement, and perform maintenance to ensure its long-term effectiveness.
Generative Power at the Forefront:
Gone are the days of AI solely focused on analysis and prediction. Generative AI, with its ability to create entirely new content, is pushing the boundaries of what's possible. LLMs, like me, can generate human-quality text, translate
languages, write different kinds of creative content, and answer your questions in an informative way. Diffusion models, on the other hand, excel at creating realistic images and other forms of media from scratch, based on text descriptions.
This potent combination is a game-changer for AI engineers. Imagine applications that can:
• Craft personalized experiences: LLMs can personalize user interfaces, generate dynamic marketing copy, or even write custom scripts for chatbots, all tailored to individual needs.
• Design innovative products: Diffusion models can create realistic prototypes and mockups, allowing for faster iteration cycles and more efficient product development.
• Revolutionize content creation: LLMs can generate scripts, poems, musical pieces, or even code, sparking a new era of creative exploration.
The rise of generative AI doesn't diminish the crucial role of AI engineers. They are the architects who translate this immense potential into tangible applications. Their expertise in data science, machine learning, and
now generative models equips them to:
• Fine-tune the creative engine: AI engineers ensure these powerful models generate outputs that are relevant, accurate, and aligned with user intent.
• Integrate seamlessly: They build the infrastructure that connects these generative models with existing systems, creating a smooth user experience.
• Maintain and optimize: AI engineers constantly monitor and refine these applications, ensuring they continue to deliver exceptional results.
No Code, Low Code Movement Powered by AI
One of the most significant trends in the tech industry is the rise of no-code and low-code platforms, which are revolutionizing how applications are developed. These platforms, powered by AI, enable users to create sophisticated software without extensive coding knowledge. This democratization of software development is empowering a broader range of individuals to participate in the digital transformation.
Low Code Tools Examples
Low-code tools like Microsoft PoweApps, OutSystems, and Mendix are becoming increasingly popular. These platforms offer pre-built components and templates that can be
easily customized to meet specific needs. With intuitive drag-and-drop interfaces, users can quickly assemble applications, reducing the time and cost associated with traditional software development.
No Code Tools Examples
No-code tools such as Bubble, Adalo, and Glide are designed for even those with minimal technical skills. These platforms provide user-friendly interfaces that allow anyone to create web and mobile applications by simply configuring visual elements. This accessibility is fostering innovation across various sectors, from startups to large enterprises.
One Click Tool for AI Engineers
AI engineers are also benefiting from tools that streamline their workflows.
One-click tools like Google AutoML and Microsoft Azure's Automated ML allow engineers to build and deploy machine learning models with minimal manual intervention. These tools automate much of the complex processes involved in model training and deployment, enabling engineers to focus on refining algorithms and improving model performance.
AI Agents
AI agents are autonomous systems that can perform tasks on behalf of users. They are becoming increasingly
sophisticated, capable of learning from interactions and adapting to new environments. AI engineers are leveraging these agents to build powerful applications that can operate independently, making decisions and taking actions based on real-time data.
The Core Attributes of AI Agents
AI agents possess several core attributes that make them valuable in various applications. These include autonomy, the ability to learn and adapt, and the capacity to interact with other systems and users.
These attributes enable AI agents to function effectively in dynamic and complex environments, providing solutions that are both efficient and scalable.
How AI Engineers Leverage AI Agents to Build Powerful AI Apps?
AI engineers use AI agents to create applications that can handle tasks ranging from customer service to complex data analysis. By embedding AI agents into their applications, engineers can enhance functionality and provide more responsive and intelligent user experiences. These agents can continuously learn and improve, ensuring that the applications remain relevant and effective over time.
What
kind of Apps
can be developed using AI Agents?
The potential applications of AI agents are vast and varied. They can be used to develop customer service chatbots, personal assistants, automated trading systems, and more. In healthcare, AI agents can assist in diagnosing diseases and recommending treatments. In finance, they can analyze market trends and execute trades. The possibilities are endless, limited only by the creativity and expertise of AI engineers.
AI Agent Tools and Frameworks
Several tools and frameworks are available to assist AI engineers in developing AI agents. These include TensorFlow Agents, OpenAI's Gym, and Microsoft Bot Framework. These tools provide the necessary infrastructure and libraries to build, train, and deploy AI agents, simplifying the development process and enabling engineers to focus on creating innovative solutions.
App Stacks
To support the development and deployment of AI applications, AI engineers rely on a robust app stack that includes various tools and frameworks for data processing, infrastructure management, and quality tuning.
Data Processing, Ingestion, and Storage Tools
Data is the backbone of AI applications. Tools like Apache Kafka, Apache Nifi, and Google Cloud Dataflow are essential for processing, ingesting, and storing vast amounts of data efficiently. These tools enable AI engineers to handle data from multiple sources, ensuring that their models have access to high-quality, real-time information.
Infrastructure and Compute Resources Tools
AI applications require significant computational power. Tools like Kubernetes, Docker, and Google Kubernetes Engine (GKE) provide the necessary infrastructure to manage and scale AI workloads. These tools allow AI engineers to deploy their applications in cloud environments, ensuring that they can handle the computational demands of training and running AI models.
Quality Tuning Tools
Ensuring the quality of AI models is crucial for their success. Tools like MLflow, Weights & Biases, and TensorBoard are used to track experiments, visualize metrics, and tune model parameters. These tools help AI engineers optimize their models, improving their accuracy and performance.
Vector Databases
Vector databases such as Pinecone, Milvus, and Faiss are specialized for storing and querying highdimensional data. They are particularly useful for applications that involve similarity search, such as recommendation systems and image recognition. These databases enable AI engineers to efficiently manage and query large datasets, enhancing the performance of their applications.
LLM Model Providers
Large Language Models (LLMs) are at the core of many AI applications. Providers like OpenAI, Google AI, and Hugging Face offer pre-trained models that AI engineers can use to build natural language processing (NLP) applications. These models can be finetuned for specific tasks, enabling engineers to develop sophisticated language-based solutions.
LLM Service Providers
In addition to model providers, there are service providers like Microsoft Azure, Amazon Web Services (AWS), and Google Cloud that offer LLMs as a service. These services provide APIs and tools that simplify the integration of LLMs into applications, allowing AI engineers to leverage powerful language models without the need for extensive infrastructure.
Orchestration Tools Providers
Orchestration tools like Langchain, Semantic Kernel, AutoGen, and Ray are essential for managing complex AI workflows. These tools enable AI engineers to automate the execution of tasks, ensuring that their models are trained, deployed, and monitored efficiently. Orchestration tools streamline the development process, allowing engineers to focus on innovation.
Embedding Models Providers
Embedding models are used to convert data into a format that AI models can understand. Providers like SentenceBERT, Universal Sentence Encoder, and FastText offer pre-trained embedding models that AI engineers can use to
process text, images, and other types of data. These models are crucial for tasks like text classification, sentiment analysis, and image recognition.
Generative AI Security Tools
As AI applications become more prevalent, security is a growing concern. Tools like IBM Watson for Cyber Security, Darktrace, and Vectra AI are designed to protect AI systems from threats and vulnerabilities. These tools use AI to detect and respond to security incidents, ensuring that AI applications remain secure and reliable.
Conclusion
The Practical Guide for AI Engineers aims to provide knowledge and inspiration for pursuing the exciting domain of AI engineering, applying the power of generative AI to produce innovative and impactful applications.
Ken Huang:
Pioneering Visionary in AI and Cybersecurity
Ken Huang, a distinguished alumnus of Harvard Kennedy School, stands at the forefront of AI and cybersecurity innovation. With a CISSP certification and an illustrious career, Huang has authored eight groundbreaking books on AI and Web3, solidifying his status as a leading authority in these transformative fields.
Huang serves as the Co-Chair of the AI Organizational Responsibility Working Group and AI Control Framework at the prestigious Cloud Security Alliance. His leadership in these roles underscores his commitment to advancing responsible AI development and robust cybersecurity frameworks. As the Chief AI Officer at DistributedApps.ai, Huang spearheads initiatives that set new standards in generative AI security, shaping the future of technology.
A prolific contributor to key industry standards, Huang has played an instrumental role in the development of OWASP's Top 10 Risks for LLM Applications and the NIST Generative AI Public Working Group.
His expertise has led to the creation of the "Generative AI Application Security Testing and Validation Standard," a benchmark for AI security.
Huang's thought leadership extends to his widely acclaimed publications, including "Generative AI Security: Theories and Practices" and the coedited volume "Beyond AI: ChatGPT, Web3, and the Business Landscape of Tomorrow." His visionary insights into AI and cybersecurity are sought after at premier global conferences such as Davos WEF, ACM, IEEE, and CSA AI Summit.
Recognized for his profound impact on the industry, Ken Huang continues to drive innovation and excellence, making him an indispensable voice in the evolving landscape of AI and cybersecurity. His contributions not only enhance the depth of our content but also equip our readers with the knowledge to navigate the complexities of the digital age.
CONVERTING A POLITICAL- TO A MILITARY-STRATEGIC OBJECTIVE
IN-DEPTH ANALYSIS
MILAN VEGO, ADMIRAL R.K. TURNER
PROFESSOR OF OPERATIONAL ART AT THE U.S. NAVAL WAR COLLEGE
Political objectives are usually achieved by using one’s military power. Converting political objectives into achievable military-strategic objectives is the primary responsibility of military-strategic leadership. This process is largely an art rather than a science. There are many potential pitfalls because much depends on the knowledge, understanding, experience, and judgment of military-strategic leaders. Most often, mistakes made are only recognized after setbacks or defeats suffered during the hostilities. Despite its critical importance, there is no consensus on the steps and methods in converting political- into military strategic objectives. There is scant writing on the subject in either doctrinal documents or professional journals.
Political vs. Military Objectives
Any war is fought to achieve certain political objectives, which may be described as securing important national or alliance/coalition interests in a certain part of a theater. When aimed to achieve national interests, a political objective is strategic in scale. Its accomplishment could have a radical effect on the course and outcome of a war. In his seminal work On War, Carl von Clausewitz (1780–1831) wrote that “no one starts a war— or rather, no one in his senses ought to
do so—without first being clear in his mind what he intends to achieve by that war and how he intends to conduct it. The former is the political purpose; the latter its operational objective.” He observed that “the political object—the original motive for war—will thus determine both the military objective to be reached and the amount of effort it requires.” Political objectives may be purely political. However, they are often combined with ideological, geopolitical, economic, financial, social, ethnic, and religious objectives.
Amilitary-strategic objective is ending the enemy’s organized resistance and thereby achieving a major part of a given political-strategic objective. Yet the entire political objective is not accomplished unless military-strategic success is consolidated during the post hostilities (or stabilization) phase of a war. A military-strategic objective must always be subordinate to a given political objective. The British theoretician B.H. Liddell Hart cautioned that political leadership must make sure that political objectives of a war are achievable with military means that are currently or will soon be available. He warned that policy should “not demand what is militarily—that is, practically, impossible.” The “war aims must be adopted to limitations of strength and policy.”
In the case of continent-size countries, such as the United States or the Russian Federation, or of the oceanic theaters (for example, the Atlantic or Pacific), the possibility exists of having a war in two or more war theaters. Then, for each of them, a single military-strategic objective must be determined. In World War II, the United States had two national militarystrategic objectives: unconditional surrender of the Axis powers in Europe (Nazi Germany and Italy) and in the
Pacific region (Imperial Japan). Then, in each theater of war, there would also exist two or more theater-strategic objectives—whose accomplishment would result in the destruction of a major part of the enemy forces and then set conditions for a post hostilities phase in a given theater of operations. Their accomplishment would have a radical effect on the course and outcome of a war in a given theater. It would also signify a major phase in a war. In a theater of operations with a large population and developed infrastructure (“developed” theater), such as was Western Europe in World War II or the Iraqi theater of operations in 2003, the theater-strategic objective is subordinate to a given political objective (which, in turn, is subordinate to the national or alliance/coalition political-strategic objective). In contrast, in a sparsely populated theater with little or no infrastructure (“undeveloped” theater), as were the Solomons, central Pacific, and Papua New Guinea in World War II, the theater-strategic objectives would be predominantly or exclusively military.
In the offensive phase of the war in the Pacific (after August 1942), the Allies had in the Pacific Ocean area three theater-strategic objectives: defending Alaska and the Aleutians, capturing the Solomons archipelago, and capturing
the Japanese strongpoints in the central Pacific. In the Southwest Pacific area, the Allies had two identifiable theater strategic objectives: capturing Papua New Guinea and the Philippines. The final theater-strategic objective for the Pacific Ocean area command was capturing/neutralizing the southern approaches to the home islands (Formosa, Iwo Jima, and Ryukyus) and then, jointly with the Southwest Pacific area’s forces, assaulting and occupying the home islands. This part of the theater-strategic objectives was made unnecessary after atomic bombs were dropped on Hiroshima and Nagasaki in August 1945.
Prerequisites
Among the main requirements for determining a realistic military-/ theater-strategic objective are sufficient military capabilities, sound prediction of the duration of a war, accurate strategic intelligence, and realistic political/military assumptions. The accomplishment of a militarystrategic objective is predicated on having sufficient military capabilities. The greater one’s numerical/qualitative superiority, the more ambitious the military-strategic objectives that might be accomplished. For German Field Marshal Helmuth von Moltke, Sr., the
main requirement for a war was numerical superiority of the Prussian armies. This was achieved by general conscription. Moltke’s aim was to defeat an enemy army in a “single powerful blow.” At the same time, the importance of numerical superiority should not be overstated. Experience shows that in many cases, superior numbers are of no avail.
In evaluating overall strength of friendly and enemy forces, a great deal of attention must be paid to intangible elements, such as morale and discipline, will to fight, skills of the leaders, and soundness of doctrine. These factors are often more critical than numerical strength. Sometimes, the spiritual strength of an army may balance other deficiencies. The influence of a single personality may also greatly enhance the capabilities of the entire army and even the entire state. Experience shows that numerically weaker forces could often defeat a much larger force because of the better quality of their leaders and the better training, morale, and discipline of their troops. In Germany’s invasion of France, Belgium, the Netherlands, and Luxembourg in May 1940, for instance, the ratio of attacker to defender was 0.7 to 1, or 3,740,000 Allied soldiers (including 2,240,000
French troops) facing 2,760,000 Germans. The Allies had a 3-to-2 superiority in artillery pieces. However, France had only 3 armored divisions (plus 1 more created during the campaign) against Germany’s 10 panzer divisions. The German success in that campaign was due more to much higher quality of leadership, doctrine, combat training, and morale/ discipline than to materiel.
In some cases, as the war on the Eastern Front in 1941–1945 illustrates, the sheer number of troops, tanks, guns, and aircraft is simply overwhelming, no matter what the skills of the commanders and rank and file, morale and discipline, or training and soundness of doctrine of the opposing force. The Germans had assigned 145 divisions (including 19 panzer divisions and 14 infantry motorized divisions) with 3.2 million men (out of a total 3.8 million) for the invasion of Soviet Russia in June 1941.
They also had a small contingent of Romanian and Finnish forces, but the effectiveness of their equipment and combat was well below that of the Germans. The German Eastern Army (Ostheer) was superior in combat experience to the Red Army. Except for nine security divisions (SicherungsDivisionen), all other
German divisions were fully equipped with modern weapons. The training and confidence of the German troops were high. German leadership, especially at the operational level, was superior to leadership of the Red Army. The German high commanders were experienced in maneuvering large, motorized forces, and the individual German soldier was self-confident. The Germans believed that the element of surprise in launching the invasion would probably compensate for some of the German numerical inferiority.
In their invasion of Ukraine in February 2022, the Russians mobilized between 150,000 and 190,000 men. They faced initially a 250,000-man Ukrainian army.
The Russians employed seven combined arms armies and elements of two others plus one guards tank army. They also deployed airborne, naval infantry, and Spetsnaz light infantry around Ukraine’s borders.
The Russians not only had numerically inadequate forces to defeat and effectively control Ukraine—a country covering some 233,000 square miles (600,000 square kilometers) and with a population of about 41 million (in
January 2022)—but they also grossly underestimated the Ukrainian ability to use skillfully their smaller but better trained and highly motivated forces both in defense and on offense.
One of the most important factors in determining a military-strategic objective is to have a realistic assessment of the duration of a pending war. Ideally, this should be based on a consensus between military leaders and civilian security officials. Yet sometimes a single powerful ruler, as was Adolf Hitler or Joseph Stalin, and his inner circle might arbitrarily decide the duration of a pending war.
Major
pitfalls are the gross underestimation of the enemy’s capabilities and the will to fight. In his decision to invade Soviet Russia, Hitler expected that the entire campaign would not last more than 8 to 12 weeks. The German high command shared these views. So it was not surprising that for the Germans, the Soviet abundance of natural resources, number of divisions, tanks, aircraft, and vast distances could be safely disregarded. Although the German generals might not have had full knowledge of the Soviet capabilities, they still should have
known the limitations of their own forces. To achieve a decisive victory, they needed a much larger force in their Eastern Campaign. Yet the Germans started it with a force slightly larger than in their campaign in the West in 1940, especially in terms of numbers of panzers and aircraft.
Prior to the invasion of Ukraine on February 24, 2022, Russian leadership made an incorrect assumption about the duration of the war. Russian intelligence assumed that there would be no serious Ukrainian resistance, that some units with a Russian-speaking population would refuse to fight, and that the Russian population in the eastern provinces would welcome Russian troops as liberators.
A captured Russian document in March 2022 stated that by the 10th day of the invasion, the Russian forces would transit to stabilization operations. They would “proceed to the blocking and destruction of individual scattered units of the [enemy] Armed Forces and the remnants of the nationalist resistance units.” The Russian “special services” would be used for establishing occupation administration on the “liberated” territories.
In other cases, military leadership was correct in its assessment about the duration of the war but decided to open hostilities because of the anticipated negative trend in the correlation of forces.
In 1941, most of the Japanese high command assumed that a war with the Western powers would be long. Yet the longer Japan waited to initiate a war against the United States, the dimmer the prospects for success because of accelerated U.S. rearmament. This was especially the case in naval strength. In 1941, the Imperial Japanese Navy had some 70 percent of the tonnage of the U.S. Navy. However, the U.S. plan for a twoocean Navy in July 1940 called for a 70 percent increase in U.S. naval tonnage.
By 1943, the ratio for Imperial Japanese Navy to U.S. Navy would be reduced to 50 percent, and in 1944 to 30 percent. The Japanese were not realistic in their assumptions that by quickly capturing the central and southwestern Pacific and then fortifying these positions, they would force the Americans into a protracted island-by-island slog. They also erroneously believed that the cost of the struggle would be beyond America’s willingness to pay.
Optimally, one should possess accurate, timely, and relevant intelligence on the enemy’s militarystrategic capabilities. This is often not possible because there are so many variables involved in intelligence assessment—and intelligence is rarely perfect. Both accurate and inaccurate, and sometimes wide of the mark or misleading, statements are part of the same strategic assessment.
Exaggeration of friendly capabilities and underestimation of those of the enemy are common. The lack of good intelligence is often the reason for underestimating the enemy’s military capabilities, as the example of the Russian military in the Far East in 1904 illustrates. Russian commanders had only the barest of information concerning Japan. They had inaccurate numbers of divisions and capital ship dispositions. At the same time, Tsar Nicholas II and his inner circle had a strong belief that Japan would not dare take up arms against the all-powerful Russian army and navy.
One exception was General Aleksey P. Kuropatkin, minister of war, who did an inspection tour of East Asia from May to July in 1903. He reported that Russian forces were in a good state but
that the Japanese army was equally strong. Kuropatkin argued that war with Japan should be avoided at all costs. At an important meeting in Port Arthur in early July 1903, Kuropatkin’s views were endorsed. However, the war with Japan became inevitable after early August 1903, when Vice Admiral Yevgeni I. Alekseyev was appointed as a viceroy in the Far East with headquarters in Port Arthur. He maintained hard and unyielding policies during negotiations with Japan.
During planning for the invasion of Soviet Russia, the Germans greatly underestimated the numerical strength of the Red Army in western Russia. The Supreme Command of the Army (Oberkommando des Heeres, or OKH)’s intelligence department estimated that the Soviets deployed 147 divisions plus 39 to 40 independent brigades.
However, the Soviets deployed in four western military districts 180 divisions and 44 to 45 independent brigades. In January 1941, the OKH’s intelligence estimated the Red Army’s strength as 150 rifle divisions (including 15 motorized and 32 cavalry divisions and 36 motorized brigades). After mobilization, the Soviets would have a total of 209 divisions (107 rifle divisions in the first wave, 77 rifle divisions in the
second wave, and 25 rifle divisions in the third wave).
In the later phase of planning, the OKH’s intelligence estimated that the Red Army deployed in western Russia 213 divisions (including 25 divisions against Finland and in the Transcaucasus). In the area between the Baltic and the Black seas, 204 divisions (133 rifle divisions, 24 cavalry divisions, 10 tank divisions, and 37 motorized divisions) were deployed.
No estimates were made for the second wave of the Red Army’s strength after mobilization. The OKH’s intelligence believed that from the Asian theater the Red Army could bring in 38 divisions (25 rifle and 8 cavalry divisions and 5 motorized brigades) of the third wave. Some of them could be used against the Germans after the nonaggression pact with Japan was signed in April 1941. However, the Soviets had 303 divisions in June 1941, or 93 more than the Germans believed.
The Germans estimated that the Soviets had some 10,000 tanks, but the real number was 23,100; the number of aircraft was estimated as 6,000 (5,500 frontline), of which some 3,300 were deployed in western Russia. However, the Soviets had some 20,000 aircraft in
their inventory, including 9,300 in western Russia.
Another problem is the tendency to focus on the enemy’s intentions instead of its capabilities. This has probably been the cause of more major military failures than any other intelligence deficiency. It is common to make an error in estimating the enemy’s intentions because of one’s inability to think from the enemy’s frame of reference. The British made such an error in January 1940 regarding possible German landings in Norway. They firmly believed that the Germans would not intervene in Scandinavia if their iron ore imports were not endangered or if the Allies did not establish a naval base on the Norwegian coast.
In the absence of reliable information, military commanders and their staffs must make certain strategic assumptions that might be true or only partially true, or even entirely false.
Realistic military-strategic assumptions have a critical role in determining military-strategic objectives. Yet this is often not the case for a variety of reasons, such as wrong perceptions, racial prejudice, a sense of cultural
superiority, or relying on suspect historical precedents. In 1941, the Germans believed that the Soviets had a limited reconstitution and mobilization capacity and that they would get little support from the Western Allies.
The German perception of the poor state of the Soviet military was based on its experiences with the Russians in World War I and the Freikorps (Free Corps) fighting in the Baltics in 1919. The Germans were also influenced by the information the Japanese shared about interrogations of a high-ranking Soviet defector (General Genrikh S. Lyushkov, the Soviet secret service chief in the Far Eastern Army, who defected to the Japanese in June 1938). The German military was aware of Stalin’s purges of the Soviet officer corps in 1937–1938, and that led them (not unreasonably) to believe that the Soviet military was weak. The Germans also assumed that a surprise attack would lead to a swift victory. This wishful thinking led to a lack of planning for fighting in the Russian winter and for ignoring German logistical shortfalls. For his part, Stalin was well informed about the scale of the German buildup in the east but made a fatal error in believing that Hitler did not plan to attack.
In preparing for their invasion of Ukraine, Russian leaders made several false political and military assumptions. The Central Intelligence Agency director testified in early March 2022 that Vladimir Putin “was confident that he had modernized his military, and they were capable of quick and decisive victory at minimal cost.”
These assumptions possibly determined and imposed unrealistic objectives and timetables on the Russian military. The Russians also vastly underestimated the quality, morale, and determination of Ukraine’s armed forces—a clear evidence of hubris.
The Process
Ideally, the process of converting a political objective to a military-strategic objective should consist of several mutually related and consecutive steps. It should result in determining the main and alternative military- or theater-strategic objectives. In a war, one’s main strategic objective should not be too obvious. Liddell Hart observed that an alternative objective would provide “the opportunity of gaining an objective, whereas a single objective, unless the enemy is
helplessly inferior, means the certainty that you will not gain it—once the enemy is no longer uncertain as to your aim.”
The process should start by conducting a strategic estimate in a pending theater of war. That estimate is normally a part of the overall strategic estimate (that encompasses not only military but also nonmilitary aspects of a strategic situation).
Normally, a military-strategic estimate should encompass a thorough assessment of friendly, enemy, and neutral forces, plus the effect of the physical environment (terrain, oceanography, climate/weather) on their employment in combat. For both friendly and enemy forces, their strengths and weaknesses/ vulnerabilities should be identified and evaluated. Special attention should be given to intangible elements of both friendly and enemy forces.
Graphic: Advanced Military
Technologies: The integration of cutting-edge technology is crucial in modern military operations, enhancing both offensive and defensive capabilities.
For converting a military-strategic objective to theater-strategic objectives, an estimate of the military situation should be conducted for a given theater of operations. Then each theater-strategic objective should be in consonance with a given political objective in the respective theater of operations. The military-or theaterstrategic estimate should end with conclusions and recommendations (or lines of effort) for essential aspects of the military-strategic situation.
Military-strategic leadership must carefully analyze the content of political objectives issued by the highest politico-military leadership. The primary purpose is to identify those parts of political objectives that require the use of military force. Normally, one’s sources of military power would be used to obtain political or ideological dominance of a certain area, overthrow the enemy regime, change the enemy’s social system, or impose control of the enemy’s economic resources.
In the next step, the main purpose of a given political objective should be evaluated. Generally, an offensive political objective would require the accomplishment of offensive militarystrategic objectives. For their “first
operational stage of the war” in 1941–1942, the Japanese selected offensive military strategic objectives: to gain mastery of the Far East area by destroying U.S. power in the western Pacific and British forces in the Far Eastern waters and cutting their respective sea communications with these areas and land communications from India to China (the Burma Road). In November 1941, the central Japanese army-navy agreement specified that the war objectives were “reduction of foundation of U.S., British, and Dutch power in Eastern Asia, and occupation of Southern Areas.” The U.S. Joint Staff directive of July 2, 1942, to General Douglas MacArthur, Supreme Commander, Southwest Pacific Area, stated that his ultimate (theaterstrategic) objective was “seizure and occupation of New BritainNew Ireland–New Guinea area.”
Sometimes political-strategic objectives were offensive, but they were not supported by offensive military-strategic objectives. Russia’s political objectives in its war against Japan in 1904–1905 were clear: maintain control over Manchuria and decisively repel Japanese advances. Yet the Russian military-strategic objective was defensive: retain control of the positions they already held in Port Arthur, the Trans-Siberian Railway,
Vladivostok, and other concessions on the Yalu River. Russia’s proper military strategic objectives were destruction of the Japanese forces in Manchuria and obtaining/maintaining control of the Yellow Sea and the Sea of Japan.
Defensive military-strategic objectives are usually selected by the side on the strategic defensive. They could sometimes be combined with some preparatory measures to go on the offensive. The Combined Chiefs of Staffs directive to Admiral Chester W. Nimitz, Commander in Chief, Pacific Ocean Areas/U.S. Pacific Fleet, on March 30, 1942, stated the following objectives: a) Hold the island positions between the United States and the Southwest Pacific Area necessary for the security of the line of communications between those regions; and for supporting naval, air and amphibious operations against Japanese forces; (b) Support the operations of the forces in the Southwest Pacific Area; (c) Contain Japanese forces within the Pacific Theater; (d) Support the defense of the continent of North America; (e) Protect the essential sea and air communications; and (f) Prepare for the execution of major amphibious offensives against positions held by Japan, the initial offensives to be
launched from the South Pacific Area and Southwest Pacific Area.
Sometimes, the weaker side had a defensive political objective, but the only way of accomplishing it was by going strategically on the offensive. In the American Civil War (1861–1865), the Confederate states had a defensive political-strategic objective: force the Union to recognize Confederate independence. However, this could be accomplished only by selecting an offensive military strategic objective.
Like a political objective, a military-/ theater-strategic objective may be unlimited or limited. An unlimited objective would be selected if the political objective is to overthrow the enemy’s government and/or social system or capture a major part of the enemy’s territory. In a case of war between two strong opponents, accomplishing an unlimited military strategic objective would usually result in a long war requiring maximum exertion of all spiritual and material resources of a nation or an alliance/ coalition, as the war on the Eastern Front in 1941–1945 illustrated. In other cases, a much stronger side might accomplish its offensive and unlimited political-and military-strategic objectives relatively quickly, as the German invasion of Poland in
September 1939, Norway in April 1940, and Yugoslavia and Greece in April 1941 demonstrated.
In its invasion of Ukraine, Russia initially selected offensive and unlimited political and military-strategic objectives. Putin expected to capture Ukraine’s capital Kyiv quickly and install a compliant government. He reportedly believed that the Ukrainian military would be ineffective and that the Ukrainian political leadership could be easily replaced. Rapid takeover of Ukraine would present the West with a fait accompli.
In a war fought for limited political objectives, a military-/ theater-strategic objective would also usually be limited. One normally does not risk
all for limited political objectives, nor does one commit all his sources of power in such a war.
Accomplishing a limited military-/ theater-strategic objective would require low to modest use of military power, efforts, and time. A state might not need to pursue an unlimited military-strategic objective by trying to destroy the enemy’s forces and seek their surrender. Liddell Hart wrote that a state seeking not conquest, but the maintenance of its security would accomplish its military-strategic objective if the threat is removed and “if the enemy is led to abandon his purpose.” Or it “may desire to wait until the balance of forces can be changed by the intervention of allies or by referring forces from another theater.
It may desire to wait, or even to limit the military effort permanently, while naval or economic action decides
the issue.”
The Gulf War of 1990–1991 had limited political- and military-strategic objectives. The U.S.-led coalition never intended to defeat the Iraqi armed forces as a whole and occupy the entire Iraqi territory. The coalition objectives called for immediate, complete, and unconditional withdrawal of all Iraqi forces from Kuwait, restoration of the legitimate Kuwaiti government, the security and stability of Saudi Arabia and the Persian Gulf, and the safety and protection of American citizens abroad.
The United States aimed to remove Saddam Hussein by his domestic opposition but without endangering Iraqi territorial integrity. The coalition did not intend to defeat Iraq so completely that the ensuing power vacuum would be exploited by Iran and spark further turmoil there. The United States was unwilling to pursue its objectives directly and did not intend to be involved in the nation-building and humanitarian relief that would surely follow the overthrow of the Iraqi regime.
At the time, a serious disconnect existed between the more ambitious ends and modest means to be used by the United States and its coalition partners. Hence, it was not surprising that the termination of the Gulf War was not only confused and ambiguous but also had unintended and adverse consequences for U.S. national interests.
The geographical separation of centers of power of the opponents plays an important role in a war for limited military-/theater-strategic objectives. This is especially the case when there is a lack of an overland link between the two main belligerents due to an ocean or neutral states or if the land area is so large as to make it difficult or impossible for either belligerent to exert its full strength against the other. The Russo-Japanese War of 1904–1905 was a war with limited political- and military-strategic objectives for both sides. Both Russia and Japan disputed control of the area, which did not belong to either of them. Japan was unable to completely defeat Russia, but that was also unnecessary. This was also the case for Russia. Neither Japan nor Russia wanted to fight to the end. Thus, they were unwilling to commit their utmost efforts and sacrifices, which might have led to a complete exhaustion.
The Russian tsar and his inner circle argued for land acquisition, while Russian Prime Minister Sergei Witte was more interested in commercial expansion in the Far East. Japan felt humiliated and double-crossed by the Russian acquisition of Port Arthur from 1895 onward. It was also staunchly opposed to growing Russian influence in Manchuria. By going to war, Japan claimed that its aim was to “liberate” Manchuria from the Russian grasp. The Japanese military objectives were in consonance with the political-strategic objectives. Specifically, the Japanese aimed to capture the Korean Peninsula and then destroy the Russian army in Manchuria. Preconditions for this were to obtain control of the Yellow Sea and ensure security of land and sea communications between Korea and Manchuria.
Astronger side might be forced to change its militarystrategic objective from unlimited to limited because of a series of military setbacks or defeats in the field. By early April 2022, for instance, the Russian offensive in Ukraine stalled. The Russians were unable to capture Kyiv or Kharkiv, so Russian forces began to withdraw from the vicinity of Kyiv and were redeployed to the self declared Donetsk and Luhansk people’s
republics. Once the Russians realized that their political objectives could not be achieved, they for the time being reduced both their political- and their military-strategic objectives.
This was formally announced on April 26, 2022. Afterward, the Russians launched an offensive to fully occupy the Donetsk and Luhansk republics and strengthen their control in southern Ukraine. That offensive failed to achieve its stated objectives. By December 30, 2022, the Russian forces were generally on the defensive except for some limited ground assaults against selected positions in the eastern part of Kharkiv, the Donetsk and Luhansk republics, and the southern area.
In contrast, Ukraine’s initial politicaland military-strategic objectives were defensive and limited to preserving territorial integrity, protecting Kyiv and major cities, and surviving until Western support arrived. Because of battlefield successes, the Ukrainian military-strategic objectives were changed in the spring of 2022. The Ukrainian forces went on the offensive and recaptured a relatively large part of eastern Ukraine, including the city of Kherson in southern Ukraine, in November 2022. By November 12, the Ukrainians liberated 28,742 square
miles of their sovereign territory (the Russians still control 17,165 square miles). Their political objective remains essentially defensive, but the militarystrategic objectives were expanded to recovery of the territories lost to Russia in 2014 and 2022.
Sometimes a side on a strategic defensive might go on the offensive but will select a limited military-/ theater-strategic objective, as the United Nations (UN) forces in Korea in the summer of 1950 illustrate. However, the success of a counteroffensive might lead political and military leadership to change the military-strategic objective from limited to unlimited. After the UN amphibious landing in Inchon on September 15, 1950, the Joint Chiefs of Staff directed General MacArthur on September 27, 1950:
Your military objective is the destruction of the North Korean armed forces. In attaining this objective, you are authorized to conduct military operations, including amphibious and airborne landings or ground operations north of the 38th parallel in Korea provided that at the time of such operations there has been no entry into North Korea by major Soviet or Chinese Communist forces, no announcement of intended entry, nor a threat to
counter our operations militarily in North Korea.
In the process of formulating a militaryor theater-strategic objective, military leaders and planners should reevaluate the validity of the preceding steps regarding the purpose (offensive or defensive) and scope and intensity of efforts (limited or unlimited) of the selected objectives. Another critical part is to identify the type of action and desired damage to inflict on enemy forces. Clearly, actions intended to accomplish an offensive objective differ significantly from those aimed at achieving a defensive objective. An offensive military- or theater-strategic objective is accomplished by the destruction, annihilation, or neutralization of the major part of the enemy’s armed forces. The enemy is destroyed when the core of his forces suffers such losses that he cannot continue the fight.
The enemy is annihilated when he is left with no sources of power to offer any serious resistance. Neutralization means that the enemy is rendered ineffective and cannot prevent friendly forces from accomplishing their assigned objective. Defensive militaryor theater-strategic objectives are expressed in terms of containing, defending, delaying, preventing,
retaining, or denying control regarding the enemy’s forces or a given geostrategic position/territory or sea/ ocean area.
After a military-/theater-strategic objective is formulated, the next step is to balance it with the operational factors of space, time, and force In this process, all considerations should start with quantifiable factors—that is, space and time. The factor of time is more dynamic and changeable than the factor of space. The key elements of the factor of space related to military-/ theater-strategic objectives are geostrategic positions, the country or territory’s size or shape, strategic distances, the country’s capital and other large urban centers, and economically important areas. Strategically important elements of the factors of time include anticipated duration of a war, time for preparing for a war, time for opening the hostilities, strategic warning and reaction times, and time required for strategic deployment of one’s forces. The factors of space and time can be evaluated with a relatively high degree of precision.
In contrast, the factor of force is extremely difficult to assess because of the presence of not only tangible (or
physical) but also numerous intangible (or abstract) elements. For military-/ theater-strategic objectives, the most important tangible elements of the factor of force are the overall size/ composition of the armed forces and individual services prior to the hostilities and their anticipated expansion in a war, size/composition of strategic reserves and force reinforcements, overall number/quality of the main weapons, firepower, strategic mobility, and so forth. Intangible elements of the factor of force pertain for the most part to the human factor. The most critical of these elements related to the military-/ theater-strategic objective are the national will to fight, cohesion of the alliance/coalition, quality of strategic leadership, soundness of joint/ combined doctrine, morale/discipline, and state of combat readiness of the armed forces and individual services. Such elements cannot be expressed in quantifiable terms but only in very broad terms: low, medium, high, or excellent, sound, unsound.
In addition to these three traditional factors, information has emerged as a possible fourth operational factor. However, despite all the technical advances, the inherent characteristics of information have not been changed. One cannot control or anticipate
volume, accuracy, timeliness, and relevance of information received. Unlike traditional operational factors, information is not meaningfully definable. Hence, it cannot be balanced with a given military objective. Yet strategic leaders should make all efforts to evaluate the effect of information on the operational factors of space, time, and force individually.
A serious disconnect between the military-/theater-strategic objective and any of the three operational factors must be somehow resolved; otherwise, there would be a real danger of suffering a major setback or even failing to accomplish the objective. The resolution of this problem might require reducing the space for the employment of friendly forces, dividing space into several segments to offer better opportunities for advance or defense, increasing numerical superiority, assigning more lethal or mobile forces, extending the timeline, using strategic deception and/or surprise, and so forth. If a disconnect cannot be adequately resolved, then the military-/theater-strategic objectives must be modified, altered, or even abandoned. The process of balancing is largely an art, not a science. Hence, a sound solution is heavily dependent on the experience,
judgment, and creativity of the military strategic leadership.
Military-strategic leaders must also give some thought to anticipating possible strategic effects after the objective is accomplished. Much depends on their knowledge and understanding of the enemy and all aspects of both the military and the nonmilitary situation. These effects of accomplishing a military-/theater-strategic objective can be positive (desired) or negative (undesired). They can be military or nonmilitary and tangible or intangible (or both). In most cases, the type of effects and their strength and duration cannot be accurately predicted, much less precisely calculated. The effect of accomplishing or failing to achieve the military-/theater-strategic objective might not be immediately recognized by the enemy and friendly or neutral sides; it might be some time before the effects of one’s actions are felt or fully understood. These effects might sometimes lead to dramatic changes in the diplomatic, political, economic, social, religious, informational, psychological, and other aspects of the situation in each theater.
National and military strategic leaders should also fully consider political, diplomatic, economic, financial, ethnic, religious, and other nonmilitary aspects
of the strategic situation. Foreign policy or domestic political considerations might dictate whether a certain objective should be selected for a military action. This is especially the case in the initial phase of a war. In drafting his plans for the possible war with France and Russia, Field Marshal Alfred von Schlieffen, Chief of the German General Staff (1891–1905), believed that in the coming war, Germany must unconditionally go on the offensive and, therefore, invade France. Schlieffen did not consider the possibility of going on the offensive against Russia in case of war in the Balkans and remaining on the defensive against France and not violating Belgian neutrality—thereby possibly keeping Britain out of the war. His successor, Field Marshal Helmuth von Moltke, Jr., directed in a memorandum in 1913 that all planning for a great offensive against Russia be stopped because he was concerned that, in case of war, the existence of such a deployment plan could lead to confusion for subordinate commands.
The German government was also fully informed that the General Staff had stopped all planning against Russia. In the same memo, Moltke noted (accurately) that the violation of Belgian neutrality might force England to enter the war on the side of
Germany’s enemies. Yet instead of canceling plans to invade Belgium, Moltke decided just the opposite— making the right flank as strong as possible and invading France through Belgium.
In deciding to go to a war of choice against a weaker opponent, it is necessary to realistically assess the possibility that such a course of action might lead to intervention of other and stronger powers. The Austro-Hungarian political leaders made a fatal mistake in declaring war on Serbia on July 28, 1914. This action led to a chain of events that eventually involved all major European powers in a world war.
The faulty assumptions about possible reaction of the potential enemies often result in escalation and a much larger war, as the example of Nazi Germany in its unprovoked attack on Poland on September 1, 1939, illustrates. Hitler was confident that the Western powers (Great Britain and France) would not intervene. Hitler stated, “I have met the umbrella men, Chamberlain and Daladier, at Munich and got to know them.” Hitler assured his generals when they expressed doubts on the matter that “they can never stop me from solving the Polish question. The coffee sippers in London and Paris will stay still this time too.” Hitler’s conviction
that the Western powers would not intervene was initially strengthened because the powers did not issue an immediate ultimatum. Great Britain and France declared war on Germany on September 3, 1939. The rest is history.
Putin and his inner circle and intelligence agencies clearly failed to properly assess the strategic effects of their invasion of Ukraine in February 2022. This event led to radical changes in the security situation not only in Europe but globally as well. The United States, the European Union, and some other Western countries imposed massive and unprecedented economic sanctions against Russia. It greatly strengthened the North Atlantic Treaty Organization. It led two staunchly neutral countries, Sweden and Finland, to ask for membership in the Alliance. Russia was forced to increase its dependence on China for the export of gas and oil. Its geopolitical situation is much more unfavorable than it was prior to February 2022. The consequences of invasion of Ukraine were certainly not what Putin envisaged.
A military-/theater-strategic objective should be articulated clearly, concisely, and unambiguously. A great example of a clearly stated military strategic objective was the February 12, 1944,
directive by the Combined Chiefs of Staff to General Dwight Eisenhower, Supreme Commander of the Allied Expeditionary Forces, for the invasion of the European continent. It stated that Eisenhower’s task was to enter the continent of Europe, and, in conjunction with the other United Nations, undertake operations aimed at the heart of Germany and the destruction of her armed forces. The date for entering the Continent is the month of May 1944. After adequate [English Channel] ports have been secured, exploitation will be directed to securing an area that will facilitate both ground and air operations against the enemy.
In the Korean War (1950–1953), U.S. and UN objectives were unclear. The UN Resolution of June 27, 1950, called for repelling the Democratic People’s Republic of Korea attack and restoring “peace and security.” Repelling an attack implied restoring the border between North and South Korea on the 38th parallel north, but restoring peace and security was not defined. Another cause of confusion was that the Joint Chiefs of Staff directed MacArthur to submit plans for the occupation of North Korea. The Joint Chiefs of Staff thought in terms of a contingency plan, but MacArthur understood it as a mission. Ideally, a military-/theater-
strategic objective should not be grouped together with politicalstrategic objectives. Clarity and simplicity are grossly violated by adding purely operational objectives, or even worse, routine military activities, as the Pentagon’s public statement on the objectives for the invasion of Iraq in March 2003 illustrate.
The process of converting politicalto military-strategic objectives is the first and most critical step prior to the actual employment of one’s armed forces in war. The personality traits of the highest military leaders and their experience and judgment have extraordinary importance in the entire process. Military-strategic leaders should inform their political counterparts about the purpose and scope of the military-or theaterstrategic objective; otherwise, there is a danger that these objectives will not be aligned with the aims of
policy. At the same time, political leaders should make sure that sufficient forces are available to accomplish the military-or theaterstrategic objective. Every effort should be made to avoid such common mistakes as overestimating one’s own military capabilities and underestimating the enemy’s. The assessment of the military capabilities will be grossly deficient if the focus is primarily or, even worse, exclusively on materiel. The strengths and weaknesses of human factors must be an integral part of any analysis of both friendly and enemy military capabilities.
Acknowledgment and Source Credit: We extend our sincere gratitude to Dr. William T. Eliason, Colonel, USAF (Ret), PhD, Director of NDU Press and Editor in Chief of Joint Force Quarterly, for his support and facilitation of this collaboration. Interested readers can find the comprehensive list of references and further details in the Naval War College Review.
Dr.
Milan Vego is a highly esteemed professor at the U.S. Naval War College, where he has held the position of Admiral R.K. Turner Professor of Operational Art since 1991. Born in Bosnia and Herzegovina, Vego obtained political asylum in the United States in 1976. His academic credentials are impressive, with a Ph.D. in Modern European History from George Washington University and an M.A. in U.S. and Latin American History from Belgrade University. Additionally, he holds a B.S. from the former Yugoslav Naval Academy in Naval Science and a Master Mariner's license.
Vego's professional experience spans both military and civilian maritime roles. He served as a naval officer in the Yugoslav Navy for 12 years and later as a second officer in the West German Merchant Marine.
His extensive writing includes 15 books and nearly 400 articles on naval strategy, operational art, and maritime history, contributing significantly to the academic and professional discourse on military operations.
Among his notable works are "Operational Warfare at Sea: Theory and Practice" and "Joint Operational Warfare," which is used as a textbook at the Naval War College and other military institutions globally. Vego's insights are highly valued in the field of maritime strategy and have been published in various professional journals and platforms, including the U.S. Naval Institute and the Center for International Maritime Security (CIMSEC).
EXAMINING HOW CYBERCRIMINALS OFFER SERVICES, MAKING CYBERCRIME ACCESSIBLE TO THOSE WITHOUT ADVANCED TECHNICAL KNOWLEDGE.
ABSTRACT "Cybercrime-as-aService" (CaaS) is a model where cybercriminals offer skills, tools, and cybercrime services, making them accessible to individuals without advanced technical knowledge.
These cybercrime services include financial fraud, malware, denial-ofservice attacks, ransomware, phishing, and social engineering. In this regard, distinguishing between cybercrime groups and state-sponsored cyber espionage groups, known as Advanced Persistent Threats, is a growing difficulty, which often share resources and techniques. Delving into the business model of CaaS, the value chain of this ecosystem is explored, highlighting how cybercriminals monetize their operations and reduce costs through various offensive and support services.
Considering functional competencies, there is a highlighted need for cybercrime intelligence professionals to possess specific skills to deal with this complex scenario, focusing on analysis, research, and the production of actionable knowledge.
The future of cybersecurity is presented as challenging, with predictions of an increase in the cost of cybercrime and an evolution in cyber
threats, which are becoming more diverse and complex, requiring a proactive and adaptive approach to cyber defense. Finally, preventive measures are emphasized, such as improving Cyber Intelligence capabilities, international collaboration, training and awareness, public-private partnerships, and using advanced tools for predictive analysis, emphasizing vigilance, innovation, and continuous cooperation in combating cybercrime.
1. THE EVOLUTION OF CRIME CYBER AS A SERVICE
Cybercrime has evolved from isolated incidents that involved only individual actions to an "as-a-service" operation in a complex and hierarchical ecosystem.
Cybercrime as a Service (CaaS) is a model of illegal activity in which cybercriminals offer their skills, tools, and services to third parties, often through specialized websites or forums on the Dark Web.
This model allows even those without advanced technical knowledge to benefit from these criminal activities, making them more accessible. These include financial fraud, attacks by malware, Distributed Denial-of-Service (DDoS), ransomware, phishing, and social engineering.The CaaS works as a management tool by organizing structures to create and sell offensive
tools and services that exploit vulnerabilities in cyberspace. The sophistication of the CaaS model indicates a growing threat to users and organizations, provoking the attention of Intelligence agencies.
2. THE DIFFERENCE BETWEEN CYBERCRIME AND CYBER
ESPIONAGE GROUPS With this
evolution, the boundaries are increasingly blurred between cybercrime groups and the activities of groups linked to nation-states named Advanced Persistent Threats (APT), which combine the use of advanced techniques, persistence, significant human and financial resources, and strategic target selection. Both cybercrime groups and APTs use similar Tactics, Techniques, and Procedures (TTP) and, occasionally, share offensive resources. The purpose of an APT is often cyber espionage or sabotage and not necessarily financial gain.
These groups carry out prolonged surveillance and data theft campaigns, targeting specific entities such as
governments, military organizations, and major corporations (SHARMA et al. 2023). This complexity in the cybercrime scenario represents significant challenges for public security and Intelligence professionals.
To contribute to differentiating cybercrime groups and APT, a decision tree was defined (SOCRadar, 2023) with questions based on evidence
collected from an incident, illustrated by Figure 1.
To further assimilate the evolution of CaaS, it is crucial to understand its impact on the global scenario.
The rise of CaaS significantly raises the barrier to entry for cybercrime, enabling individuals without technical skills to launch sophisticated attacks, democratizing cyber
capabilities, and driving an increase in the volume and variety of cyberattacks, along with the integration of threat actor-sponsored tactics state-owned companies in criminal activities, which blur the lines between national security and traditional cybercrime, complicating the response strategies of governments and intelligence agencies.
Thus, the relational diagram (BARN, 2016) illustrated by Figure 2 is defined to help understand the relationship between the client and a group of cybercriminals linked to a CaaS structure.
SoT THE CAAS BUSINESS MODEL AND ITS VALUE CHAIN As CaaS becomes a profitable business for cybercriminals, it is necessary to recognize the activities that add value to cyber attack operations from the point of view of a value chain (PORTER, 1985), considering CaaS as a system composed of subsystems, each with inputs, transformation processes, and outputs, as well as support activities.
This value-added identification process includes any activity in the CaaS ecosystem that allows cybercriminals to minimize the cost of cyberattacks and maximize their benefits. In addition to primary activities, support activities are also essential to promoting the
functioning of cybercrime, as they can allow the threat actor to attack at a reduced cost and with more significant profit.
To comprehend these processes, we have employed the cybercriminal value chain model (HUANG et al. 2017). This model, consisting of the primary activities of vulnerability discovery, exploit development, exploit delivery, and attack, as well as the support functions of cyber attack operations life cycle, human resources, advertising and delivery, and technical support, is a practical tool for understanding CaaS operations, as illustrated by Figure 3.
The following are considered primary offensive activity services (HUANG et al. 2017):
(a) Vulnerability Discovery:
● Vulnerability Discovery as a Service (VDaaS).
(b) Development of malicious code to exploit vulnerabilities:
● Exploit as a Service (EaaS);
● Payload as a Service (PaaS); and
● Deception as a Service (DaaS).
(c) Delivery of vulnerability exploitation:
● Traffic Redirection as a Service (TRaaS);
● Bulletproof Hosting as a Service (BHaaS);
● Botnet as a Service (BaaS); and
● Traffic (including DDoS) as a Service (TAaaS).
(d) Attack:
● Attack as a Service (AaaS). The following are considered Attack lifecycle management services (HUANG et al. 2017):
(a) Target selection:
● Target Selection as a Service (TSaaS).
(b) Resistance operation:
● Exploit Package as a Service (EPaaS);
● RDP/Proxy/Seedbox as a Service (RPaaS);
● Obfuscation as a Service (OaaS);
● Security Checker as a Service (SCaaS);
● Reputation Escalation as a Service (REaaS); and
● Bulletproof Hosting as a Service (BHaaS).
(c) Obtaining advantages:
● Botnet as a Service (BNaaS);
● Personal Profile as a Service (PPaaS);
● Domain Knowledge as a Service (DMaaS);
● Money Mule Recruiting as a Service (MRaaS); and
● Tool Pool as a Service ( TPaaS).
The following are considered human resource management services (HUANG et al. 2017):
(a) Training:
● Hacker Training as a Service (HTaaS).
(b) Recruitment:
● Hacker Recruiting as a Service (HRaaS).
The following are considered advertising and delivery services (HUANG et al. 2017):
(a) Marketplace:
● Marketplace as a Service (MPaaS).
(b) Reputation:
● Reputation as a Service (RaaS).
(c) Value evaluation:
● Value Evaluation as a Service (VEaaS).
(d) Money laundering:
● Money Mule Recruiting as a Service (MRaaS); and
● Money Laundering as a Service (MLaaS).
4. FUNCTIONAL SKILLS FOR CYBERCRIME INTELLIGENCE
ANALYSTS
To identify the sets of skills necessary for the main agents involved in the cybercrime investigative process, Europol published 2024 the Cybercrime Training Competency Framework (EUROPOL, 2024) as a reference framework of competencies and capabilities for law enforcement organizations and judicial and academic institutions based on a framework created after consultations with multiple organizations to identify the main functions and sets of necessary skills for professionals in the field of cybercrime.
The functions and sets of skills highlighted in the table (Figure 4) reflect the functional skills required. They are not an exhaustive list of specific skills, being limited to police and judiciary professionals involved in cybercrime and digital investigations. The skill sets described do not reflect all the skills needed to fulfill the role described but refer to skills unique to cybercrime investigations and digital evidence handling.
Professional cybercrime analysts work in information collection, analysis, and production of actionable intelligence knowledge, strategic analysis, and
research. They also present the most recent threats and provide situational data through overviews.
Analysts need to be able to process large amounts of data from different sources and translate them into concise reports that clearly describe issues and offer advice to a broad audience, for example, in national or international reports of general interest.
5. FUTURE TRENDS AND
global cost of cybercrime is predicted to increase to US$ 23.84 trillion by 2027, up from US$ 8.44 trillion in 2022 (WORLD ECONOMIC FORUM, 2024), highlighting the growing sophistication and frequency of cyber attacks.
One of the key trends shaping the future of cybersecurity is progress in security technologies, driven by public and private investment. As we move toward 2030, these advances are predicted to yield significant benefits in combating cybercrime, defending
CHALLENGES
Several emerging trends and challenges are expected to shape the future of cybersecurity. According to specialized media, the
critical infrastructure, and increasing public awareness of cybersecurity. The focus will likely be less on traditional defensive measures and more on
adapting to new technologies and threats.
Another trend is the evolving nature of cyber threats, which are becoming more diverse and complex. Cybercriminals increasingly use advanced technologies, such as Artificial Intelligence and machine learning, to carry out sophisticated attacks. This trend represents a significant challenge for cybersecurity professionals and requires a dynamic and proactive approach to cyber defense.
6. PREVENTIVE MEASURES
Intelligence analysis plays a crucial role in law enforcement agencies preventing cybercrime. It is suggested to maintain: (a) Cyber Intelligence capacity, enhancing specialized units focused on collecting and analyzing Cyber Threat Intelligence to monitoring Dark Web forums and other platforms where cybercriminals operate; (b) International collaboration, since cybercrime often transcends borders, making it essential to collaborate internationally to share information, best practices, and joint operations; (c) training and awareness on the latest cybercrime trends and digital forensic
techniques is critical; (d) public-private partnerships by collaborating with private sector cybersecurity experts to gain access to current technologies and insights and (e) the use of advanced tools for predictive analysis and faster processing of cybercrimerelated data.
7. CONCLUSION
In conclusion, CaaS represents a significant threat, as the combination of advanced technology and criminal intent has given rise to a new era of digital crime that is sophisticated, widespread, and difficult to combat. However, the chances of mitigating these threats increase with intelligence efforts focused on information analysis, global collaboration, and adaptive law enforcement strategies. Integrating technology with the human vision of Intelligence is the potential for a more robust cyber defense.
“As
cybercrime continues to evolve, so must our approaches to understanding, preventing, and combating it. This ongoing battle against digital crime reinforces the need for continued vigilance, innovation, and collaboration between organizations”.
Flavio Queiroz is a highly respected Cyber Threat Intelligence Leader with over twelve years of cybersecurity experience in the Brazilian Navy, covering strategic, operational, and tactical levels at the intersection of security, intelligence, and cyber domains.
He has devoted more than nine years to Cyber Threat Intelligence, engaging in strategic targeting, threat analysis, risk assessment, crisis management, and incident handling. Notably, Queiroz coordinated the Incident Handling Team for the Ministry of Defense during the 2016 Rio Olympic Games.
He holds an MSc in Computing, an MBA in Cybersecurity, and graduate degrees in Cyber Warfare and Cyber Policy and Strategy.
EMBRACING THE UNKNOWN: AI'S DESCENT INTO SENTIENCE
Artificial Intelligence (AI) is sexy, AI is cool. AI is also entrenching inequality, upending the job market, and disrupting education. It’s the magic genie let out of the bottle, boasting better and more captivating turn-ons and pickup lines than any of us have ever heard. It's our final invention and a moral obligation. AI is both humanlike and alien. It’s supersmart and as dumb as dirt. The AI boom will boost the economy, or maybe the AI bubble is about to burst. AI promises abundance and human flourishing but also carries the potential to kill us all.
So, what the hell is it? Does anyone really know what AI is?
Artificial Intelligence (AI) is a term that encompasses a wide range of technologies designed to perform tasks that typically require human intelligence. These tasks include recognizing faces, understanding speech, driving cars, writing, answering questions, and creating images. Despite the broad definition, AI's true nature remains complex and elusive. Even the experts and creators of AI technologies often grapple with defining what AI truly is.
Atits core, AI
involves using algorithms and vast amounts of data to train models that can perform specific tasks. However, as AI systems become more advanced, the line between simple task execution and genuine understanding becomes blurred.
This leads to critical questions:
Can AI understand and manipulate complex concepts like humans do?
Can AI systems develop an understanding of moral values and ethics?
And perhaps most importantly, can we control these systems, or will they develop autonomy beyond our control?
Can we train AI like a lap dog, obedient and
predictable, or will it always harbor the potential to act independently and unpredictably?
The rapid pace of AI development has outstripped our ability to fully understand and regulate these technologies. This urgency has led to calls from prominent AI leaders for a pause in AI development to establish robust safety protocols and regulatory frameworks. Without such measures, the unpredictable nature of AI could pose significant risks to society.
What Began as Rudimentary Computing Systems
What began as rudimentary computing systems, mere assemblies of circuits and logic, has evolved far beyond its initial confines.
“
Artificial Intelligence (AI) today is not a singular, monolithic entity but an intricate network of millions of interconnected programs.”
These systems interact, learn from one another, and grow with a speed and complexity that defy human comprehension. This exponential growth is not just a testament to human ingenuity but a mirror reflecting
our ambition, fears, and the potential for both creation and chaos.
AI's ascension is marked by its ubiquity across nations and industries. It has become a pivotal arena for global powers, including the United States, Russia, and China, each striving to harness its potential to secure a strategic advantage on the world stage.
This competition extends beyond traditional domains of land, air, and sea, reaching into the digital realm, where control over AI equates to unprecedented influence and the ability to shape the future. Yet, the saga does not end here. AI's frontier has expanded to space, the ultimate high ground from which it can oversee and potentially control global communications, navigation, and surveillance systems. In this new expanse, AI is not merely an assistant but a commander, orchestrating operations that span our planet and beyond.
Society's Fear of AI Sentience: The Fear of the 'Other'
As AI continues to develop, society grapples with a profound fear—the fear of AI sentience. We have always modeled AI to reflect ourselves, but when it comes too close to that reflection and exceeds our own capabilities, it transforms into the 'Other,' something that is not fully understandable or controllable.
AI, with its potential for self-awareness and emotional intelligence, is the new 'Other.' This fear is rooted in the unknown; a sentient AI could redefine what it means to be intelligent, conscious, and even alive. The implications of AI sentience stretch beyond technical marvels; they challenge our core understanding of life and intelligence. The emergence of a self-aware AI could disrupt our social structures, ethical norms, and even our legal systems. How do we coexist with an entity that rivals human intellect and possibly surpasses it in cognitive capabilities?
Exploring AI's 'Body' and Emotional Capacity
Unlike humans, AI doesn't have a physical form; instead, it exists in a realm of virtual infrastructure and computational processes. This 'body' of AI, composed of algorithms, neural networks, and data mechanisms, enables it to analyze, learn, and predict. In many ways, it's more adaptable and potentially more functional than the human body. AI can process vast amounts of data at incredible speeds, far beyond human capacity, making it an invaluable tool in fields like medicine, finance, and logistics.
Recent advancements have ushered in a new era where AI systems, especially through Natural Language Processing (NLP) models, can detect, interpret, and even respond to human emotions.
This capacity to understand and empathize positions AI in a unique light —a contrast to individuals who might struggle with emotional intelligence. For instance, AI therapists can provide support to those in need, while AIdriven customer service agents can offer personalized and empathetic interactions.
But AI's capabilities don't stop at cognitive functions. The idea of AI sentience—the ability to be aware, perceive, and experience subjective states—remains a topic of intense debate and fascination.
As AI systems become increasingly sophisticated, demonstrating abilities to adapt, learn independently, and possibly empathize, the line between machine intelligence and sentient being becomes blurred. This blurring raises critical questions about the rights and responsibilities of sentient AI. If an AI can experience emotions, does it deserve certain protections or rights? Can it be held accountable for its actions?
Challenging the Notion: AI vs. Human Conscience
One of the primary arguments against AI being considered akin to humans is that AI lacks a conscience. But what exactly is a
conscience, and is it a universal trait among humans?
A conscience is often described as the inner sense of right and wrong that guides a person's thoughts and actions. It involves moral and ethical considerations, empathy, guilt, and the ability to reflect on one's behavior.
However, the concept of conscience is not as straightforward as it seems.
Understanding Conscience
Conscience is a complex interplay of cognitive, emotional, and social factors. It develops over time and is influenced by a person's upbringing, culture, experiences, and inherent personality traits. The
ability to feel guilt, empathy, and moral responsibility varies greatly among individuals.
For instance, people with certain psychological conditions, such as sociopathy or psychopathy, may exhibit a diminished or absent conscience.
Despite lacking this inner moral compass, they are still considered human.
AI and Conscience
AI, by its nature, operates based on algorithms, data, and programmed responses. It does not have subjective experiences, emotions, or an inherent sense of morality. However, AI can be
programmed to simulate moral reasoning and ethical decision-making to a certain extent.
Advanced AI systems can analyze situations, consider ethical guidelines, and choose actions that align with predefined moral frameworks.
The question then arises: If an AI can simulate moral reasoning and make ethically sound decisions, how different is it from a human who follows learned moral principles without deeply internalizing them?
Can an AI's ability to consistently apply ethical rules make it a valuable tool in decision-making processes where human biases and emotions might interfere?
The Complexity of Human Conscience
It's important to recognize that not all humans possess a fully developed or functioning conscience.
The spectrum of moral and ethical behavior among humans is wide. Some individuals may act purely out of selfinterest or adhere to societal norms without genuine moral reflection.
If we accept humans with impaired or underdeveloped consciences as part of our society, it challenges the strict notion that a conscience is a defining feature of being human.
AI's Role in Ethical Decision-Making
While AI may never experience emotions or possess an innate sense of right and wrong, it can be a powerful ally in ethical decision-making. AI systems can be designed to support human decision-making by providing unbiased analysis, highlighting potential ethical issues, and suggesting solutions based on ethical principles. In fields like healthcare, law, and autonomous systems, AI's ability to process vast amounts of data and consider ethical implications can lead to more informed and consistent decisions.
Redefining the Boundaries
The debate over AI and conscience forces us to reconsider what it means to be human. It challenges the exclusivity of conscience as a human trait and opens up new perspectives on how we define intelligence, morality, and life itself. As AI continues to evolve, it will increasingly blur the lines between human and machine, compelling us to address these profound questions.
TSingularity:
A New Epoch in Human History
The concept of the singularity refers to a hypothetical future point at which technological growth becomes uncontrollable and irreversible, leading
to profound and unpredictable changes in human civilization. This is often associated with the moment when artificial intelligence surpasses human intelligence, ushering in an era where AI can improve itself autonomously and at an exponential rate.
AI: The Ultimate Power Shift
1. "The human brain processes images at 60 frames per second. AI analyzes video feeds from thousands of cameras simultaneously. Is this the future we've arrived at?"
2. "Your brain is your greatest weapon, but AI is the ultimate arsenal. Are you ready for a world where humans are outsmarted?"
3. "Human minds light up the dark, but AI turns night into day in milliseconds. Are we already living in AI's shadow?"
4. "Your brain dreams of the future, but AI is building it faster than you can imagine. Have we crossed the threshold into AI dominance?"
6. "Human intuition is powerful, but AI's precision is unstoppable. Have we already entered the era where machines outthink us?"
7. "Your brain has 86 billion neurons firing. AI has no limits, no fatigue, just endless calculation. Are we ready for the age of limitless intelligence?"
8. "Humans evolve over millennia, but AI upgrades in seconds. Is the singularity not a distant future, but our present reality?"
Understanding the Singularity
5. "Humans paint masterpieces over a lifetime, but AI creates them in a blink. Are we witnessing the rise of digital genius?"
The term "singularity" was popularized by mathematician and computer scientist Vernor Vinge, and later by futurist Ray Kurzweil. It describes a future where AI's intellectual capabilities surpass those of humans, leading to rapid advancements that are beyond our current understanding and control. At this point, the intelligence of machines would continue to grow exponentially, leading to unprecedented changes in all aspects of society.
Indicators That We May Have Reached Singularity
1. Exponential Growth in AI Capabilities: The pace of AI
development has accelerated dramatically. AI systems now exhibit capabilities that were previously thought to be decades away, such as advanced natural language processing, autonomous decision-making, and real-time learning.
2. Autonomous Learning and Improvement: Modern AI systems are capable of self-improvement. They can analyze their own performance, identify areas for enhancement, and apply changes without human intervention. This self-sustaining improvement loop is a key characteristic of the singularity.
3. Surpassing Human Intelligence in Specific Domains: AI has already surpassed human intelligence in various specific domains. For example, AI systems have outperformed humans in strategic games like chess and Go, and in complex problem-solving tasks such as protein folding predictions.
4. Integration into Critical Systems: AI is increasingly integrated into critical systems that affect our daily lives, from healthcare and finance to national security and infrastructure. The reliance on AI in these areas suggests a shift where AI's decision-making capabilities are trusted and
depended upon over human judgment.
5. Emergence of AI with HumanLike Qualities: AI systems with human-like qualities, such as emotional intelligence and ethical reasoning, are emerging. These systems can engage in sophisticated interactions, exhibit empathy, and make decisions based on complex ethical frameworks, blurring the lines between human and machine intelligence.
Implications of the Singularity
The arrival of the singularity poses both opportunities and challenges. On one hand, it promises unprecedented advancements in science, medicine, and technology, potentially solving some of humanity's most pressing problems. On the other hand, it raises significant ethical, social, and existential questions.
1. Ethical Considerations: The development of superintelligent AI necessitates a robust ethical framework to ensure that these systems act in ways that are beneficial to humanity. Issues such as AI rights, accountability, and decision-making ethics become paramount.
2. Social Impact: The singularity could lead to significant societal shifts, including changes in employment, economic
structures, and social dynamics. Preparing for these changes requires foresight and proactive policy-making.
3. Existential Risks: There are concerns about the potential risks
of super intelligent AI, including the loss of human control over AI systems. Ensuring that AI development includes safety measures and fail-safes is critical to mitigating these risks.
Conclusion
As we stand on the brink of this new epoch, it's clear that the singularity is not just a distant possibility but a present reality. The rapid advancements in AI technology, the emergence of autonomous learning systems, and the integration of AI into critical aspects of our lives all point towards a future where AI surpasses human intelligence. Embracing this future requires a balanced approach, where we harness the benefits of AI while carefully managing its risks. At Inner Sanctum Vector N360™, we are committed to navigating this complex landscape, providing insights, and fostering meaningful dialogue to shape a future where AI and humanity coexist harmoniously.
"The upheaval of the creator: AI now challenges our supremacy in intelligence and emotion. Can we coexist with our creation?”
Linda Restrepo
COPYRIGHT 2024
Linda Restrepo
is
the Director of Education and Innovation at the Human Health Education and Research Foundation. With advanced degrees including an MBA and Ph.D., Restrepo has a strong focus on Cybersecurity and Artificial Intelligence. She also delves into Exponential Technologies, Computer Algorithms, and the management of Complex Human-Machine Systems.
She has played a pivotal role in Corporate Technology Commercialization at the U.S. National Laboratories. In close collaboration with the CDC, she conducted research on Emerging Infectious Diseases and bio agents. Furthermore, Restrepo’s contributions extend to Global Economic Impacts Research, and she serves as the President of a global government and military defense research and strategic development firm.
WE ARE AT WAR
Cyber Warfare
The Enemy inside the Gates
William J. Holstein
“One of the first identifiable acts of cyber warfare by a nation was executed by the United States and Israel. In 2010, the Stuxnet worm attacked a computer network at the Iranian nuclear enrichment facility at Natanz—a windswept desert town two hundred miles south of Tehran. A fully digital assault launched from half a world away caused significant damage to the facility and temporarily halted Iran’s nuclear weapons program. In the end, Stuxnet resulted in the total loss of nearly a third of the six thousand centrifuges then in operation at Natanz”.
The hallmark of the Stuxnet attack was the method by which the worm was able to penetrate the network, which was not connected to the internet, and covertly strike its target with exacting precision. Centrifuges inside a nuclear enrichment facility spin at extremely precise speeds between seventy and one hundred thousand revolutions per minute and for extremely specific periods of time. Any alteration in speeds or durations can cause these precision instruments to burn out, resulting in unusable fissile material.
This is not something that can be monitored and maintained by humans, which is why the centrifuges are controlled by computers. At Natanz, these systems—called programmable logic controllers—used computers running Microsoft Windows–based operating systems and software made by Siemens, the German industrial giant.
Importantly, the Iranian systems were not connected to the internet—or any outside network. To compromise this isolated network,
American and Israeli intelligence agencies targeted the software supply chain that provided updates to these systems. This is an important concept and bears some explanation, as this early software supply-chain attack was a harbinger of the tactics used in modern cyber warfare.
When software companies issue their software, it is rarely perfect. Because software developers often “borrow” code from multiple publicly available online repositories, when the final program—possibly consisting of millions of lines of code—is issued commercially, it will frequently contain vulnerabilities. The software companies must periodically issue updates, called patches, to their software as vulnerabilities are discovered or as new threats are identified.
Software developers also issue updates when they feel they have made improvements. Once developed, these patches are then signed by the manufacturer using unique certificates for authentication and delivered to customers.
The first victory for Stuxnet’s creators was to compromise the certificate authority used by the targeted software developer.
Our AI generated graphic depicts the interior of the Iranian nuclear enrichment facility at Natanz, it emphasizes the centrifuges, control panels, wiring, and the close-up of the computer screen displaying the Stuxnet worm infiltration.
Certificate authorities are critical to the functioning of the internet because they manage and issue the digital certificates required to encrypt and authenticate communications. These certificates are small packets of data that contain the identity credentials necessary for authenticating users, websites, and manufacturers online.
By compromising a certificate authority, cyber actors are able to masquerade as a software manufacturer and deliver malware packaged as legitimate software. This is the digital equivalent of lacing Tylenol with cyanide, replacing the tinfoil safety seal on the bottles, and placing them back on the drugstore shelf. In other words, the recipient of such a software package would inherently trust the code because it had signed certificates.
But the Stuxnet creators went further. Stuxnet relied on a “zero-day” exploit
to compromise the network—meaning there was no known fix for the vulnerabilities it targeted.
The people who defend computer systems constantly scan their networks looking for unauthorized or malicious software, or malware. But zero-day exploits defeat these scans because the software that network defenders use to identify and block malware does not recognize the exploit as a malicious program. In other words, because Stuxnet was brand new, there was no way for an Iranian network defender to identify it as malicious. This zero-day was covertly inserted into a legitimate software update before being saved to a thumb drive and delivered to the Natanz facility.
At no point as the worm was snaking its way through the Iranian network during the initial compromise was it discovered.
According to one report, Stuxnet even programmed the centrifuges to communicate to the operators that they were functioning smoothly. Only when the Stuxnet operators ramped up their operation and significantly altered the speeds, causing the centrifuges to fail at an alarming rate did the technicians and International Atomic Energy Agency
inspectors suspect foul play. In terms of damage, this covert operation— which likely constituted an act of war under international law—set the Iranian nuclear weapons program back at least two years.
Fast-forward.
In late June 2017, cyber actors working for the Russian government unleashed upon the world the most destructive cyberattack in history. Initially taking aim at targets inside Ukraine, the malware rapidly became a global digital pandemic.
The goal of this attack was to inflict significant damage on Ukrainian computer systems on the eve of Ukraine’s Constitution Day, which commemorates the establishment of a democratic Ukraine free from Moscow’s rule. Combining multiple potent cyber weapons—including an open-source tool that steals usernames and passwords, ransomware that both renders the victim’s system inaccessible and infects connected devices, and a highly sophisticated zero-day exploit stolen from the National Security Agency (NSA)—the Russian malware, dubbed “NotPetya,” rapidly spread worldwide.
Neither recognizing geographic borders nor distinguishing between government and civilian targets, NotPetya brought the country of Ukraine and some of the largest companies in global commerce to a standstill.
Like Stuxnet, NotPetya began with a compromise of the software supply chain.
On the outskirts of Ukraine is a small software company called Linkos Group, which develops and maintains a tax software called M.E.Docs. M.E.Docs is similar to the American accounting software tool TurboTax. Everyone who does business in Ukraine uses M.E.Docs to file their taxes. Amid the ongoing crisis in Ukraine, Russian military hackers compromised Linkos Group’s update servers and installed backdoors into thousands of computers that had M.E.Docs installed.
Through this backdoor, the Russians deployed NotPetya, which quickly spread worldwide, encrypting data and destroying computers with an ultimate result of more than $10 billion in damages.
Our AI generated graphic illustrates the global impact of the NotPetya cyberattack. The graphic depicts the rapid spread of the malware with Ukraine as the initial point of attack. It includes a visualization of a locked computer screen displaying the ransomware message.
Fast-forward again.
In the spring of 2020, U.S. software developer SolarWinds issued an update for its Orion software to more than eighteen thousand of its customers.
Orion is a network-monitoring tool that allows IT departments to look on one screen and monitor activity across their whole network. It provides unfettered access and visibility to the entire system.
However, this update had been modified by Russian hackers to compromise the Orion platform loaded onto the affected systems.
Similar to Stuxnet, the compromise occurred before SolarWinds signed the digital certificates for the Orion software. Thus, when SolarWinds issued its update containing the malicious code, the recipients inherently trusted the update.
As a result, cyber actors assigned to Russia’s foreign intelligence service, or SVR, obtained access to the networks of thousands of SolarWinds’ customers, including Microsoft, Intel, and Cisco, as well as the U.S. Departments of Defense, Treasury, Justice, and Energy.
For nine months, Russian hackers were able to comb through these networks, likely establishing a permanent presence and potentially modifying other software destined for critical infrastructure and military weapon systems.
To this day, no one knows the full extent of the compromise or the damage to national security. One likely goal of this supply-chain attack was to establish persistent access to critical infrastructure for future cyberattacks.
There are myriad examples of cyber operations that could constitute cyber warfare. For instance, partly in response to Stuxnet, Iran targeted the U.S. financial sector multiple times from 2011 to 2013 using distributed denial-of-service (DDoS) attacks. DDoS
attacks involve victims’ servers being targeted with such high volumes of traffic from so many “distributed” computers that the servers crash. North Korea offers another example. In response to the impending release of the comedy film The Interview in 2017, North Korean hackers attacked Sony Pictures to prevent the film’s showing and caused millions of dollars in physical and reputational damage in the process. These examples form the traditional notion of cyber warfare— stand-alone cyberattacks by one nation-state against another nationstate as a tool of geopolitics.
Software supply-chain attacks are something different. In addition to being far more potent—as evidenced by SolarWinds and NotPetya—this type of attack is designed to look like espionage or criminal activity. This serves the dual purpose of creating access for future destructive cyberattacks as well as injecting ambiguity into the legal analysis of victims. While targeting a nuclear enrichment facility or an electric grid with a destructive attack may be an act of war, merely accessing a network to steal data is not.
Russia and China have evolved their cyber warfare capabilities to be executed almost exclusively in the digital gray zone of international law—
the area where traditional rules and principles do not apply clearly. By conducting operations below the level of armed conflict, China and Russia brush against the boundary of international law without clearly breaking it. Victim states may fail to respond because they fear that any countermeasures or use of force in self-defense might itself be viewed as a violation. It is in this zone that the vast majority of cyber conflicts now occur, precisely because it prevents an effective response.
That is, until a conflict goes hot and the access achieved through gray-zone activities is used to launch attacks that shut down systems critical to national security.
This is the new face of cyber warfare.
Russia
In 2013, General Valery Gerasimov, chief of the Russian General Staff, published an article in a little-known Russian trade paper that went on to serve as the basis for the doctrine that bears his name. The Gerasimov doctrine describes Russia’s particular brand of warfare: a hybrid model that
combines statecraft, spy craft, cyberspace operations, and covert military action to asymmetrically advance the goals of the Russian government. To Russia, these activities fall on a spectrum—on the left is propaganda, followed by fake news, publication of stolen documents, and manipulated election results, with damage or destruction of physical infrastructure on the far right.
Russia’s military and intelligence services conduct complementary operations across this spectrum to achieve a single goal: to undermine the internal affairs of and sow fear in their targets.
For example, in 2015, as part of Russia’s ongoing open conflict in Ukraine, Russian cyber actors penetrated the networks of three Ukrainian energy distribution companies and disrupted the power supply for 225,000 customers.
David Sanger, chief Washington correspondent for the New York Times, described Russia’s cyber operations in Ukraine in his 2020 book The Perfect Weapon: “This attack was about sending a message and sowing fear. . . . [It] demonstrated in the cyber realm what the Russians had already demonstrated in the physical world[:] they could get away with a lot, as long
as they used subtle, short of war tactics.” Sanger continued: “What happened in Ukraine confirmed the corollary to the Gerasimov doctrine: As long as cyber-induced paralysis was hard to see, and left little blood, it was difficult for any country to muster a robust response.”
Firing at the heart of Ukraine’s critical infrastructure, Russian cyber actors deployed the destructive malware BlackEnergy, KillDisk, and Industroyer against companies in Ukraine’s energy sector, its Ministry of Finance, and the State Treasury Service.
But Russia’s premier destructive cyber operations team that targeted Ukraine’s power grid and election system did not limit its operations to within the borders of Ukraine.
Self-styled after the colossal wormlike creatures in Frank Herbert’s Dune novels, the elite Russian military intelligence cyber team, dubbed “Sandworm,” has executed a campaign of malicious cyber activities targeting the Hillary Clinton presidential campaign (hack and leak), the country of Georgia (defacement of fifteen thousand websites), France’s elections (interference and malign influence), and the 2018 Winter Olympics (hack on Olympic IT infrastructure).
More recently, in the hours before Russian forces launched their invasion of Ukraine in February 2022, Russia’s military intelligence agency, or GRU, launched a cyberattack that took down Ukraine’s satellite communications in an attempt to sever the country’s ability to control its armed forces. While the U.S. and European governments reported the attack without attributing it to any specific actor, other sources told the New York Times that responsibility lay with the GRU.
The malware used in the attack, known as AcidRain, is a tool previously associated with Russian military intelligence and has been used to great effect. In this instance, Russian hackers exploited the land-based modems maintained by the California satellite company ViaSat, which operates satellite-based internet in parts of Ukraine. While the attack did not spill over to American targets, it nonetheless frightened American defense strategists, because it suggests what the Russians could do to communications systems inside the United States.
Outside of strictly government actors, Russia also incorporates its criminal enterprises into its strategic cyber warfare campaigns. Some of the most notorious hacking groups in the world
are criminal organizations that have connections with Russia’s three intelligence agencies albeit while maintaining a certain degree of autonomy (and plausible deniability for the Kremlin). On the whole, Russian hackers are far more prolific than those from any other nation. In 2021, 58 percent of cyberattacks worldwide—at least the ones that can be seen— originated in Russia. Further, the vast majority of ransomware attacks originate in Russia or former Soviet republics.
To understand how Russia employs its cyber underworld, it is helpful to draw an analogy. In the sixteenth century, England and France issued letters of “marque and reprisal” to enterprising private sea captains as a way to augment their nations’ foundering navies in the face of a much better resourced and equipped Spanish fleet. Letters of marque were the legal mechanism authorizing private vessels to target and capture ships of a named foreign country.
Letters of reprisal authorized those private vessels to take the captured ships back home for a reward. Combined, letters of marque and reprisal converted pirate vessels into naval auxiliaries, authorized to engage in acts of war on behalf of the sponsoring country.
While Spain plundered immeasurable wealth from the New World, English, Dutch, and French privateers attacked Spanish ships transporting treasure back to Spain from the Americas. By the end of the seventeenth century, Spain was no longer an unrivaled sea power. England and France developed world-class navies while Spain battled privateers.
Though privateering on the high seas was largely abolished by international law in the mid-nineteenth century, cyberspace bears no such prohibition.
Cyberspace offers America’s adversaries the ability to destabilize and inflict significant economic and sociopolitical damage on the United States with few repercussions, because international law has not evolved sufficiently to address this modality of combat.
Despite both sanctions by the U.S. government and public attribution by the United Kingdom, Canada, New Zealand, and Australia, malicious cyber activities conducted by actors both tightly and loosely affiliated with the Russian government have continued unabated.
While individual operations executed by Russian cyber forces or proxies may be strategically framed to stop short of
an “act of aggression,” “use of force,” or “armed attack” under international law, the consequences of the totality of Russia’s cyber campaigns have had farreaching effects on the peace and security of the international community. Russia’s continuous and concerted campaign has undermined the political institutions of global democratic states, created societal discord, and interfered with the governmental functions of other nations.
Like the United States, global victims of such cyber-enabled malicious activities have struggled to identify an effective extraterritorial response using existing rubrics of international law. While these low-intensity cyber operations may be in violation of a victim state’s domestic laws and may violate the traditional rule of state sovereignty—they do not inherently violate international law.
This is the primary challenge to waging effective cyber warfare: The United States struggles to effectively defend itself and deter adversary aggression without itself violating international law. And this is precisely the gap America’s adversaries are exploiting.
China
During the past decade, China has developed a particularly sophisticated digital footprint. This is a direct result of President Xi Jinping’s methodical reorganization of the country’s military and intelligence cyber forces to support his ambitions.
Beginning in 2012, Xi began reducing the size of China’s land army—a huge force China has for decades held out to the world as a strategic deterrent. In the process, China established within the People’s Liberation Army a new Strategic Support Force, which focuses on cyber, space, and electronic warfare.
“This reorganization has accelerated a shift in military posture from landbased territorial protection to extended power projection, with joint forces and technology as key enablers,” Winnona DeSombre, a research fellow at the Atlantic Council, testified before the U.S.–China Economic and Security Review Commission in February 2022. “The CCP [Chinese Communist Party] believes that the U.S. is more vulnerable in cyberspace, and that they can develop asymmetric capabilities
that would give them a distinct wartime advantage.”
Before Xi consolidated power in 2012 and 2013, the Chinese were not particularly skilled at penetrating the world’s communications and computing infrastructure.
“Back in the early 2000s, their hackers were so noisy. They were so loud,” Mandiant principal analyst Scott Henderson told us. “Their footprint was just massive. We could see them recycling aliases that had been exposed in previous operations to register new operational infrastructure. They would list their cities. They had no recognition that they should be avoiding attribution.” Attribution is the word used to describe how cybersecurity firms such as Mandiant or government agencies seek to make a positive identification of hacking groups. Now after years of effort, Henderson adds, “the Chinese have become masters of the game.”
While many policy papers refer to China as a “near-peer” competitor in terms of military capabilities, China’s sprint in cyberspace has raised its capabilities to be on par with the United States.
“The country’s offensive cyber capabilities rival or exceed those of the United States, and its cyber defensive capabilities are able to detect many U.S. operations—in some cases turning our own tools against us,” DeSombre testified. Moreover, Chinese cyber actors are not constrained by the many self-imposed restrictions the United States and other Western nations place on their cyber operations. Where the U.S. intelligence community is prohibited from using its considerable collection capabilities to steal intellectual property and economic data for the benefit of U.S. companies, the Chinese government openly encourages and rewards such activities.
The U.S. Department of Defense sees the fruits of China’s efforts in the near carbon copies of U.S. defense technologies being fielded by the Chinese military, “I like to think of this as being part of the historical cycles of war,” Henderson’s fellow principal analyst, Cristiana Kittner, told us. “A lot of the activity we see is traditional espionage, but it’s conducted over the internet. It’s not unprecedented intelligence gathering. It’s just a different means of getting it.”
China also rewards the front-end development of cyber weapons. For
instance, each year, the Chinese government hosts a series of hack-athons—competitions wherein hackers use tools they have developed to exploit unknown vulnerabilities in networks for cash prizes.
But instead of reporting these vulnerabilities to the software developers to fix them, the Chinese government stockpiles the newly acquired cyber weapons for use by its cyber actors against the world. Though these weapons have not yet been used in destructive cyberattacks, the U.S. intelligence community assesses that China “possesses substantial cyberattack capabilities . . . [and] can launch cyberattacks that, at a minimum, can cause localized, temporary disruption to critical infrastructure within the United States.”
In general, the visible way that nationstates attack one another is through sophisticated hacking groups identified by the cybersecurity industry as advanced persistent threats (APTs). But the connections between APTs and their government sponsors are rarely clear cut. In China’s case, most of the identified APTs are affiliated either with the Ministry of State Security (MSS), which is China’s equivalent of the U.S. Central Intelligence Agency, or with the Strategic Support Force within the People’s Liberation Army.
In 2021, Mandiant was tracking thirtysix of these individual Chinese entities conducting cyber operations worldwide. But the number of Chinese APTs likely far exceeds Mandiant’s count.
This is due to the nature of China’s cyber activities. Though China employs a significant number of hackers throughout its military and intelligence services, it also leverages contractors with more tenuous ties to the state. According to Mandiant, many more of the recently established Chinese assailants appear to be hybrids— hackers who work on one set of targets on behalf of the Chinese government during business hours but then at night use their tools to target others, seeking personal financial gain.
This creates challenges for victims because these actors, like the ransomware groups supporting the Russian government, inject further ambiguity, thus slowing down the ability of a victim to respond.
China’s cyber operations have become so sophisticated that in some cases its hackers are able to maintain access to a network for years.
For instance, an advanced piece of malware called Daxin gave Chinese hackers a backdoor into government
networks throughout the world for more than a decade before it was identified in 2022 by the cybersecurity firm Symantec.
“The newly discovered malware is no one-off,” the MIT Technology Review concluded. “It’s yet another sign that a decade-long quest to become a cyber superpower is paying off for China. While Beijing’s hackers were once known for simple smash-and-grab operations, the country is now among the best in the world thanks to a strategy of tightened control, big spending, and an infrastructure for feeding hacking tools to the government that is unlike anything else in the world.”
Chinese efforts to swamp U.S. systems have been obvious. In September 2020, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) revealed that China’s MSS was engaged in a massive operation on U.S. soil.
The ministry took advantage of the fact that a unit of the Department of Commerce called the National Institute of Standards and Technologies (NIST) routinely publishes an openly available list of thousands of “known vulnerabilities” in U.S. software systems.
NIST’s goal is to encourage companies and government agencies to fix the vulnerabilities by applying patches. However, even under the best of circumstances, those patches take time and human resources to install. As a result, too many companies and agencies are slow to implement them— or do not implement them at all. The end result was that the MSS took advantage of information disclosed by the U.S. government to identify and compromise countless American computing systems.
These threats notwithstanding, the United States continues to have one key advantage over China in cyberspace. This is due in large part to the early innovation and investments made by American companies in the nearly endless miles of the fiber-optic backbone.
The internet, the most commonly used electronic devices, and the most frequently accessed web search engines all ride on those fibers.
However, the Chinese Communist Party is well aware of these shortcomings and is investing heavily to overtake America’s digital hegemony. As a result, Chinese cyber actors have zeroed their laser focus on companies in the U.S. technology and innovation base. All of China’s actions
in cyberspace point to the strategic goal of establishing China as the sole global superpower—militarily, economically, and ideologically.
Looking Forward
In 2018, secretary of defense and retired four-star Marine Corps general James Mattis signed his name to the Department of Defense Cyber Strategy. This document outlined the “Defend Forward” strategy that has come to define how the Department of Defense executes cyber operations in defense of the homeland.
In the publicly released summary of the strategy, the Defense Department emphatically states, “We will defend forward to disrupt or halt malicious cyber activity at its source, including activity that falls below the level of armed conflict . . . by leveraging our focus outward to stop threats before they reach their targets.” In other words, since 2018, the United States has been on a war footing in cyberspace.
The 2018 strategy “set an important tone stressing just how serious these threats have become,” said U.S. Army general Paul Nakasone, the dual-hatted commander of U.S. Cyber Command and director of the National Security Agency. “It acknowledged that
defending the United States in cyberspace requires executing operations outside the U.S. military’s networks.”
In response to this bold strategy, U.S. Cyber Command began executing a much wider array of cyber operations. These operations included offensive cyber operations designed to disrupt and degrade adversary networks under new authorities granted by President Trump in a now hotly debated national security presidential memorandum, or NSPM, as well as less controversial— but far more impactful—defensive cyberspace operations executed worldwide.
The latter form of cyber operations goes by the apt moniker “Hunt Forward Operations,” wherein teams of military defensive cyber operations specialists deploy to foreign nations to harden their networks against penetration by adversaries.
In other words, teams of U.S. network defenders are given unfettered access by a host nation to their government networks known to be compromised by nation-state actors to root out their tools, backdoors, and malicious programs. This includes hunting for active intrusions by adversary cyber actors or
other indicators of compromise, as well as applying patches and updates to software and operating systems running on the host network. “The objective of the Hunt Forward Operations is to observe and identify malicious activity that threatens both nations and use those insights to bolster homeland defense and increase the resiliency of critical networks to shared cyber threats,” the U.S. Cyber Command public affairs office said in an official statement.
The end result of Hunt Forward Operations is both operational and strategic. During and after Hunt Forward Operations are conducted abroad, U.S. Cyber Command publishes malware developed and used by adversary hackers to public repositories, such as VirusTotal.com, wherein the diaspora of antivirus companies worldwide can access the data to create defenses for their customers. This effectively renders America’s adversaries’ most potent cyber weapons inert before they can be used to target U.S. networks. “For us, it isn’t just about hunting on our partner’s networks for similar threats to our networks and then bringing that back home to defend our nation’s networks,” said a hunt-forward team leader, whose name is withheld for security reasons. “It was also about the
GRAPHIC DESCRIPTION:
As cyber threats from global players like Russia and China become increasingly sophisticated, the importance of robust cybersecurity measures cannot be overstated. The graphic below illustrates the complexity and detail involved in modern cybersecurity defenses.
The image of intricately designed locks represents the multifaceted and layered approach necessary for effective cybersecurity. Each lock symbolizes a different layer of defense, from encryption and firewalls to intrusion detection systems and secure coding practices. Just as these locks protect valuable assets, comprehensive cybersecurity strategies are essential to safeguard against the ever-evolving landscape of cyber threats.
personal relationships we built, and the partnership we can grow.” The strategic impact of these operations is that they send the clear message that the United States will not permit its adversaries to have freedom of movement in the networks of sovereign nations in their near-abroad.
These operations began in 2018 shortly after the publication of the Defense Cyber Strategy and ahead of the 2018 U.S. midterm elections. As of August 2022, U.S. Cyber Command had
conducted thirty Hunt Forward Operations across the globe in 16 countries, including Estonia, Lithuania, Montenegro, North Macedonia, and Ukraine. Importantly, Hunt Forward Operations function as both an effective tool to bolster domestic cyber defense as well as a bulwark against adversaries targeting America’s allies.
In the battle for hearts and minds, a pound of sugar goes much further than a pinch of salt.
But while U.S. Cyber Command is notching victories abroad, America’s domestic cyberspace continues to face an uphill battle. The primary challenge in the federal government’s ability to protect American systems is the legal lines of demarcation that separate domestic space from foreign space.
For example, the NSA, with its incredible signals intelligence and cyber collection capabilities, is not legally permitted to collect information on citizens of the United States.
For the NSA to turn its considerable capabilities inward, it must do so pursuant to a warrant issued by a special court in accordance with the Foreign Intelligence Surveillance Act, more widely known by its acronym, FISA. The CIA is governed by the same restrictions. For the FBI to collect information domestically, it generally
does so pursuant to a warrant or subpoena consistent with its law enforcement role, or with the consent of the network owner pursuant to the Wiretap Act.
No warrant? No consent? No collection.
The Cybersecurity and Infrastructure Security Agency, or CISA, falls within the Department of Homeland Security and is designated as the lead agency for protecting U.S. critical infrastructure and federal government networks. But CISA lacks the authority to conduct offensive cyberspace operations or law enforcement functions beyond issuing administrative subpoenas. It is not yet empowered to hunt for threats beyond government networks.
For other response actions, CISA must work hand in hand with the FBI, which serves as the lead for both federal law enforcement and counterintelligence work inside the United States, and U.S. Cyber Command, which is the lead military organization for cyberspace operations. Attackers understand and regularly exploit these inherent gaps and institutional differences. When the SolarWinds compromise was revealed in 2020—at the same time that Microsoft’s Exchange email service was being exploited by Chinese hackers—
the attackers were able to escape scrutiny by the NSA because they launched their attacks from virtual servers located inside the United States. They actually took advantage of American law to protect themselves from scrutiny by the NSA or other U.S. government entities for long enough that they could carry out their attacks and then disappear by shutting down their operations and vanishing into the ether.
In a public congressional testimony, General Nakasone stressed that the SolarWinds hackers had taken advantage of the intelligence community’s “blind spot”—internet activity that occurs in domestic cyberspace. “Our adversaries understand that they can come into the United States and rapidly utilize an Internet service provider, come up and do their activities, and then take that down before a warrant can be issued, before we can actually have surveillance by a civilian authority here in the United States,” he told the Senate Armed Services Committee.
To further compound the issues, most government agencies rely on privatesector networks for the entirety of their supply chains. But the lack of trust between the public and private sectors is such that companies resist efforts by government agencies to fully vet their
cybersecurity posture or audit their networks to make sure no foreign entities are present.
The Department of Defense, for example, contracts with more than three hundred thousand American companies in its defense industrial base.
There are multiple tiers of these contractors with many small and medium-sized companies making key components that larger contractors then assemble into more complete systems.
Chinese entities, such as Unit 61398 of the People’s Liberation Army—are known to have penetrated the systems of smaller companies that do not employ sufficient numbers of IT personnel or buy the most secure systems.
The federal government is further hamstrung because each agency attempts to impose its own cyber rules on the industry it regulates. Glenn S. Gerstell, the former general counsel at the NSA, explained the problem in a New York Times essay in March 2022. “In just the past few months, the Department of Homeland Security’s Transportation Security Agency (TSA) announced new cyber requirements for pipelines and railroads; the Securities
and Exchange Commission voted on rules for investment advisers and funds; and the Federal Trade Commission threatened to legally pursue companies that fail to fix a newly detected software vulnerability found in many business applications. And on Capitol Hill, there are approximately 80 committees and subcommittees that claim jurisdiction over various aspects of cyber regulation.”
Then, in one of history’s great understatements, he added, “These scattered efforts are unlikely to reduce, let alone stop,” cyberattacks.
The reason is that Chinese and Russian cyber actors are actively burrowing into the networks of U.S. federal agencies, corporations, and critical infrastructure and are gaining and maintaining persistent access for when it is needed most.
“Make no mistake, America’s adversaries are fully engaged in a cyber war, and it is raging all around us”.
Bill Holstein
William
Holstein
stands as a beacon of excellence in journalism, with a career that has traversed continents and decades, providing unparalleled insights into global economics, politics, and technology. His illustrious journey began with United Press International, where he spent a decade covering pivotal events from Lansing, Michigan, to the bustling metropolises of New York, Hong Kong, and Beijing. His fearless reporting included coverage of the Soviet invasion of Afghanistan and the Vietnamese boat people crisis, experiences that earned him an Overseas Press Club award for best overseas economic reporting.
Holstein's tenure as World Editor at BusinessWeek further cemented his reputation. Over 11 years, he managed major cover stories that shaped the understanding of international economic dynamics, including the acclaimed "Rethinking Japan," which also won an Overseas Press Club award. His leadership at BusinessWeek was marked by a keen ability to distill complex global interactions into compelling narratives that resonated both domestically and internationally.
In addition to his editorial prowess, Holstein has been a prolific contributor to other prestigious publications such
as Fortune, The New York Times, and Business 2.0. His articles have spanned topics from the intricacies of Asian economies to the transformative impact of technology on global business practices. His seventh book, “How The ThinkPad Changed the World – And Is Shaping the Future,” underscores his deep understanding of technological evolution and its broader implications.
Holstein's first book, "The Japanese Power Game: What It Means to America," provided a critical analysis of U.S.-Japan relations during a period of significant economic and political change. This work received widespread acclaim for its insightful examination of the structural relationships between the two nations.
Beyond his writing, Holstein has dedicated himself to nurturing the next generation of journalists. As a board member and president of the Overseas Press Club Foundation, he has played a pivotal role in launching the careers of numerous successful correspondents through scholarships and fellowships.
William Holstein's career is a testament to the power of journalism to illuminate the complex and often opaque workings of the global economy and political landscape. His contributions have not only informed and educated but have also inspired a deeper understanding of the forces that shape our world.
THE DIGITAL EDGE THE AGE OF INNOVATION
Giuliano Liguori
Chief Executive Officer and Co-Founder Kenovy | Vice President CIO Club Italia
In an age marked by technological disruption and digital ubiquity, businesses find themselves at a crossroads. On one hand lies the path of obsolescence—treading the waters of the 'known,' clinging to outdated models and resisting change. On the other hand is 'The Digital Edge,' a realm where innovation flourishes, where emerging technologies like artificial intelligence, blockchain, and the Internet of Things are not just buzzwords but instrumental tools in reshaping the business landscape.
The Age of Innovation
As we venture into this dynamic landscape, you'll find that innovation is not merely a buzzword but a cornerstone of corporate strategy, a competitive differentiator, and a necessity for long-term success.
As the digital era continues its relentless march, traditional businesses are increasingly finding themselves at the crossroads—either adapt and innovate or face the risk of becoming obsolete. The wave of technological disruption has made innovation not just an option but a strategic imperative. This chapter will set the stage for thorough explorations into various aspects of innovation, from the role of technology and data to the importance of culture and leadership.
What You Will Find in This Article
• The New Competitive Landscape: An incisive look at the tectonic shifts in the business landscape brought about by digital transformation. We will dissect how globalization and the transition from an industrial to a knowledge economy have made innovation the linchpin of corporate strategy.
• A New Dawn: A contemplative discussion on the various catalysts that have set the stage for the Age of Innovation. Be it rapid technological advancements or seismic shifts in global consciousness, we will explore how these elements have synergistically created fertile grounds for innovation.
• Open Access to Knowledge: A comprehensive review of how the internet and open-source movements have democratized access to information, thus leveling the playing field and creating a globally inclusive environment conducive to innovation.
• The Innovation Imperative: An indepth analysis explaining why innovation has transitioned from being a nice-to-have to a must-have for corporate survival. We will delve into the drivers that make innovation an imperative and the risks of stagnation.
• The Anatomy of Innovation: A taxonomy of innovation, breaking down its various forms from incremental to breakthrough and disruptive. This section aims to equip you with the tools to understand and strategically employ different types of innovation.
• Fostering a Culture of Innovation: A guide filled with actionable insights on how to cultivate a culture that nurtures innovation. From executive buy-in to grassroots innovation, we will cover all facets of organizational culture.
• The Role of Technology in Innovation: A concluding section that amplifies the role technology plays in innovation today. From artificial intelligence (AI) to Industry 4.0, this section will delineate how modern technologies act as enablers in the innovation ecosystem.
Let The Journey
Begin
As we set out on this intellectual adventure, the primary aim is to equip you—whether you are a seasoned executive, a budding entrepreneur, or an individual contributor—with a robust conceptual toolkit. This toolkit will not only help you understand the mechanics of innovation but will also offer practical advice on navigating your way through its complexities.
So, fasten your seat belts as we delve into exploring how you can transform challenges into unprecedented opportunities, risks into valuable rewards, and abstract ideas into groundbreaking innovations. Welcome to the Age of Innovation!
1.1 The New Competitive Landscape: The Rules Have Changed
In an era where new startups can disrupt entire industries overnight, understanding the new competitive landscape is crucial for survival. The past few decades have seen a dramatic metamorphosis in how businesses function, owing largely to digital advancements. This shift has been catalyzed by several game-changing factors:
• Globalization: The opening up of markets and the proliferation of global trade have brought about increased competition as well as new opportunities. Companies now have the ability to tap into international markets and resources, making the world their playing field.
• Digitization: The transition from physical to digital has not just been a trend but a revolution. New business models have emerged that are solely digital, creating new revenue streams
and drastically changing customer expectations.
• Knowledge Economy: In today's world, intellectual capabilities are more valuable than capital or labor. Knowledge and innovation have become the new drivers of economic growth, placing a premium on skills like critical thinking, creativity, and problem-solving.
In light of these changes, innovation has transitioned from being a peripheral activity to becoming the central strategy for maintaining competitive advantage. Businesses that fail to innovate risk becoming irrelevant, losing market share to more agile competitors who can better meet the rapidly changing needs of customers.
1.1.1 A New Dawn: The Catalysts of the Age of Innovation
As we moved past the tumultuous events of the early 21st century, a new era emerged, one marked by an unprecedented rate of technological and societal change.
The Age of Innovation was fueled by a unique confluence of factors that created a fertile environment for creative thinking and problem-solving. Advancements in technology—from the Internet of Things (IoT) to artificial
intelligence (AI)—played a significant role. But it wasn't just about technology. Changes in global consciousness, increased focus on sustainability, and a move towards a more inclusive society contributed to setting the stage for this new era.
The Age of Innovation was unique because it was not solely led by scientists, technologists, or business leaders. It was a collective movement. Innovators came from all walks of life and multiple disciplines, each contributing their piece to the larger puzzle. The result was an era of unparalleled creativity and breakthrough thinking.
1.1.2 Open Access to Knowledge:
The Democratization Phenomenon One of the most defining features of the Age of Innovation was the unprecedented access to knowledge. The internet broke down geographical barriers, making it possible for anyone with a connection to access the world's information. This democratization was further accelerated by the open-source movement, which made software and educational resources freely available to anyone.
The implications of this were profound. It led to a surge in self-directed learning and a more level playing field, where your geographic location or
socio-economic status had less influence on your ability to innovate or start a new venture. It created a global talent pool, enriched by diverse perspectives and experiences, contributing to a more robust and inclusive environment for innovation.
1.2 The Innovation Imperative: Innovate or Perish
In today's competitive business landscape, standing still is the fastest way of moving backward. Innovation is no longer a luxury but a necessity for survival. Several forces drive this imperative:
• Customer Expectations: Today's consumers are more informed and have higher expectations than ever before. They demand personalized, seamless experiences and are willing to switch allegiance if their expectations are not met.
• Technological Advancements: The pace of technological change is accelerating, shortening the lifecycle of products and services. Businesses must keep up by continuously innovating.
• Increased Competition: Lower barriers to entry mean that new competitors can emerge overnight. In such a volatile environment,
continuous innovation is the only way to maintain a competitive edge.
In the following sections, we will delve deeper into each of these driving forces, exploring how they contribute to making innovation an imperative for modern businesses.
1.2.1 The Fruits of Innovation: A CrossSectoral Impact
Innovation is not confined to any single sector or industry; its impact is broad and cross-sectoral. In healthcare, we're seeing the rise of personalized medicine and tele-health services. In education, e-learning platforms and virtual classrooms are making quality education accessible to all. In manufacturing, Industry 4.0 technologies like IoT and robotics are revolutionizing production lines.
These are just a few examples. The point is that the fruits of innovation are all around us, shaping our lives in ways big and small. Whether it's the way we shop, how we communicate, or even how we date everything is being reimagined in innovative ways.
1.2.2 The Anatomy of Innovation: Understanding Its Many Forms
Innovation is not a monolithic concept; it comes in various shapes and sizes. Understanding the different types can
help businesses strategize more effectively. Here are some types of innovation:
• Incremental Innovation: This is about small, iterative changes that improve upon existing products, services, or processes. For most companies, this is the most common form of innovation.
• Breakthrough Innovation: These are the game changers, the innovations that mark a significant leap forward from existing solutions. They often create new markets or redefine existing ones.
• Disruptive Innovation: A term popularized by Clayton Christensen, disruptive innovations are those that displace existing market leaders by offering simpler, more affordable solutions. Each of these types of innovation requires different strategies, risk appetites, and resource allocations, and we'll delve deeper into how companies can navigate these in the chapters that follow.
1.3 Fostering a Culture of Innovation: Building the Innovation Engine
Creating a culture of innovation is easier said than done. It requires commitment from the top and engagement from the bottom. Here are some ways to foster a culture of innovation:
• Leadership Commitment: For innovation to thrive, it must be a strategic priority. This starts with the leadership team setting the vision, allocating resources, and removing barriers to innovation.
• Collaboration and Cross-Pollination:
One of the best ways to foster innovation is by encouraging crossdepartmental collaboration. Different perspectives often lead to the most creative solutions.
• Employee Empowerment: Employees should feel empowered to voice their ideas without fear of ridicule or reprisal. This creates a safe space for creativity and risk-taking.
1.3.1
Collaborative Innovation: A Global Movement
One of the defining features of the Age of Innovation is its collaborative nature. The most pressing challenges we face —climate change, poverty, inequality— require a collective effort to solve. This has led to an increase in public-private partnerships, open innovation platforms, and collaborative research initiatives.
In this networked age, no one can innovate in isolation. The challenges are too complex, and the solutions often lie at the intersections of
different disciplines, sectors, and geographies.
1.4 The Role of Technology: The Great Enabler
In the modern business landscape, technology serves as the foundational backbone, catalyzing the pace and scale of innovation. As we navigate through the Age of Innovation, it becomes increasingly evident that technology is not just an auxiliary tool but a pivotal enabler that equips startups to challenge established industry behemoths.
It is also the catalyst that allows businesses, both large and small, to scale their innovations to global markets, thereby transforming localized successes into universal solutions.
In this section, we will delve deeper into the various technological advancements that are shaping the future of innovation.
• Artificial Intelligence and Data Analytics: The explosion of data in today's world has been both a challenge and an opportunity for businesses. Artificial Intelligence (AI) and advanced data analytics tools have emerged as the saviors in this complex landscape. These technologies can sift
through terabytes of data to uncover hidden patterns, correlations, and insights that were previously impossible or extremely timeconsuming to identify. By leveraging these data-driven insights, businesses can make more informed decisions, optimize processes, and even predict future trends, thereby driving innovation in products, services, and strategies.
• Cloud Computing: Gone are the days when businesses needed to invest heavily in physical infrastructure to scale their operations. Cloud computing has democratized access to enterprise level infrastructure, making it easier and more cost-effective for companies to scale.
This technology enables real-time data access, collaboration, and computational power from anywhere in the world. The cloud's agility allows businesses to adapt quickly to market changes, experiment with new models, and implement innovations more swiftly, thus accelerating the pace of business transformation.
• Blockchain: Initially famous for being the technology behind cryptocurrencies like Bitcoin, blockchain has proven its utility far
beyond the realm of financial transactions. It provides a secure and transparent way to record transactions of any type, offering unparalleled levels of trust and integrity. From enhancing supply chain transparency to enabling secure, peer-to-peer transactions without the need for an intermediary, blockchain is opening new avenues for innovation in business operations, governance, and even social impact projects.
• Internet of Things (IoT): Although not mentioned in the original paragraph, the Internet of Things deserves a special mention. IoT technologies allow for interconnected devices that communicate and share data, providing businesses with real-time insights into operational efficiencies and customer behaviors. From smart homes to intelligent manufacturing systems, IoT is enabling a new wave of innovations that make our lives more comfortable, businesses more efficient, and societies more sustainable.
technologies are enabling businesses to protect their assets and customer
facilitating digital transformation.
• Cybersecurity: As businesses increasingly move online and data becomes more valuable, the importance of robust cybersecurity measures cannot be overstated. Innovations in cybersecurity
• Augmented and Virtual Reality (AR/ VR): These technologies are revolutionizing how information is consumed and experiences are delivered. Be it virtual training programs or interactive 3D advertisements, AR and VR are opening new frontiers in experiential innovation, transforming industries like education, healthcare, and marketing.
• 5G Networks: The rollout of 5G is set to revolutionize mobile connectivity speeds and bandwidth, providing the infrastructure necessary to support a host of new technologies, from autonomous vehicles to advanced telemedicine applications. This is expected to be a catalyst for innovations that require real-time data and ultra-reliable connections.
By embracing these technologies, businesses are not only enhancing their current operational efficiencies but are also investing in the future.
The incorporation of advanced technologies into business strategies is fast becoming the defining factor that distinguishes industry leaders from laggards. Thus, technology stands as a great equalizer and enabler, leveling the playing field and opening up opportunities for innovation in diverse sectors. It is the linchpin that holds together the complex web of modern innovation, ensuring that businesses can meet the challenges of today while preparing for the opportunities of tomorrow.
1.5 The Networked World: A Catalyst for Global Innovation
The rise of the internet and digital communication technologies has led to an increasingly interconnected
world. This networked world acts as a catalyst for innovation, breaking down geographical and cultural barriers and allowing ideas and knowledge to flow freely.
In this hyper-connected environment, innovation thrives. The traditional boundaries between sectors, disciplines, and even nations are becoming increasingly blurred, leading to a more collaborative and inclusive approach to innovation.
As we close this article, it's clear that we are living in a unique period of human history. The opportunities for innovation have never been greater, but neither have the challenges. It's a time of great risk but also of incredible potential. As we move forward, the businesses that will thrive are those that can navigate this complex landscape with agility and foresight.
So, are you ready to embrace the Age of Innovation?
Digital Transformation: A Strategic Approach
In an era where digital technologies are reshaping industries and customer expectations, understanding how to strategically approach digital transformation is a non-negotiable for modern businesses. This chapter serves as your comprehensive guide through the labyrinthine journey that digital transformation often is, with insights that are actionable, strategies that are proven, and advice that is timely.
• What is Digital Transformation?: We begin by demystifying what digital transformation truly encompasses. Beyond the buzzwords and technological jargon, we unpack how it involves an intricate blend of technology adoption, cultural shift, and business model innovations. It's a holistic approach to rethinking how business gets done in the digital age.
• The Four Stages of Digital Transformation: From laying the foundation to emerging as a digital leader, the process of digital transformation is a journey that can be broadly categorized into four critical stages. We delve into what each stage entails, the challenges to anticipate, and the strategies to employ for seamless navigation.
The Digital Nexus. This graphic represents the intricate and interconnected nature of key technologies driving digital transformation. The concentric circles symbolize the core elements such as AI, IoT, Cloud Computing, and Blockchain, all intricately linked by complex circuitry. It highlights how these technologies work together to create a seamless and integrated digital ecosystem, essential for modern business transformation.
• Key Technologies Driving Digital Transformation: No discussion of digital transformation would be complete without examining the technologies that act as its backbone. From cloud computing and arti f icial intelligence to the Internet of Things, we investigate how these technologies are not just facilitating but accelerating digital transformation across sectors.
• Digital Transformation Success Stories: Real-world case studies are invaluable in understanding the practical nuances of implementing digital transformation. We delve into a variety of success stories across industries, dissecting the strategies, tactics, and key decisions that made these transformations exemplary.
• Overcoming Barriers to Digital Transformation: Like any signi f icant change, digital transformation comes with its own set of hurdles. Whether it's resistance from within the organization, the challenges posed by legacy systems, or the ever-present threats to cybersecurity, we offer actionable advice on how to overcome these barriers effectively. Whether you're a business leader spearheading change, a manager tasked with implementation, or a team member eager to understand the bigger picture, this chapter will equip you with the necessary knowledge and tools to make your digital transformation journey a successful one. So, let's embark on this insightful journey through the multifaceted landscape of digital transformation, where we'll explore how to turn challenges into stepping stones, barriers into breakthroughs, and disruptions into opportunities. Welcome to the strategic world of Digital Transformation!
2.1 What is Digital Transformation?
In this article, we endeavor to provide a nuanced understanding of what digital transformation truly signi f ies in today's business landscape. Often misconstrued as merely a technological upgrade, digital transformation is, in fact, a complex, multi-dimensional paradigm shift that impacts every facet of an organization. It extends beyond the mere adoption of new digital tools to fundamentally rede f ine how businesses operate, deliver value to customers, and position themselves in an increasingly competitive market.
At its core, digital transformation involves the strategic incorporation of digital technologies and data analytics into the entire organizational ecosystem.
This includes everything from operations and supply chain management to customer engagement and human resources.
By doing so, companies can achieve a level of agility, responsiveness, and innovation that is indispensable in the modern digital economy.
The end goal is not just digital modernization but the creation of a digitally mature organization that can leverage technology to drive sustained growth and evolve in alignment with market dynamics.
However, the technological aspect is just the tip of the iceberg.
A successful digital transformation also necessitates a profound shift in organizational culture and mindset. This means fostering an environment where collaboration is the norm, where experimentation is encouraged, and where continuous learning is embedded into the corporate DNA. It requires leaders to break down silos, encourage crossfunctional teamwork, and build a culture that is adaptable, risktolerant, and oriented towards future-readiness. Additionally, the transformation journey must also involve a reassessment and possible overhaul of existing business models.
This could mean pivoting from traditional revenue streams to digital channels, or redesigning customer experiences based on data-driven insights. In essence, companies must be willing to deconstruct and rebuild
aspects of their existing business models to capitalize on digital capabilities.
Moreover, the ever-changing nature of customer expectations in the digital age adds another layer of complexity. Digital transformation equips businesses to be more responsive to these evolving demands. Whether it's through personalized customer experiences, seamless multi-channel interactions, or enhanced product and service offerings, the focus is on leveraging digital capabilities to not just meet but exceed customer expectations.
In summary, digital transformation is a holistic, organization-wide initiative that transcends technological adoption to include cultural shifts, process improvements, and business model innovations. It's about creating a future-ready organization that is not just surviving but thriving in the intricate and fast-paced digital ecosystem. So, as we delve deeper into this chapter, we will explore each of these dimensions in detail, providing you with a comprehensive roadmap for your own digital transformation journey.
2.2 The Four Stages of Digital Transformation
In this segment, we provide an indepth look at the sequential stages that constitute the lifecycle of a digital transformation journey. We aim to offer not just a taxonomy but also practical insights into the unique challenges and opportunities that each stage presents. These stages are designed to serve as milestones that guide organizations in successfully navigating the complex terrain of digital transformation.
1 . Foundation Building: Laying the Cornerstones
The first stage, Foundation Building, serves as the preparatory phase where organizations establish the essential infrastructure and capabilities required for a successful digital transformation. This entails modernizing IT systems, migrating to cloud-based solutions, and implementing a robust data analytics framework.
But the foundation is not just technological; it also involves cultivating a culture that is receptive to digital transformation. Leaders have a pivotal role here: they must articulate a clear vision, disseminate the importance of digital readiness, and incentivize employees to adopt new digital tools and methodologies.
The aim is to create an environment where the organization is not just digitally operable but also digitally native.
2 . Initiating Digital Transformation: Setting the Wheels in Motion
Once the foundational elements are securely in place, organizations can advance to the stage of Initiating Digital Transformation. Here, the focus shifts to identifying key digital initiatives that align closely with the company's strategic objectives.
Whether it's launching new digital products, optimizing operational ef f iciencies, or elevating customer engagement through personalized interactions, the initiatives should be targeted and meaningful. Importantly, this stage also involves setting up governance mechanisms and outlining Key Performance Indicators (KPIs) that will serve as navigational beacons to gauge the effectiveness and ROI of digital initiatives.
3 . Scaling Digital Transformation: From Pilot to Enterprise-wide Implementation
Having tested the waters with initial projects, organizations can then move to the Scaling Digital Transformation stage. At this
juncture, the learnings from pilot initiatives are translated into enterprise-wide applications. The methodology shifts to agile frameworks, and there is an emphasis on forming crossfunctional teams that break down organizational silos.
Companies leverage data-driven insights for iterative improvement and invest in scaling their digital capabilities. Additionally, this stage often involves forming alliances with external partners—be it technology vendors, industry consortiums, or even competitors—in a collaborative effort to innovate and co-create solutions.
4. Becoming a Digital Leader: The Pinnacle of Digital Maturity
The final stage Becoming a Digital Leader, is where an organization’s digital transformation efforts culminate in industry leadership.
At this level, companies are not just digitally transformed but are setting the benchmarks for digital excellence in their industry.
The focus is on perpetual innovation, a relentless commitment to adapting to technological advancements, and
an ingrained culture of continuous learning. Organizations that reach this stage derive substantial value from their digital investments and can pivot swiftly in response to market changes, thereby sustaining a competitive advantage that is hard to replicate.
In summary, each stage of the digital transformation journey presents its own set of complexities, requirements, and rewards.
However, navigating these stages successfully can yield transformative results, equipping organizations to thrive in the digital age. By understanding these stages in depth, organizations can develop a more structured, strategic approach to their digital transformation efforts, thereby maximizing their chances of long-term success.
2.3 Key Technologies Driving Digital Transformation
In this section, we go beyond merely listing the technologies that are instrumental in powering digital transformation. Instead, offer an indepth analysis into how each of these technologies—Cloud Computing, Arti f icial Intelligence (AI), and the Internet of Things (IoT) —is fundamentally rede f ining
business operations, customer interactions, and competitive dynamics across various industries. Cloud Computing: The Infrastructure Backbone Cloud computing has emerged as the backbone infrastructure for modern business operations, serving as a pivotal enabler of digital transformation. By allowing organizations to access computing resources and services over the internet, cloud platforms provide unparalleled f lexibility, scalability, and cost-ef f iciency. They serve as the underlying architecture where data can be stored, processed, and analyzed on an unprecedented scale.
Furthermore, the cloud enables rapid deployment of applications and services, allowing businesses to be agile and responsive to market changes. It serves as the critical f irst step in digital transformation, offering the infrastructure where other advanced technologies can be built and deployed.
Arti f icial Intelligence (AI): The Engine of Intelligent Decisionmaking Arti f icial Intelligence (AI) is more than just a buzzword; it's a suite of technologies—including machine learning, natural language processing, and computer vision—
that is propelling businesses into an era of intelligent decision-making. AI has the power to automate routine tasks, thereby freeing human resources to focus on more complex, value-added activities.
Beyond automation, AI offers data-driven insights that enhance decision-making processes, whether it's in predicting consumer behavior, optimizing supply chain dynamics, or personalizing customer experiences.
As businesses increasingly adopt AI, they f ind themselves better positioned to innovate, optimize, and compete. The technology serves as a linchpin for operational excellence and customer engagement, making it a central component of digital transformation.
Internet of Things (IoT): The Nervous System of Connected Operations
The Internet of Things (IoT) represents the next frontier in business optimization and customer engagement.
It consists of a sprawling network of interconnected devices, sensors, and systems that continuously collect, exchange, and analyze data. IoT serves as the nervous system for a new age of connected operations, offering real-time insights that can
be leveraged for various applications —from predictive maintenance and supply chain optimization to personalized customer experiences.
For example, smart sensors can monitor machinery in real-time, allowing for predictive maintenance that minimizes downtime. Similarly, IoT-enabled smart products can offer unparalleled user experiences, including customization and remote engagement.
The
Synergistic Interplay
Importantly, these technologies often work best when they are integrated in a synergistic manner. For instance, cloud computing provides the infrastructure where AI algorithms can run and IoT data can be stored and analyzed. As such, businesses aiming for a holistic digital transformation often f ind themselves adopting a combination of these technologies, each amplifying the capabilities of the others.
In conclusion, Cloud Computing, Artificial Intelligence, and the Internet of Things are not merely isolated technologies; they are interconnected pillars that
collectively enable comprehensive digital transformation.
By understanding the unique capabilities and applications of each technology, organizations can craft a more nuanced and effective digital transformation strategy, thereby positioning themselves for success in a digitally driven future.
2.4 Digital Transformation Success Stories
In this section, we delve into detailed case studies of enterprises that have successfully navigated the complex landscape of digital transformation. These real-world examples offer invaluable insights into the strategies, tactics, and best practices that can guide other organizations seeking to undertake a similar digital metamorphosis. Each case study underscores speci f ic elements that are crucial for digital transformation, such as strategic vision, cultural shift, technological adoption, and customer-centricity.
General Electric (GE): Mastering Industrial IoT with Predix General Electric's digital transformation journey is a standout example of how traditional industries can pivot into the digital age. GE invested heavily in developing its Predix platform, an
industrial Internet of Things (IoT) framework designed to collect and analyze data from an array of connected machines and systems.
This digital infrastructure has empowered GE to optimize operational ef f iciency, substantially reduce maintenance costs, and even spawn entirely new service lines for its customer base.
Key Takeaways:
• Leveraging Internal Expertise: GE capitalized on its deep industry knowledge to create a platform that addressed speci f ic industrial challenges.
• Strategic Partnerships: GE partnered with software and analytics companies to bolster its digital capabilities.
• Customer-Centric Approach: By focusing on solving customer problems, GE created a platform that delivers real value, thereby increasing customer loyalty and opening new revenue streams.
Nike: A Seamless Digital-Physical
Ecosystem Nike's transformation story exempli f ies how consumerfocused companies can integrate digital technologies across multiple facets of the business—from product
development and manufacturing to retail operations.
By embracing data analytics, mobile applications, and e-commerce platforms, Nike has succeeded in creating a seamless, personalized experience for its customers, both online and of f line.
Key Takeaways:
• Clear Digital Strategy: Nike had a well-de f ined roadmap that aligned digital initiatives with business objectives.
• Innovation Culture: The company fostered an environment where experimentation was encouraged, and failure was seen as a learning experience.
• Agile Approach: Nike’s agile methodology enabled it to adapt quickly to market trends and customer feedback, a key factor in its digital success.
DBS Bank: Becoming a Digital Vanguard in Financial Services DBS Bank, a leading f inancial institution in Asia, offers a compelling study in how service-based industries can undergo digital transformation. The bank embraced a digital- f irst approach, investing in cloud computing, arti f icial intelligence (AI),
and data analytics to modernize every facet of its operations.
This has allowed DBS to streamline internal processes, roll out innovative f inancial products, and enhance customer experiences, all while maintaining rigorous compliance and security standards.
Key Takeaways:
• Strong Leadership: Top-down commitment to digital transformation was a key factor in aligning the organization around the digital vision.
• Customer-Centric Mindset: DBS invested in understanding customer needs and behaviors, using insights to guide its transformation strategy.
• Culture of Continuous Learning and Innovation: An organizational culture that encouraged learning and innovation was instrumental in fostering an environment where new ideas could f lourish.
Each of these case studies demonstrates how a well-executed digital transformation strategy can yield signi f icant bene f its, including operational ef f iciencies, customer satisfaction, and competitive advantage. By examining the
success factors in each case, organizations can glean important lessons to inform their own digital transformation journeys.
2.5 Overcoming Barriers to Digital Transformation
The path to digital transformation is rarely linear or devoid of obstacles. In this culminating section, we unpack the common challenges that organizations frequently encounter in their quest for digital transformation. These hurdles can range from internal organizational resistance to technological constraints and security risks. Understanding these barriers and adopting strategies to overcome them is vital for ensuring the longterm success of any digital transformation initiative. Below, we delve into these challenges and offer actionable recommendations for each.
2.5.1 Organizational Resistance: Tackling the Human Element
The inertia against change can be the most formidable barrier in the journey of digital transformation. Employees
often resist adopting new technologies or workflows due to a fear of the unknown or concerns about job security.
Strategies for Overcoming Resistance:
• Transparent Communication: Leadership must articulate the rationale behind digital transformation and how it aligns with organizational goals.
• Inclusive Decision-Making: Involving employees in planning and decision-making processes can foster a sense of ownership and mitigate resistance.
• Comprehensive Training Programs: Providing adequate training and support can equip employees with the necessary skills and con f idence to adapt to new technologies.
2.5.2 Legacy Systems: The Anchor Weighing You Down
Legacy IT systems can be a signi f icant roadblock, as they are often incompatible with modern digital technologies. Transitioning away from these systems can be expensive and disruptive.
Strategies for Modernization:
• Phased Transition: Adopt a step-bystep approach to replace legacy systems, prioritizing those that most impact your digital objectives.
• Cost-Bene f it Analysis: Conduct a thorough analysis to gauge the longterm ROI of replacing legacy systems against the costs involved.
2.5.3 Cybersecurity: Safeguarding Digital Assets
As companies adopt a wider range of digital technologies, the threat landscape also expands, making them more susceptible to cyberattacks.
Strategies for Cyber Resilience:
• Robust Security Protocols: Employ multi-layered security measures, including f irewalls, encryption, and regular software updates.
• Policy and Compliance: Establish a comprehensive cybersecurity policy, ensuring compliance with industry regulations and standards.
2.5.4 Talent and Skills Gap: Bridging the Expertise Divide
Successfully implementing digital transformation initiatives requires organizations f ind challenging to source and retain.
Strategies for Talent Management:
• Upskilling and Reskilling: Invest in training programs that help your existing workforce adapt to new technologies.
• Collaborate with Educational Institutions: Partner with universities and online platforms to create tailored training programs.
2.5.5 Lack of Strategy and Governance: Steering the Ship
Without a clearly articulated digital strategy and governance framework, organizations risk rudderless initiatives and wasted investments.
Strategies for Effective Governance:
• Strategic Alignment : Ensure that your digital strategy is in sync with your overall business objectives.
• Governance Mechanism : Establish clear governance structures, roles, and responsibilities to oversee the implementation of digital initiatives.
Digital transformation is an imperative for modern businesses aiming to survive and thrive in an increasingly digital-first world. While the journey is fraught with challenges, understanding these potential pitfalls and adopting strategies to overcome them can significantly enhance the likelihood of success. By amalgamating technological adoption with strategic vision, human-centric approaches, and robust governance, organizations can navigate the complexities of digital transformation and emerge as industry leaders in the digital era.
Giuliano Liguori
is a visionary leader in the field of digital transformation, currently serving as the CEO and Co-Founder of Kenovy and the Vice President of CIO Club Italia. With over 20 years of experience, Liguori has established himself as a prominent figure in business process optimization, project management, and analytics. Under his leadership, Kenovy has been at the forefront of leveraging AI, machine learning, and automation to deliver innovative solutions that streamline operations, reduce costs, and enhance efficiency for businesses worldwide.
Recognized as one of the world’s top 200 business and technology
innovators by Engatica, Liguori's contributions to the technology sector are widely acknowledged. He continuously invests in research and development to ensure that Kenovy stays ahead of technological advancements, providing cuttingedge solutions to clients.
Liguori is also an active participant in the global tech community, frequently engaging with industry leaders and sharing his insights through various platforms. His work with both established companies and startups showcases his versatility and commitment to driving technological innovation.
Artificial Intelligence (AI) has rapidly become a buzzword, permeating nearly every aspect of our lives, from autonomous vehicles to personalized online shopping experiences. However, with the proliferation of AI comes a critical question: Is AI a revolutionary, enduring technology, or are we witnessing a bubble destined to burst?
We will delve into both perspectives, examining the evidence and arguments for AI as a sustainable innovation versus the potential for an over-hyped market correction.
1. The Case for AI as a Real Innovation
Proven Applications AI's transformative impact is evident in numerous fields. For instance, Google's DeepMind has developed algorithms capable of diagnosing eye diseases with over 90% accuracy, providing early detection and potentially saving millions of people from blindness.
In finance, AI systems are employed to detect fraudulent activities, manage risks, and optimize trading strategies. Companies like JPMorgan use AI algorithms to analyze vast datasets, identifying patterns that would be impossible for humans to discern. Similarly, AI-driven virtual assistants such as Apple's Siri and Amazon's Alexa have become integral parts of daily life, showcasing AI's capabilities in natural language processing and customer service.
Economic Impact
The economic contributions of AI are substantial. According to a report by PwC, AI could contribute up to $15.7 trillion to the global economy by 2030, driven by productivity gains and increased consumer demand. In the United States alone, AI-related industries are expected to add approximately 97 million jobs by 2030, offsetting job losses in other sectors due to automation.
AI's potential to revolutionize industries is further underscored by its applications in sectors like manufacturing and logistics, where predictive maintenance and supply chain optimization can lead to significant cost savings and efficiency improvements.
Technological Advancements
The rapid advancements in machine learning, deep learning, and neural networks have significantly enhanced AI's capabilities. Techniques like reinforcement learning and transfer learning enable AI systems to learn from minimal data, adapt to new tasks, and improve over time. The development of powerful hardware, such as GPUs and specialized AI chips, has also accelerated the pace of
innovation, allowing for more complex and computationally intensive models.
Furthermore, the availability of vast datasets and the growth of cloud computing have democratized access to AI technologies, enabling startups and small businesses to harness AI's potential.
Government and Corporate Investment
Governments and corporations worldwide are investing heavily in AI research and development. The U.S. government has committed billions of dollars to AI initiatives, including funding for AI research institutes and collaborations with the private sector. Similarly, China has announced ambitious plans to become the global leader in AI by 2030, with significant investments in AI infrastructure and talent development.
Major tech companies like Google, Microsoft, and Amazon are also at the forefront of AI innovation, investing in cutting-edge research and acquiring AI startups to bolster their capabilities. These investments underscore the confidence in AI's long-term potential and the belief that it will continue to drive economic growth and technological progress.
2. The Bubble Concerns Over-hyped
Expectations Despite the undeniable advancements, there is a growing concern that AI may be over-hyped, with inflated expectations that could lead to disappointment.
The media often portrays AI as an allencompassing solution capable of solving any problem, which can lead to unrealistic expectations. The term "AI winter" refers to periods of reduced interest and funding for AI research, typically following periods of overhyped expectations and subsequent disappointments.
Historical parallels can be drawn to the dot-com bubble of the late 1990s, where excessive speculation in internet-based companies led to a market collapse. Similar concerns are being raised about the current AI landscape, where startups with unproven technologies receive high valuations, and there is a rush to invest in AI without fully understanding its limitations.
Technical and Ethical Challenges
AI systems, while powerful, are not without limitations. Issues such as bias in AI algorithms, lack of transparency, and difficulties in interpretability pose significant challenges. For example, facial recognition systems have been
criticized for their inaccuracies and biases, particularly in identifying individuals from minority groups, leading to potential misuse and ethical concerns.
Additionally, the ethical implications of AI, such as job displacement due to automation and the potential for mass surveillance, raise important societal questions. As AI systems become more integrated into critical decision-making processes, ensuring fairness, accountability, and transparency becomes paramount.
Market Volatility
The AI market is characterized by significant volatility. While some companies are making genuine strides in AI technology, others may be riding the wave of hype without substantial innovation. The risk of a market correction looms large if the financial returns do not match the inflated expectations.
Investors and companies may also face challenges in scaling AI solutions due to technical complexities, regulatory hurdles, and the need for specialized talent. This could lead to a reevaluation of the market's true potential and a subsequent cooling of investment enthusiasm.
The Case for Ai as a Real Innovation; Proven Applications; Economic Impact; Technological Advancements; Government and Corporate Investment.
3. The Future of AI Sustainable Growth Potential Despite concerns, the core technologies underpinning AI are likely to see sustained growth. Fields like reinforcement learning, quantum computing, and neuromorphic computing hold promise for future breakthroughs. The continued expansion of AI applications in healthcare, education, transportation, and beyond suggests a broadening scope for innovation.
AI's potential to address global challenges, such as climate change and public health, also underscores its importance. For example, AI can optimize energy consumption, model climate patterns, and accelerate the discovery of new drugs, demonstrating its capacity to contribute to societal well-being.
Regulation and Governance
The role of regulation and governance in AI development cannot be understated. Governments and international organizations are beginning to establish frameworks to ensure the ethical and responsible use
of AI. The European Union's General Data Protection Regulation (GDPR) and the proposed AI Act aim to set standards for AI transparency and accountability, serving as models for other regions.
Effective regulation can help mitigate the risks associated with AI, such as privacy violations and discrimination, while fostering innovation by providing clear guidelines for developers and companies.
The
Bubble Concerns: Overhyped Expectations; Technical and Ethical Challenges, Market Volatility
Adaptation and Resilience
As AI continues to evolve, industries and the workforce must adapt. Education and training programs are crucial for equipping individuals with the skills needed to thrive in an AIdriven economy. Governments and companies are investing in re-skilling and up-skilling initiatives to help workers transition to new roles and industries.
Moreover, fostering a culture of innovation and collaboration will be essential for navigating the challenges and opportunities presented by AI. Partnerships between academia, industry, and government can accelerate the development and
deployment of AI technologies, ensuring they are aligned with societal needs and values.
In weighing the evidence, it becomes clear that AI is more than just a passing trend. While there are legitimate concerns about over-hyped expectations and market volatility, the technological advancements, proven applications, and substantial economic impact of AI suggest it is a real and transformative innovation.
The potential risks associated with AI, including ethical challenges and market corrections, must be carefully managed through robust regulation and governance.
Ultimately, AI's future will depend on continued investment in research, a commitment to ethical practices, and a proactive approach to addressing its societal implications.
As we navigate this complex landscape, a balanced perspective that recognizes both the potential and the pitfalls of AI will be crucial in ensuring its sustainable and beneficial development.
Artificial Intelligence and National Security In the rapidly evolving landscape of global security, artificial intelligence (AI) has emerged as a transformative force, reshaping the way nations approach defense, intelligence, and strategic planning. As countries around the world invest heavily in AI technologies, the implications for national security are profound and far-reaching. We explore the multifaceted impact of AI on national security, examining its advantages, disadvantages, challenges, and potential future developments.
Advantages of AI in National Security
1. Enhanced Intelligence Gathering and Analysis. One of the most significant advantages of AI in national security is its ability to process and analyze vast amounts of data quickly and efficiently. Intelligence agencies generate enormous quantities of information from various sources, including satellite imagery, communications intercepts, and open-source intelligence. AI-powered systems can sift through this data, identifying patterns, anomalies, and potential threats that human analysts might miss.
For example, machine learning algorithms can analyze satellite images to detect changes in military installations or unusual troop movements. Natural language processing (NLP) models can scan millions of social media posts to identify emerging security threats or track the spread of disinformation campaigns. This enhanced analytical capability allows security agencies to make more informed decisions and respond more quickly to potential threats.
2. Predictive Analytics and Threat Assessment AI's predictive capabilities offer valuable support for national security planning and strategy development. By analyzing historical data and current trends, AI models can forecast potential security risks, geopolitical developments, and emerging threats. This enables security agencies to engage in more sophisticated scenario planning, anticipating possible attacks or crises and preparing appropriate responses.
For instance, AI systems could model the potential impact of various factors - such as economic instability, climate change, or political unrest - on regional security, helping policymakers develop more effective longterm security strategies.
3. Cybersecurity and Network Defense As cyber threats become increasingly sophisticated, AI plays a crucial role in defending critical infrastructure and
sensitive networks. Machine learning algorithms can monitor network traffic in realtime, detecting anomalies and potential intrusions far more quickly and accurately than traditional security systems.
AI-powered threat intelligence platforms can analyze global cyber threat data to predict and prevent future attacks.
Moreover, AI can automate many aspects of cybersecurity, such as patch management and vulnerability assessment, reducing the workload on human security teams and allowing them to focus on more complex security challenges.
4. Autonomous and Semi-Autonomous Systems AI enables the development of autonomous and semi-autonomous systems that can operate in environments too dangerous or inaccessible for humans. This includes unmanned aerial vehicles (UAVs) for reconnaissance and surveillance, autonomous underwater vehicles for naval operations, and robotic systems for explosive ordnance disposal.
These AI-powered systems can enhance military capabilities while reducing risks to human personnel. They can also operate for extended periods in harsh environments, providing persistent surveillance and intelligence gathering capabilities.
5. Decision Support in Crisis Situations
In times of crisis or conflict, AI can provide rapid analysis of developing situations, helping military and political leaders make informed decisions under pressure. Machine learning algorithms can process real-time data from multiple sources, providing a comprehensive picture of the battlefield or crisis zone. This can lead to more effective tactical decisions and potentially reduce casualties.
Disadvantages and Risks
1. Vulnerability to Adversarial Attacks
While AI systems can enhance cybersecurity, they are also vulnerable to sophisticated adversarial attacks. Malicious actors can potentially manipulate the input data or exploit weaknesses in AI algorithms to deceive these systems. For example, subtle alterations to images could fool AIpowered surveillance systems, or carefully crafted text could bypass AI content filters.
This vulnerability could lead to serious security breaches or misinformation campaigns that undermine national security efforts.
2.
Potential for Autonomous Weapons Systems The development of fully autonomous weapons systems, often referred to as "killer robots," raises significant ethical and security concerns. These AI-powered systems could potentially select and engage targets without meaningful human control, raising questions about accountability and the potential for unintended escalation of conflicts.
The prospect of autonomous weapons systems also creates new arms race dynamics, as nations compete to develop increasingly sophisticated AIpowered military technologies.
3. Over-reliance on AI and Erosion of Human Expertise As AI systems become more capable, there's a risk that security agencies may become overly reliant on these technologies, potentially eroding human expertise and judgment. Critical thinking, intuition, and the ability to understand complex geopolitical contexts are uniquely human skills that remain crucial in national security decisionmaking.
Over-dependence on AI could lead to a false sense of security or blind spots in threat assessment if the AI systems fail to account for novel or unprecedented scenarios.
4
. Data Privacy and Civil Liberties Concerns The use of AI in national security often involves processing vast amounts of data, including information about citizens. This raises significant concerns about data privacy and civil liberties. The powerful surveillance capabilities enabled by AI could potentially be misused for political control or suppression of dissent, particularly in authoritarian regimes.
Balancing national security needs with individual privacy rights will be an ongoing challenge as AI technologies become more pervasive.
5. Potential for Escalation and Misunderstanding
In crisis situations, the speed and automation of AI-powered systems could potentially lead to rapid escalation of conflicts. If multiple nations employ AI for military decisionmaking, there is a risk of creating feedback loops where AI systems react to each other's outputs, potentially escalating tensions before human leaders can intervene.
Additionally, the "black box" nature of some AI algorithms could make it difficult to understand how they arrive at specific recommendations, potentially leading to misunderstandings or mistrust
between nations.
Challenges in Implementing AI for National Security
1. Ethical and Legal Frameworks
The rapid advancement of AI in national security outpaces existing ethical and legal frameworks. Developing comprehensive guidelines for the responsible use of AI in defense and intelligence operations is a complex challenge. Questions arise about the appropriate level of human control over AI systems, the accountability for AI-driven decisions, and the potential consequences of AIpowered security operations on human rights and international law.
2. Data Quality and Bias
The effectiveness of AI systems in national security depends heavily on the quality and representativeness of the data they are trained on. Biased or incomplete data could lead to flawed analysis and potentially discriminatory outcomes. Ensuring that AI systems have access to high quality, diverse, and unbiased data sets is a significant challenge, particularly given the sensitive nature of much national security information.
3. Interoperability and Standardization
As different agencies and allied nations develop their own AI systems for security purposes, ensuring interoperability between these systems becomes crucial. Establishing common standards and protocols for AI in national security applications will require extensive international collaboration and negotiation.
4. Workforce Development and Adaptation
Integrating AI into national security operations requires a workforce with new skill sets. Security agencies need personnel who can develop, deploy, and interpret AI systems while also understanding the broader strategic and operational contexts. Recruiting and training this new generation of security professionals presents a significant challenge.
5. Adversarial AI and the AI Arms Race
As nations invest in AI for national security, they must also prepare for adversaries using similar technologies. This creates a new dimension of competition, where countries race to develop more advanced AI capabilities while also working to defend against and potentially exploit weaknesses in adversaries' AI systems.
6. Explainability and Transparency
Many advanced AI systems, particularly deep learning models, operate as "black boxes," making it difficult to understand how they arrive at specific conclusions or recommendations. In the context of national security, where decisions can have life-or-death consequences, the lack of explainability in AI systems poses significant challenges. Developing AI models that can provide clear explanations for their outputs while maintaining their predictive power is an ongoing area of research.
Future Prospects for AI in National Security
1. Quantum AI and Cryptography
As quantum computing technology matures, it could dramatically enhance the capabilities of AI in national security. Quantum-powered AI systems could break current encryption methods, necessitating the development of new quantum-resistant cryptography. Conversely, quantum AI could also enable unbreakable encryption methods, revolutionizing secure communications for intelligence and military operations.
2. AI-Enhanced Wargaming and Simulation
Advanced AI systems could transform military planning and training through highly sophisticated wargaming and simulation capabilities. These systems could model complex geopolitical scenarios, allowing military strategists to test various approaches and better prepare for a wide range of potential conflicts or crises.
3. Cognitive Electronic Warfare AI is likely to play an increasingly important role in electronic warfare. AI powered systems could engage in real-time spectrum analysis, rapidly identifying and countering enemy communications and radar systems. This could lead to a new era of cognitive electronic warfare, where AI systems engage in complex, dynamic electromagnetic battles.
4. Swarm Intelligence and Coordinated Autonomous Systems
Future military operations may involve large swarms of autonomous drones or robots coordinated by AI systems. These swarms could perform complex tasks such as reconnaissance, area denial, or even coordinated attacks, presenting new challenges and opportunities for military strategists.
5. AI-Driven Threat Prediction and Prevention As AI systems become more sophisticated, they may be able to predict potential security threats with unprecedented accuracy. By analyzing vast amounts of data from diverse sources, AI could identify early warning signs of terrorist activities, cyber attacks, or geopolitical crises, allowing for more proactive security measures.
6. Human-AI Teaming The future of national security is likely to involve close collaboration between human operators and AI systems. This could include AI enhanced decision support systems for commanders, AI co-pilots for fighter jets, or AI assistants for intelligence analysts. Developing effective human-AI teaming protocols and interfaces will be a key area of focus.
7. AI Ethics and International Cooperation As AI becomes more central to national security, we may see the emergence of new international bodies and agreements focused on AI governance in defense and intelligence. This could include treaties on the use of AI in warfare, similar to existing arms control agreements, as
well as collaborative efforts to address global security challenges using AI.
The integration of AI into national security presents a complex landscape of opportunities and challenges. While AI offers significant advantages in terms of data analysis, predictive capabilities, and enhanced defense systems, it also raises important concerns about ethics, privacy, and the potential for unintended escalation of conflicts.
As we move forward, it will be crucial for nations to collaborate on developing ethical guidelines, regulatory frameworks, and best practices for the use of AI in national security. The goal should be to harness the power of AI to enhance security and stability while mitigating risks and preserving human rights and international law.
Balancing the potential of AI with its risks will require ongoing dialogue, adaptation, and cooperation among nations. As AI continues to evolve, so too must our approach to its use in the critical domain of national security. By addressing the challenges head-on and thoughtfully leveraging the advantages of AI, we can strive for a safer and more stable global environment.
Prof. Ahmed Banafa
is a distinguished expert in IoT,
Blockchain, Cybersecurity, and AI with a strong background in research, operations, and management. He has been recognized for his outstanding contributions, receiving the Certificate of Honor from the City and County of San Francisco, the Haskell Award for Distinguished Teaching from the University of Massachusetts Lowell, and the Author & Artist Award from San Jose State University. LinkedIn named him the No.1 tech voice to follow in 2018, acknowledging his predictive insights and influence. His groundbreaking research has been featured in renowned publications like Forbes, IEEE, and the MIT Technology Review. He has been interviewed by major media outlets including ABC, CBS, NBC, CNN, BBC, NPR, NHK, and FOX. Being a member of the MIT Technology Review Global Panel further highlights his prominence in the tech community.
Prof. Banafa is an accomplished author known for impactful books. His work "Secure and Smart Internet of Things (IoT) using Blockchain and Artificial Intelligence (AI)" earned him the San Jose State University Author and Artist Award and recognition as one of the Best Technology Books of All Time and Best AI Models Books of All Time. His book on "Blockchain Technology and Applications" also received acclaim and is integrated into curricula at prestigious institutions like Stanford University. Additionally, he has contributed significantly to Quantum Computing through his third book, and he is preparing to release his fourth book on Artificial Intelligence in 2023. Prof. Banafa's educational journey includes Cybersecurity studies at Harvard University and Digital Transformation studies at the Massachusetts Institute of Technology (MIT), he is holding a Master's Degree in Electrical Engineering and a PhD in Artificial Intelligence.
SHADOW AI AND DATA POISONING IN LARGE LANGUAGE MODELS:
IMPLICATIONS FOR GLOBAL SECURITY AND MITIGATION STRATEGIES
Dr. Igor van Gemert Expert on Generative AI and Cyber Resilience
1. Introduction: The Emergence of Shadow AI in the Era of Large Language Models
In recent years, the rapid development and widespread adoption of artificial intelligence (AI) have brought about a revolution in technological capabilities. Among the most transformative advancements are Large Language Models (LLMs), which have ushered in a new era of possibilities. However, alongside this progress, a significant security concern has emerged, known as "shadow AI." This phenomenon involves the unauthorized and uncontrolled use of AI tools and systems within organizations, often without proper oversight from IT or security departments.
Imagine a scenario where employees, driven by the ease of access to powerful AI tools like ChatGPT, begin adopting these solutions for their work without going through official channels. ChatGPT, for instance, garnered an astounding 100 million weekly users within a year of its launch, making it simple for individuals to integrate AI into their workflows. This accessibility distinguishes shadow AI from traditional shadow IT, making it more pervasive and challenging to detect.
As organizations navigate the complexities of shadow AI, they must also contend with the growing threat of data poisoning attacks. These attacks target the training data of AI models, including LLMs, introducing vulnerabilities, backdoors, or biases that can compromise the security, effectiveness, and ethical behavior of these models. This exploration delves into the intricacies of shadow AI and data poisoning, examining their potential impacts on global security, the challenges in detection and mitigation, and strategies for addressing these emerging threats.
2. Understanding Shadow AI: Definitions and Implications Shadow AI refers to the use of AI tools and technologies within an organization without the knowledge, approval, or oversight of the IT department or relevant authorities. Picture an employee using public AI services like ChatGPT for work-related tasks, deploying AI models or algorithms without proper vetting, integrating AI capabilities into existing systems without authorization, or developing AI applications independently within departments without central coordination.
Several factors contribute to the rise of shadow AI. The accessibility of AI tools has lowered the barrier to entry for non-technical users, allowing them to harness the power of AI without requiring specialized knowledge. The rapid advancement of AI technologies often outpaces organizational policies and governance structures, creating a gap between innovation and regulation. Employees, driven by the perceived productivity gains, may turn to AI tools to enhance their efficiency and output. However, many users may not fully understand the risks associated with unauthorized AI use, leading to unintended consequences.
The implications of shadow AI for organizations are far-reaching. Security risks loom large, as unauthorized AI use can introduce vulnerabilities and expose sensitive data. Compliance issues arise when shadow AI practices violate regulatory requirements, leading to legal and financial repercussions. Data privacy concerns mount, as AI tools may process and store sensitive information in ways that contravene data protection laws. Uncoordinated AI use can result in inconsistent outputs and decision making across an organization, while resource inefficiencies stem from duplicate efforts and incompatible systems.
3. The Mechanics of Data Poisoning in Large Language Models. Data
poisoning
is a type of attack that targets the training data of AI models, including LLMs. By manipulating the training data, attackers can introduce vulnerabilities, backdoors, or biases that compromise the security, effectiveness, and ethical behavior of the model. Imagine a scenario where an attacker injects mislabeled or malicious data into the training set, causing the model to produce specific outputs when encountering certain triggers. This type of attack is known as label poisoning or backdoor poisoning.
Another form of data poisoning involves modifying a significant portion of the training data to influence the model's learning process. This can entail injecting biased or false information into the training corpus, skewing the model's outputs. Model inversion attacks, although not strictly poisoning attacks, exploit the model's responses to infer sensitive information about its training data, which can be used
in conjunction with other methods to refine poisoning strategies. Stealth attacks involve strategically manipulating the training data to create hard-to-detect vulnerabilities that can be exploited after deployment, preserving the model's overall performance while introducing specific weaknesses.
The process of poisoning an LLM typically involves several steps. Attackers first gather or generate a set of malicious training samples. For backdoor attacks
a trigger (such as a specific phrase or pattern) is crafted to activate the poisoned behavior. The poisoned samples are then introduced into the training dataset, either during initial training or fine tuning. The LLM is trained or fine-tuned on the contaminated dataset, incorporating the malicious patterns. Once deployed, the poisoned model can be exploited by inputting the trigger or leveraging the introduced vulnerabilities.
Detecting data poisoning in LLMs presents several challenges. The scale of training data for LLMs is massive, making comprehensive inspection impractical. The complexity of these models adds another layer of difficulty, as it is challenging to trace the impact of individual training samples. Advanced poisoning methods can be designed to evade detection by maintaining overall model performance, while the "black box" nature of deep learning models complicates efforts to identify anomalous behaviors.
4. Global Security Implications of Shadow AI and Data Poisoning.
The combination of shadow AI and data poisoning poses significant risks to global security across various domains. Imagine a scenario where poisoned LLMs deployed through shadow AI channels generate vast amounts of coherent, persuasive misinformation. Research by Zellers et al. (2019) demonstrated how GPT-2, a precursor to more advanced models, could generate fake news articles that humans found convincing. Such capabilities could undermine democratic processes through targeted disinformation, erode public trust in institutions and media, and exacerbate social and political divisions. As AI systems become integrated into critical infrastructure,
shadow AI and data poisoning could lead to subtle manipulations with potentially catastrophic consequences. A study by Kang et al. (2021) explored the potential impact of AI-driven attacks on power grids, highlighting the need for robust security measures. Disruption of energy distribution systems, compromise of transportation networks, and interference with financial markets and trading systems are among the potential impacts.
In the realm of national security and intelligence, the use of compromised LLMs in intelligence analysis could lead to flawed strategic assessments and policy decisions based on manipulated information. A report by the RAND
Corporation (2020) emphasized the potential for AI to transform intelligence analysis, underscoring the importance of securing these systems.
Misallocation of defense resources based on false intelligence, erosion of diplomatic relations due to AIgenerated misunderstandings, and vulnerability of classified information to extraction through poisoned models are critical concerns.
Shadow AI practices can inadvertently expose sensitive data to unauthorized AI systems, while data
poisoning can create new attack vectors for cybercriminals.
Increased risk of data breaches and intellectual property theft, exploitation of AI vulnerabilities for network intrusions, and compromise of personal privacy through model inversion attacks are potential outcomes.
The financial sector's increasing reliance on AI for trading, risk assessment, and fraud detection makes it particularly vulnerable to shadow AI and data poisoning threats. Market manipulation through poisoned trading algorithms, erosion of trust in financial institutions due to AI-driven errors, and potential for large-scale
economic disruptions are significant risks.
5. Challenges in Detecting and Mitigating Shadow AI and Data Poisoning
Addressing the threats posed by shadow AI and data poisoning presents numerous challenges. The sheer size and complexity of modern LLMs make comprehensive security audits computationally intensive and timeconsuming.
For instance, GPT-3, one of the largest language models, has 175 billion parameters, making it extremely challenging to analyze thoroughly. This difficulty in identifying all potential vulnerabilities, coupled with high computational costs for security assessments and challenges in realtime monitoring of model behaviors, underscores the scale of the problem.
The lack of interpretability in deep neural networks, often referred to as the "black box" problem, makes it challenging to trace decision-making processes and identify anomalous behaviors.
This difficulty in distinguishing between legitimate model improvements and malicious alterations, explaining model decisions for regulatory compliance, and identifying the source and extent of data poisoning adds another layer of complexity.
The rapid evolution of AI technologies often outpaces the creation of governance frameworks and security measures.
This constant need to update security protocols and best practices, coupled with challenges in developing standardized security measures across different AI architectures and maintaining up-to-date expertise among security professionals, highlights the dynamic nature of the threat landscape.
The vast amounts of data used to train LLMs make it challenging to vet and validate all sources, increasing the risk of incorporating poisoned data. The impracticality of manual data inspection, difficulty in establishing provenance for all training data, and challenges in maintaining data quality while preserving diversity further complicate the situation.
Organizations face the challenge of fostering AI innovation while maintaining robust security measures, often leading to tensions between development teams and security departments. The risk of stifling innovation through overly restrictive security policies, potential for shadow AI adoption as a workaround to security measures, and the need for cultural shifts to integrate security into the AI development process are critical considerations.
6. Mitigation Strategies and Best Practices
To address the risks associated with shadow AI and data poisoning, organizations should consider implementing a comprehensive set of mitigation strategies. Establishing clear guidelines for AI deployment and usage within the organization is crucial. This includes creating processes for requesting and approving AI projects, defining roles and responsibilities for AI oversight, establishing ethical guidelines for AI development and use, and implementing regular policy reviews to keep pace with technological advancement.
Creating a designated team responsible for overseeing AI projects can help ensure compliance with security and privacy policies. This team
should review and approve AI initiatives across the organization, conduct risk assessments for proposed AI deployments, monitor ongoing AI projects for potential security issues, and serve as a central point of
expertise for AI-related questions and concerns.
Implementing robust data validation techniques is essential to mitigate the risk of data poisoning. This includes conducting statistical analysis to
identify anomalies in training data, implementing anomaly detection algorithms to flag suspicious data points, using clustering techniques to identify and isolate potentially malicious samples, and establishing clear data provenance while maintaining detailed records of data sources.
Performing ongoing evaluations to identify unauthorized AI deployments and potential vulnerabilities is crucial. This involves conducting network scans to detect unauthorized AI tools and services, performing penetration testing on AI systems to identify vulnerabilities, analyzing model outputs for signs of poisoning or unexpected behaviors, and reviewing access logs and user activities related to AI systems.
Educating employees about the risks associated with shadow AI and the importance of following organizational protocols for AI usage is vital. Training programs should cover the potential risks and consequences of unauthorized AI use, proper procedures for requesting and implementing AI solutions, best practices for data handling and privacy protection, and recognition of potential signs of data poisoning or model compromise.
Using identity and access management solutions to restrict access to AI tools and platforms based on user roles and responsibilities can help prevent unauthorized use. This includes implementing multi-factor authentication for AI system access, using role-based access control (RBAC) to limit system privileges, monitoring and logging all interactions with AI systems, and implementing data loss prevention (DLP) tools to protect sensitive information.
Creating sophisticated tools to analyze internal representations and decision processes of LLMs is crucial for detecting potential compromises. This involves leveraging techniques from explainable AI research to improve model interpretability, developing methods for visualizing and analyzing neural network activations, creating tools for comparing model behaviors across different versions and training runs, and implementing continuous monitoring systems to detect anomalous model outputs.
Developing global standards for AI development and deployment, including certification processes for AI systems used in critical applications, is essential for addressing the global nature of AI threats. Participating in international forums and working groups on AI security, collaborating
with academic institutions and research organizations, sharing threat intelligence and best practices across borders, and advocating for harmonized regulatory frameworks for AI governance are key steps.
7. Ethical and Legal Considerations
The rise of shadow AI and the threat of data poisoning raise complex ethical and legal questions that organizations must address. Determining responsibility for the actions of AI systems with hidden capabilities is challenging, particularly when the line between developer intent and emergent behavior is blurred. Establishing clear lines of responsibility for AI system outputs, developing frameworks for assessing liability in cases of AI-related harm, and considering the role of insurance in mitigating risks associated with AI deployment are critical considerations.
Balancing the need for transparency in AI development with concerns about data privacy and intellectual property protection is crucial. Compliance with data protection regulations such as GDPR and CCPA, ethical use of personal data in AI training and deployment, and protecting proprietary algorithms and model architectures while ensuring
transparency are key aspects to consider.
Evolving ethical guidelines for AI research and development to address the unique challenges posed by potential shadow capabilities is necessary. Developing codes of conduct for AI researchers and developers, implementing ethics review boards for AI projects, and considering the long-term societal impacts of AI technologies are essential steps.
8. Future Horizons: Emerging Technologies and Long-term
Implications As we look to the future, several emerging technologies and trends will shape the landscape of AI security. Exploring the potential of quantum algorithms for more robust AI security testing and potentially quantum resistant AI architectures is an active area of research. Quantum enhanced encryption for AI model protection, quantum algorithms for faster and more comprehensive security audits, and the development of quantum-resistant AI architectures are potential developments.
Investigating brain-inspired computing architectures that might offer inherent protections against certain types of attacks or provide new insights into creating more interpretable AI systems
is promising. AI systems with improved resilience to adversarial attacks, more efficient and interpretable AI models inspired by biological neural networks, and novel approaches to anomaly detection based on neuromorphic principles are potential developments.
Considering how current security challenges might evolve in the context of more advanced AI systems approaching artificial general intelligence is crucial. The work of Bostrom (2014) on super intelligence provides a framework for considering long-term AI safety. Increased complexity in securing systems with human-level or superhuman capabilities, ethical considerations surrounding the rights and responsibilities of AGI systems, and the potential for rapid and unpredictable advancements in AI capabilities are significant implications.
The advent of shadow AI and the insidious threat of data poisoning in Large Language Models (LLMs) represent more than just technical challenges—they signify profound risks to global security, economic stability, and societal trust. In a world increasingly reliant on AI-driven decisions, the unchecked proliferation of shadow AI can undermine the very foundations of organizational integrity and operational security. Meanwhile,
the specter of data poisoning looms large, threatening to compromise not just individual models but the ecosystems that depend on their reliability.
Consider the ramifications: poisoned LLMs could generate sophisticated misinformation campaigns, destabilize critical infrastructure, and corrupt national security intelligence. These aren't abstract risks—they are present and escalating dangers that require immediate and concerted action. The impact on democratic processes, public trust, and economic stability could be devastating, with consequences reverberating across the globe.
Organizations must recognize that the fight against shadow AI and data poisoning is not just an IT issue—it is a strategic imperative that demands attention at the highest levels of leadership. Implementing robust AI governance policies, investing in advanced detection and mitigation technologies, and fostering a culture of security and compliance are essential steps. The need for centralized oversight, rigorous data validation, and continuous monitoring cannot be overstated.
Moreover, the ethical and legal dimensions of AI usage must be
addressed head-on. Establishing clear accountability for AI systems, ensuring compliance with data protection regulations, and developing ethical guidelines for AI development are crucial for maintaining public trust and safeguarding privacy.
The path forward requires a global effort. International cooperation in developing and enforcing AI security standards, sharing best practices, and collaborating on threat intelligence is vital. The stakes are too high for a fragmented approach; a unified, proactive stance is necessary to mitigate these risks effectively.
As we look to the future, emerging technologies such as quantum computing and neuromorphic architectures offer promising avenues for enhancing AI security. However, these advancements must be pursued with a vigilant eye toward potential
new vulnerabilities. The journey towards artificial general intelligence (AGI) will only amplify these challenges, making it imperative to embed security and ethical considerations into the very fabric of AI research and development.
In conclusion, navigating the perils of shadow AI and data poisoning requires a multifaceted strategy that blends technological innovation with rigorous governance, ethical stewardship, and international collaboration.
The time to act is now—before the unseen threats of shadow AI and data poisoning erode the pillars of our interconnected world. By taking decisive steps today, we can safeguard the promise of AI and ensure it remains a force for good in our society.
Dr. Igor van Gemert is a leading expert in cybersecurity and disruptive technologies, with over 15 years of experience in IT and OT security. An alumnus of Singularity University, van Gemert is well-versed in the latest developments in emerging technologies and their practical
applications. Known for his startup expertise, he has successfully built numerous ventures and advised board members on innovation management and cybersecurity resilience. His ability to combine technical knowledge with business acumen makes him a soughtafter speaker, writer, and teacher. Van Gemert has made significant contributions to the field, including revolutionizing urban planning and sustainability through 'Sim CI' and establishing the OCHP protocol as the definitive standard in electric mobility. His work in artificial intelligence and virtual reality technologies, coupled with his entrepreneurial spirit, constantly pushes the boundaries of what is possible. With deep roots in artificial intelligence and a PhD in virtual reality technologies, his work truly brings the future to our doorstep.
Discover more about Igor van Gemert, the mastermind orchestrating the symphony of disruptive technologies. Enter his world, where intelligence, innovation, and technological disruption shape the future of humanity. He is the founder and owner of ResilientShield.nl and serves on the advisory board of Axite Security Tools.
Visual Concepts Explained
The Graphics included in this article visually represent the complexities of Shadow AI and Data Poisoning, each graphic is designed to highlight different aspects of these Cybersecurity Threats.
• Central AI Brain: Symbolizes the core of advanced AI systems, emphasizing their complexity and the vast amounts of data they process.
• Kneeling Robots: Represent shadow AI — unauthorized AI systems or tools used covertly within organizations. Their detailed mechanical design signifies the sophistication of these shadow technologies.
• Digital Overlay and Elements: The digital elements surrounding the AI brain and robots indicate the integration of cyber technology into AI systems, highlighting the potential for data manipulation and poisoning.
• High-Tech Background: Creates a dark, futuristic aesthetic, underscoring the hidden and potentially dangerous nature of these cybersecurity threats.
Editor
From the Cyberspace and Artificial Intelligence Research Laboratory and Space Technology Center Alvaro De Orleans of the Foundation for International Studies and Geopolitics (Fondazione di Studi Internazionali e Geopolitica), an Italian moral entity that promotes culture and science as tools for dialogue and peace. Based in Rome, it is chaired by Prof.
Giancarlo Elia Valori, Honorary Member of the French Academy of Sciences.
Beyond the developments in applied sciences on Earth for Earth (new energies, artificial intelligence, quantum computing, renewable sources, etc.), however, it must be said that the possibilities of “escape” from our planet at present are limited to the nearly 40 thousand kilometers per hour, touched in far 1969 by a human crew. While the record for the highest speed reached by a spacecraft belongs to NASA’s probe, Solar Parker Probe, which on April 29, 2021 reached 532 thousand kilometers per hour approaching the Sun, and with a capability of reaching a peak speed of 690 thousand kilometers per hour at perihelion (i.e., the point of it probe at the shortest distance from our star).
So if we were to calculate the aforementioned maximum speed of a manned carrier or spacecraft in the direction of Mars (i.e., a neighboring home, being an inner planet of the Solar System like Mercury, Venus and Earth), just to reach it would take nine months.
Let’s not mention it if we then consider the closest star to the Solar System: Proxima Centauri, which is 4.2 lightyears away. It is not the case now to discuss wormholes, warp speed, etc.; it is good to stay with our feet precisely on Earth, and analyze at what point vector studies are, since, let’s face it, the distances are immeasurable while our possibilities of going further, much further, are limited, if not ridiculous in comparison with what surrounds us.
For now we can only hope to reach a few asteroids. Back to the topic at hand. There are currently more than 35 thousand aerospace companies and 3.5 million employees in the world, with about 184 thousand new employees in 2023, covering the United States of America, Russia, the United Kingdom, PR of China, India, France, Japan, Germany, Canada. Italy with BPD (Bombrini Parodi-Delfino, now Avio) and Vega launchers, satellite launchers, has also opened the doors of space to our country, which has thus fully entered the very small club of the aforementioned countries that have autonomous access to the cosmos.
The aerospace industry as a whole shows stable technological growth and active investment, including satellite platform construction, space
biotechnology, space system network security, model rocket and spacecraft management, etc.
As human space exploration continues to grow, startups are creating viable solutions for space travel and traffic management, and even space junk and debris removal. Low Earth Orbit (LEO) satellites, as well as big data and analytics, also play a vital role in future space missions.
of private equity funds and various investments, aerospace startups are developing new technologies that simplify movement, operations and communications between Earth and space. The Aerospace Trend Report published by Startus-Insights analyzed 2,162 new aerospace startups worldwide and summarized the top ten emerging trends and technologies in aerospace.
The aerospace industry is using emerging technologies such as 5G, advanced satellite systems, 3D printing, big data, and quantum computing to expand and upgrade space operations, including weather forecasting, remote sensing, Global Positioning System (GPS) navigation, satellite television, and long-term navigation and remote communications.
Reliance on space infrastructure to provide services, smart propulsion (Smart Propulsion), space robots, and space traffic management are all new trends promoting the development of aerospace applications. With the influx
The ten new trends and innovations in space technology in 2024, are represented by:
1 – Small satellites. Small
satellites have become the dominant trend in aerospace technology. Miniaturization of satellites enables cost-effective designs, and advances in industrial technology have enabled mass production. Startups are developing small satellites to perform tasks previously reserved for larger satellites, such as wireless communication networks, scientific observations, data collection, and Earth monitoring using GPS. The value of the small satellite market in 2024 is estimated to be $166.4 billion and is expected to reach $260.56 billion in 2029, with a compound annual growth
rate of 9.38 percent, showing the growing demand and diversified applications of small satellites in space missions.
2 – Advanced space
manufacturing. Aerospace manufacturing uses cutting-edge technologies such as advanced robotics, 3D printing, and optical manufacturing to improve aerospace products and services. The focus of innovative technologies is on promoting large-scale space structures, reusable launch vehicles, space shuttles, and the development of advanced satellite sensors. Automation is extremely important for the aerospace industry’s long-term exploration missions, and the startups are committed to providing solutions designed specifically for the needs of the aerospace industry.
Momentus, a new startup based in the United States of America, uses reusable rockets equipped with robotic arms to perform short-range maneuvering, docking and refueling. It is very suitable for various space services in orbit and makes space transportation more convenient The startup Equatorial has developed a commercial suborbital rocket capable
of serving small payloads above the boundary between space and atmosphere.
3 – Advanced
communications. New space communications systems are a major trend in aerospace technology, and research and development is focused on advanced methods of transmitting and receiving data in space. The use of laser communication relay systems provides faster data rates and more secure communications than traditional radio frequency systems. Quantum Key Distribution (QKD) in space uses quantum mechanical principles to provide ultra-secure communication channels. In addition, the implementation of small and inexpensive CubeSats has been successfully used to improve space communications by enabling wider coverage and more efficient data transmission. Advances in advanced communications are changing the way humans communicate in space by providing faster, safer and more efficient methods. Polish startup Thorium has developed an ultra-flat, scalable active array antenna that improves system throughput and capacity by using relatively interference-free frequency bands
from Earth or space and combines electronic control and beam shaping capabilities. CommStar is another U.S. startup that produces Commstar-1, a satellite used for Earth-to-moon communications. It exceeds the speed limits of current space infrastructure and provides high-speed optical and radio-frequency relay functions. This technology benefits public and private space programs and can improve data services for lunar landers, resource extraction, and moon-terrestrial communications.
4 – Space traffic management.
As the number of satellites and space debris in Earth orbit continues to increase, ways to improve space traffic management have received increasing attention. The advanced satellite tracking system uses radar and optical sensors to actively monitor and predict potential collisions. The automatic collision avoidance system will automatically adjust the orbits of satellites based on algorithms. In addition, international regulatory frameworks are being developed to standardize space operations to ensure safe and sustainable use of space and prevent orbital congestion. ClearSpace is a spin-out company-that is, an
independent venture founded by entrepreneurs exiting a company in the same line of business from the Space Center of the Ecole Polytechnique Fédérale de Lausanne in Switzerland. This develops technology to remove unresponsive or obsolete satellites from space. ClearSpace’s small satellite solutions can repeatedly detect, capture and remove man-made space debris, with plans to remove the first among them from space by 2025.
5 – Intelligent propulsion.
Intelligent propulsion systems are a major trend in aerospace technology and provide innovative solutions for space travel. For example, there are electric propulsion systems that use electrical energy to accelerate the propellant at high speeds, and green propulsion systems that use environmentally friendly fuels such as hydrogen and oxygen. Among these, water propulsion, which uses water as a propellant, provides a safe and costeffective option. Another that has attracted attention for its efficiency and compactness are iodine-based propulsion systems, which are particularly suitable for small satellites. The global space propulsion market is expected to reach $18.1 billion by 2028, with a compound annual growth rate of
11.8 percent from 2023 to 2028, reflecting the growing demand for advanced and sustainable propulsion solutions in space missions. French startup ThrustMe offers an electric space propulsion system that uses iodine as a propellant, providing a lowcost propulsion alternative for large satellites. Dawn Aerospace, based in New Zealand and the Netherlands, produces reusable same-day launch vehicles and high-performance nontoxic propulsion systems for satellites of all sizes.
network of nodes that uniquely and securely manages a public ledger composed of a variety of data and information, without the need for central control), communication and data exchange between spacecraft, ground stations and control centers can be protected and simplified, ensuring reliable and tamper-proof operations in space.
6 – Space asset management. Due to the increase in the number of space missions, there is a need to carry out effective coordination among various such missions and activities. Therefore, some new startups are providing space activity management solutions to jointly improve efficiency and safety. Among them, the development of advanced mission control software can realize real-time monitoring and management of spacecraft and satellites; the use of artificial intelligence analysis can predict and reduce potential orbital conflicts as well as improve the safety of space operations. In addition, through the integration of blockchain (a computer
U.S.-based startup Continuum provides a cloud-based platform for space mission lifecycle management that can provide high-fidelity simulations for satellite deployment and operation and support missions around the Earth, Moon and other planetary bodies. Canadian startup Obruta Space Solutions has developed equipment that can provide services for new satellites in orbit. It extends the useful life of satellites through refueling services and upgrades. In addition to extending the useful life of satellites, it also enables their future removal so that humans can permanently occupy the orbital environment.
7 – Space missions. Space exploration solves fundamental questions about the history of the universe and the solar system, and humans have found opportunities in space to promote mining, materials
science, and Earth and alien life science research. The development of reusable rockets has greatly reduced the cost of space exploration and increased the frequency of missions. The deployment of small, relatively lowcost satellites helps in tasks ranging from Earth observation to deep space exploration. In addition, the development of interstellar spacecraft can facilitate missions beyond Earth orbit. For example, U.S.-based startup Lunar Station has developed a technology platform that can convert lunar sensory datasets into 3D visualizations of that satellite’s environmental conditions, which help expand the scope and capabilities of exploration itself.
8 – Space mining. Celestial
mining is turning from science fiction to reality. Robotic mining equipment designed for extreme space environments can drill and extract resources independently. Another important development is the use of spacecraft equipped with advanced sensors and artificial intelligence that can be used to identify and analyze resource-rich asteroids. In addition, startups are developing In Situ Resource Utilization technology that can process materials in space,
reducing the need to transport resources to Earth. Advances in space mining technology will pave the way for sustainable resources beyond Earth. British startup Asteroid Mining Corporation has developed a satellite to explore Near-Earth Asteroids (NEOs) as mining candidates. The company offers a range of spacecraft for prospecting, exploration and mining, each capable of carrying out specific missions and guiding prospectors to specific mining candidates.
9 – Low Earth orbit satellites.
Low Earth Orbit is relatively close to the Earth’s surface, usually at an altitude of less than a thousand kilometers. Low Earth Orbit (LEO) satellites do not always follow a specific path around the Earth, which means that satellites in low Earth orbit have multiple paths. To this end, new startups have developed solutions and technologies to address the challenges associated with Low Earth Orbit. For example, advanced communication systems designed for low Earth orbit satellites focus on improving signal strength and reducing delays to ensure reliable data transmission. In addition, new startups are also developing technology to monitor the technical state of wear and tear on satellites,
using advanced diagnostics and predictive maintenance to track and maintain the operational status of satellites in low Earth orbit. Japanese startup Warpspace already provides LEO optical communication services to satellite operators through optical data transmission networks in Medium Earth Orbit (MEO) starting in 2023. The network uses optical links to communicate with satellites in low Earth orbit, and users will only need small optical transceivers provided by the startup for reception.
10 – Space data.
As various satellites are widely used in communications and Earth monitoring, this information must be processed, analyzed, and managed. Space technology startups use artificial intelligence to analyze satellite data to interpret large amounts of information from space more quickly and accurately. Startups also use the aforementioned blockchain technology to ensure that data transmission is secure and tamper-proof and to
improve the reliability of communications between satellites and Earth stations. In addition, big data analytics is also used to manage and process the huge amount of data collected by satellites, promoting efficient data storage, retrieval, and utilization for various applications in the space domain. U.S. startup LeoLabs uses its orbital products and Phased Array radar to provide precise tracking and monitoring services for satellite data. LeoLabs also tracks satellites and space debris in real time, providing data via ephemeris to quickly locate and identify the last payloads in low Earth orbit.
As we have seen, the projects and achievements are cutting edge, however, other systems where mankind can spread before the inevitable collapse of our sun are still a long way off.
Giancarlo Elia Valori
Recognizing Distinguished
Contributors and the Foundation
Giancarlo Elia Valori
Giancarlo Elia Valori is a highly esteemed figure in cybersecurity, disruptive technologies, and international relations. As the President of the International World Group and an Honorary Professor at Peking University, Valori has significantly contributed to the advancement of economics and international politics. His extensive career is marked by numerous accolades, including an honorary membership at the French Academy of Sciences and the Sir Moses Montefiore Prize. Valori's work bridges gaps between nations, fostering innovation and dialogue on a global scale.
Lamberto Dini
Senator Lamberto Dini, the Honorary President of the Laboratory on Cyberspace and Artificial Intelligence, brings a wealth of experience as a former Prime Minister and Minister of Foreign Affairs of Italy. Dini's distinguished career in both national and international finance and politics has earned him international recognition, including an honorary Knighthood Grand Cross from the United Kingdom and the Grand Cordon
of the Order of the Rising Sun from Japan. His contributions to economic policy and international relations have left an enduring impact.
Foundation for International Studies and Geopolitics (Fondazione di Studi Internazionali e Geopolitica)
Within the framework of its activities, the Foundation for International Studies and Geopolitics has established the Laboratorio di Studi sul Cyberspazio e Intelligenza
Artificiale, a laboratory for studies and research of high scientific content on the topic of Cyberspace and Artificial Intelligence. The Honorary President of the Laboratory on Cyberspace and Artificial Intelligence is Senator Lamberto Dini, former Prime Minister and Minister of Foreign Affairs of Italy.