Over the past few weeks, social media has been flooded with “tips and tricks” on how individuals can use Artificial Intelligence (AI) tools like ChatGPT to become more productive and efficient. Commentators have already labelled AI the most beneficial yet disruptive emerging technology of the contemporary era, one with the potential to revolutionize every aspect of human civilization.
However, other AI applications have the potential to cause far more harm than good. One such domain is the use of AI in the theatre of war. This article explores how the use of AI on the battlefield has evolved over the years in order to understand how it is destined to alter the conduct of both conventional and non-conventional warfare.
AI and the Reconceptualization of Hyperwars
In terms of military applications of AI, the debate surrounding human-machine control and levels of autonomy has led to the reconceptualization of ‘hyperwar’. The term was used during the Second World War to describe a military conflict characterized by global scope and simultaneously active battlegrounds. On the modern AI-supported battlefield, hyperwar is redefined as a conflict automated to such an extent that humans are excluded from the observe-orient-decide-act (OODA) loop. Originally proposed by the American military strategist John R. Boyd, the OODA loop models the primary mechanism of human action and can be applied to all domains and types of war: at each level and in each type of war, various micro-OODA loops are nested within a macro-OODA loop. In any conflict, the side that completes the cycle faster than its opponent emerges as the victor. The question then arises whether AI can effectively and accurately execute each stage of the OODA loop.
Reimagining Conventional Warfare
AI offers vast possibilities across macro and micro-OODA loops. It can collect enormous amounts of data, identify patterns, analyze information and predict scenarios faster than humans can. However, the accuracy of these systems still falls short of battlefield standards. The first wave of AI systems relied on predefined algorithms and existing databases, so they could only facilitate the OODA loop within a narrow domain; unable to learn from the environment, they always needed humans in the loop. The second wave of AI systems can learn from the environment through training, operationalization, feedback and supervised learning without requiring humans in the loop. However, incorrect or corrupted data collected over time could cause such a system to behave unexpectedly and make wrong decisions. Third-wave AI systems, currently in the research and development phase, would have human-like intuitive capabilities that could deliver greater accuracy. Not only would this allow militaries to make accurate and innovative decisions and fight wars at machine speed, it would also enable them to achieve military objectives faster and with fewer casualties. At the same time, it would instill uncertainty and a fear of defeat at machine speed in the adversary. That fear and uncertainty could translate into pressure to delegate decision-making to an AI system in order to stay inside the opponent’s OODA loop and avoid making decisions for a situation that no longer exists.
Militaries around the globe are working to make future wars not only about “shortening the kill chain” through the speed and accuracy of automated systems but also about “decision-making at the speed of relevance”, that is, collecting, analyzing and distributing data with greater integration and precision. Beyond decision-making, AI could play a significant role in developing centralized and integrated systems in which data from all constituents of warfare (tanks, missile systems, missile defense systems, troops, cyber infrastructure, space-based assets, warships, submarines, and unmanned vehicles on land, in the air and underwater) could be collected and analyzed for better intelligence, surveillance and reconnaissance (ISR) capabilities at all levels and in all domains of war, as well as in peacetime planning. Such systems would be highly classified and developed nationally; the prominent examples in the public domain are the US DART (1991) and NATO’s STARTLE Intelligent Sensors & Unmanned System (2016).
AI and Nuclear Weapons
Where would nuclear weapons and deterrence stand in this equation of future hyperwars and AI-enabled, integrated, multi-domain systems? As stated above, the pressure of winning a conflict at machine speed and staying inside the opponent’s OODA loop could result in hyperwar. AI might play a decisive role in winning a conventional war between non-nuclear states. Between nuclear powers, however, automated decision-making at machine speed could quickly escalate a conflict up to the nuclear threshold. It would also narrow the window for signalling, backdoor diplomacy and mediation aimed at de-escalation. Furthermore, AI could enable a spiral of short, fast, limited wars under the nuclear threshold, generating a stability-instability paradox.
AI is also likely to affect nuclear deterrence. AI-enabled missile defenses could prove effective against hypersonic missiles. According to popular belief, the massive proliferation of advanced AI-enabled quantum sensors would make the oceans transparent, eroding the effectiveness and credibility of sea-based second-strike capabilities; in practice, however, such sensing faces serious challenges, including high financial cost, the short range of sensors, data pollution in the maritime domain and the lack of attribution mechanisms. States also find it challenging to integrate AI into their existing nuclear postures and military doctrines because of the unpredictability, biases and fragility of current systems.
The Challenges of Military Applications of AI
As discussed earlier, incorrect or corrupted data collected over time can lead autonomous systems to make wrong decisions and perform unexpected actions. A further danger is automated terrorism by non-state actors. The events after 9/11 made the term “drone strike” commonplace. Previously, drone technology was state-controlled and kept humans in the loop. However, the dual-use nature of this technology has driven massive commercialization and integration with AI and other strategic technologies, making drones more autonomous and harder to detect than ever. Non-state actors could programme such autonomous vehicles for assassinations, targeted killings, urban bombings and kamikaze-style suicide attacks, producing a new wave of tech-enabled terrorism or automated hybrid/grey-zone warfare. In the last decade alone, autonomous vehicles have been used in more than 70 terrorist attacks in various capacities.
To conclude, AI is likely to alter the fundamental conduct and processes of warfare. From hyperwar and conventional conflict to hybrid, grey-zone and nuclear warfare, future wars would be fast, automated, innovative, complex, unpredictable, multi-dimensional, cross-domain, and data- and technology-intensive. Militaries could achieve their objectives in non-kinetic ways without losing expensive systems and human resources, while AI-enabled systems would put significant pressure on the effectiveness and credibility of nuclear deterrence. States are still struggling to integrate AI technologies into their military doctrines and strategies, and these efforts have a long way to go before they mature. Once they do, there will be no way back from hyperwar and the point of singularity.
The opinions expressed in the articles on the Diplomacy, Law & Policy (DLP) Forum are those of the authors. They do not purport to reflect the opinions or views of the DLP Forum, its editorial team, or its affiliated organizations. Moreover, the articles are based upon information the authors consider reliable, but neither the DLP Forum nor its affiliates warrant its completeness or accuracy, and it should not be relied upon as such.
Ms Aamna Rafiq is a Research Associate at the Arms Control & Disarmament Centre (ACDC) of the Institute of Strategic Studies Islamabad (ISSI). Her research interests include the militarisation of emerging technologies and their impact on the future of warfare and strategic stability in South Asia, as well as the global regulatory regime for emerging technologies, with a special focus on artificial intelligence, cyber and quantum technologies.
Previously, she worked with the Arms Control and Disarmament Affairs (ACDA) Branch of the Strategic Plans Division (SPD), Rawalpindi, and the Pakistan Institute for Parliamentary Services (PIPS), Islamabad.