Editor’s Note
AI’s growing induction into modern warfare opens up varied, highly risk-prone scenarios. As the human in the loop becomes increasingly redundant, the risks grow ever more acute. Whether in drone swarms or semi-automated loops for nuclear retaliation, the chances of a catastrophe for mankind are rising. The author analyses these issues in a two-part article, of which this is Part 1.
Artificial Intelligence and Autonomous Weapon Systems
Nuclear and Biological Warfare can cause Armageddon, Artificial Intelligence leading to Autonomous Weapons will ensure it.
– Lt Gen PR Kumar (Retd)
Just as mobile phones and the internet have become ubiquitous, so has Artificial Intelligence (AI). Innocuously, it has permeated our lives, and we can no longer do without it. So, what is AI?
“Artificial intelligence (AI) is a set of technologies that enable computers to perform tasks typically done by humans, such as learning, reasoning, and problem-solving. Interestingly, the idea of AI goes back thousands of years, with ancient inventors creating mechanical devices called ‘automatons’. AI uses logic and math to simulate human reasoning, allowing computers to process new information quickly and accurately.
AI and Machine Learning (ML) are almost always linked. While AI uses computer software to mimic human cognition, ML is a subset of AI that uses algorithms to learn patterns from data and perform complex tasks. Neural networks are a core component of AI technologies. They mimic information processing in the human brain, using artificial neurons to process information and solve problems.”
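The artificial neuron mentioned above can be illustrated with a minimal sketch: each neuron computes a weighted sum of its inputs and passes it through an activation function. The function name, weights and inputs here are illustrative, not drawn from any real system.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias term
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the result into the range (0, 1)
    return 1 / (1 + math.exp(-z))

# Example: a neuron with two inputs fires with a value between 0 and 1
output = neuron([0.5, 0.8], [0.4, -0.2], 0.1)
print(round(output, 3))
```

A modern neural network simply stacks millions or billions of such units in layers, with the weights adjusted automatically during training rather than set by hand.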
Nuclear and Biological domains provide tremendous and extraordinary processes/tools for human progress but concurrently can spell doom for humanity if misused; AI could well be the ‘Mother of All Domains’ for human progress as also for human destruction/extinction, unless ‘Man’ regulates its growth by common consensus. Alas, while intentions can be noble, AI’s extraordinary capabilities and capacities will ensure that nations, groups and individuals seek to master it to dominate others at the cost of international security and stability. This two-part paper overviews AI, automated weapon systems and how geopolitics impact its future growth. Part I focuses on autonomous weapon systems, while Part II elaborates on challenges to regulate/contain them and the role of geopolitics.
Sébastien Bubeck, a leading AI researcher at Microsoft who recently quit to join OpenAI, and Geoffrey Hinton, who quit Google (to focus on warning the world about AI) and is known as “the Godfather of AI”, have cautioned the world that “AI systems can get smarter than people”. New large-scale AI models like GPT-4 dramatically improve reasoning, problem-solving, and language capabilities.
Currently, AI can automate specific tasks, and its scope is rapidly expanding, raising significant concerns. In just the last five years, engineers, scholars, whistle-blowers and journalists have documented cases in which AI systems, composed of software and algorithms, have caused or contributed to serious harm to humans: algorithms producing racially biased recommendations in the criminal justice system, such as unfairly denying parole; steering toxic social media content to teenagers; realistic fabricated rumours causing widespread panic and even casualties; corrupting sensitive logistics, financial and management systems, causing losses of billions of dollars; an alarming rise in large-scale, almost undetectable cyber-crimes; and, not least, autonomous weapon systems that can kill or neutralise humans without moral or ethical considerations (the recent wars in Ukraine, Gaza and Azerbaijan-Armenia provide ample evidence).
Shockingly, scientists and researchers often cannot understand how these algorithms, based on opaque equations involving billions of calculations, achieve their outcomes!
Overview of Autonomous Weapon Systems
Since September 2023, Ukraine has employed the ‘Saker Scout’ unmanned aerial system (UAS), with an operational range of 12 km. It incorporates several aerial vehicles running software built on AI algorithms. Ukrainian developers have confirmed that these drones carry out autonomous strikes on 64 different types of Russian military objects without a human operator. Reconnaissance missions identify enemy military objects, even camouflaged ones, locate their coordinates, and transmit the data to the command centre. Once targets are acquired, the AI-powered reconnaissance UAV designates them and guides explosive first-person-view (FPV) drones onto them as part of the Saker Scout system. In effect, Saker is a fully autonomous weapon, designed to search, locate, select, designate and destroy; because it needs no operator link, it cannot be jammed.
At least 30 countries operate air and missile defence systems and anti-rocket protection systems for ground vehicles in autonomous modes. They automatically sense incoming rockets, artillery, mortars, missiles, or aircraft and intercept them. Currently, humans supervise their operations and can intervene if something goes wrong. Autonomous systems and drones are invaluable precisely because they do not rely on vulnerable communication links; to be stopped, they must be physically found and destroyed. Larger medium- and high-altitude drones have been used to reach deeper behind enemy lines to target radars and installations. Ukraine has even used drone boats to attack Russia’s Black Sea Fleet.
Conflicts are the breeding ground for innovating better, more complex autonomous systems. The USA, UK, China, France, India, and Russia are currently working on stealth combat drones. While ground systems lag behind their air and sea counterparts, robot soldiers and robotic fixed gun emplacements have been showcased by China, with reports emerging of their deployment along the India-China Line of Actual Control (LAC). Swarms of drones can autonomously coordinate their behaviour, reacting to changes on the battlefield at speeds beyond human capability. This will accelerate the tempo across the conflict zone (not just the tactical area), putting intense pressure on the human decision-making loop, further increasing dependency on autonomous decision-making and heralding an emerging era of machine-driven warfare. Even the ‘principles of war’ and their conduct would need constant review, adding yet another layer to an already complex multi-domain conflict environment.
The most worrying and compelling issues of integrating AI involve the entire nuclear ecosystem: warheads, strategic communication, ISR, delivery systems, and command and control. In 2022, the US, followed by the UK, declared that it would always retain a “human in the loop” for decisions to use nuclear weapons. Russia and China have not officially stated so, though, during the Biden-Xi meeting in 2024, Xi appeared to commit to keeping a human in the loop for nuclear decisions.
The Soviet Union had built a semi-automated retaliatory nuclear strike system called “Perimeter”. Once activated, it would use a series of automated sensors to detect a nuclear attack on Soviet soil. If one was detected and there was no response from the country’s leaders, presumably because they had been killed in the attack, the system would automatically transfer nuclear launch authority to a relatively junior officer in a secure bunker.
Russian officials stated in 2018 that the system is still operational and has even been upgraded. More recently, Moscow has begun developing a nuclear-armed autonomous underwater drone. In the China-India-Pakistan context, disputed borders and mutual distrust, geographic proximity, hypersonic and multiple-warhead missile systems, co-location of conventional and nuclear weapons, and very short decision times will all hasten the adoption of autonomous systems, causing further instability and raising the potential for Armageddon.
Future conflict scenarios point to an escalating spiral of greater automation and less human control, resulting in war executed at machine speed and beyond human oversight. Additionally, because they offer a low entry barrier, AI-enabled automated systems can easily be accessed, manufactured, and employed by terrorists and non-state actors, further exacerbating the dangers.
(In Part II, this piece will explore the impact of AI on geopolitics and the necessity of regulating or containing AI’s growth.)
Lt Gen PR Kumar (Retd)