Editor’s Note
Part I examined the phenomenal capabilities of artificial intelligence (AI) to improve humanity and, concurrently, its multi-domain destructive potential, with a focus on lethal autonomous weapon systems (LAWS). AI demands incorporation and compels regulation, yet this very power creates the geopolitical imperative for every major state to master it at any cost.
The AI Tsunami
Nuclear and biological warfare can cause Armageddon; artificial intelligence and autonomous weapons will ensure it. – Lt Gen PR Kumar (Retd)
Moore’s law holds that computing power doubles roughly every two years, an exponential rise in what computers can do. AI has outpaced even that curve, at lower cost and with easier accessibility, following a never-before-seen ascent that is improving every facet of technology, every domain, and management itself (autonomous decision-making). Its transformative implications for technology, national power, and the world economy will ensure its irreversible proliferation globally. The upsides of AI are colossal, but its downsides and doomsday apprehensions cannot be ignored.
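As a minimal arithmetic sketch of what that doubling rate implies (illustrative only; $C_0$ denotes computing power at a chosen starting point and $t$ the years elapsed): $C(t) = C_0 \cdot 2^{t/2}$, so a single decade yields $C(10) = 2^{5} C_0 = 32\,C_0$, a 32-fold increase. The author’s contention is that AI capability has been climbing faster still.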
Focus on Downsides to Establish Necessity of Regulation/Containment
AI has already unleashed new dangers. One danger that is highly consequential yet almost impossible to detect is disinformation and misinformation, which carry major geopolitical, economic, and security implications. Deepfakes can paralyse processes and administrations, disrupt the global economy, create confrontation within and between nations, and start wars. Cyber-attacks are ubiquitous in every field; disruption of strategic communications and the command-and-control loop of the nuclear ecosystem could unleash Armageddon. The USA has already stated that disruption of its strategic nuclear systems will be considered an act of war. The temptation to put AI to diabolical use in medicine, biotechnology, and biological warfare cannot be overstated. As the power of AI grows, so will its accessibility to every level and facet of society, including autocrats, dictators, and even terrorist groups.
Geopolitics Enters the World of AI
Many AI experts fear that without robust regulation, AI could be used to develop new diseases or cyber-weapons, or advance to the point where humans can no longer control it, with potentially apocalyptic consequences. Compounding the problem is an unstable world order and the escalating confrontation between the two superpowers and tech giants, the USA and China, each keen to harness all the payoffs of AI, especially in the security and military fields. The global semiconductor and chip war is a telling example of geopolitics impinging on AI. No nation wants to cede control of AI.
The Positive News: Countries and Activists Are Seeking Regulation, But…
A 2023 survey of AI experts found that 36 per cent fear AI development could result in a “nuclear-level catastrophe.” Over 30,000 people, including Steve Wozniak, Elon Musk, the CEOs of several AI companies, and many other prominent technologists, have signed an open letter written by the Future of Life Institute calling for a six-month pause, or moratorium, on the development of new advanced AI. In November 2023, several countries issued a joint communiqué promising strong international cooperation in reckoning with the challenges of AI. Strikingly for states often at odds on regulatory matters, China, the USA, and the EU all signed the document, which offers a sensible, wide-ranging view of addressing the risks of “frontier” AI. US President Joe Biden’s October 2023 executive order on AI, the EU’s AI Act, on which the European Parliament and member states reached agreement in December 2023, and China’s recent regulations showcase a surprising degree of convergence.
However, this apparent convergence towards a new global governance regime for AI runs into a resolute and seemingly impregnable obstacle: ground realities and the geopolitics of major players, each seeking total control of AI. Restrictions and export controls on frontier AI technology (hardware and software) imposed by the USA, and matched by China, are one prime example, even though the move harms both countries and impedes the growth of AI for humanity. Another less-known but vital area is the standardisation of the digital and technical ecosystems connected to AI. Without common standards, it would be like having multiple incompatible measures of weight and temperature across the globe; the iPhone 13, for example, has nearly 200 parts sourced from more than a dozen countries, all of which must conform to a single specification.
German industrialist Werner von Siemens said in the late 1800s, “He who owns the standards owns the market.”
Move to Regulate/Contain AI
Currently, a few little-known bodies, such as the International Telecommunication Union, the International Electrotechnical Commission, the International Organization for Standardization, and the Internet Engineering Task Force, negotiate technical standards for digital technology. These bodies play a major role in setting the terms of global digital trade and competition, and their members follow majority rule. While these bodies have long been dominated by US and European officials and firms, China has rapidly emerged as a leader with considerable international support. Since 2015, it has integrated its technical standards into its Belt and Road Initiative projects. In March 2018, China launched “China Standards 2035,” calling for an even stronger Chinese role in international standard-setting. By 2019, it had reached 89 standardisation agreements with 39 countries and regions. However, divisions over AI-related technical standards have emerged. Unfortunately, China, Russia, and the USA have staunchly opposed a treaty banning autonomous weapons, arguing that the existing law of war is sufficient to address any potential harms! Nation-specific legal regimes on AI, data holding, transfer, and sharing, disclosure of algorithms, and the like, driven by every nation’s desire to keep its data secure, will also frustrate broad-based, collective solutions.
Regarding digital regulation, the USA is following a market-driven approach, China is advancing a state-driven approach, and the EU is pursuing a rights-driven approach. These three “digital empires” compete for control over the future of AI and attempt to expand spheres of influence in the digital world as other countries look for guidance on AI legislation.
Road Ahead for Lethal Autonomous Weapon Systems (LAWS)
Nations could adopt a broad legal principle establishing the minimum necessary human involvement in lethal decision-making. Such a principle could be adopted through the UN Convention on Certain Conventional Weapons (CCW) or the General Assembly. Major powers like the USA and China could self-regulate and persuade others to follow suit. A ‘human in the loop’ must be mandatory when it comes to nuclear weapons and must be legislated by all nuclear weapon states (NWS).
Overview of Application of AI in India’s National Security
With unsettled borders and confrontationist neighbours, India is already engaged in 24/7 multi-domain operations to protect its integrity and sovereignty. India is considered an IT powerhouse, yet AI-driven defence applications have been much slower to materialise, and wars will be won only if all domains are AI-enabled. NITI Aayog and the Ministry of Defence (MoD) set up a task force in 2018 to strategically implement AI for national security and defence; its report addressed defence manufacturing, made proposals, and identified military challenges and funding avenues for innovative AI-based solutions. All three services have established their own ‘AI Centres’. The MoD has committed an annual budget of INR 100 crore (US$12 million) for five years (starting in 2023) to the Defence Artificial Intelligence Project Agency (DAIPA). In 2022, the government notified a list of 75 priority projects for the use of AI in defence, focused on data processing and analysis, cyber security, simulation, and autonomous systems, particularly drones. AI applications for underwater domain awareness, border security, and counter-insurgency operations are also priorities, alongside the adoption of AI solutions from civilian space programmes with indirect defence applications.
Post Script
A recent Foreign Affairs article recommends that America let LAWS, which can supposedly make ethical choices on their own (whatever that means), operate more freely in war! The recommendation is fraught with danger at two levels: first, the USA has already decided that AI CANNOT be regulated or contained; second, it intends to stay on top of the loop, thereby starting a global race for LAWS (down the nuclear-domain route all over again).
Lt Gen PR Kumar (Retd)