AI Technology Usage in Weapons Systems

“Whoever becomes the leader in [artificial intelligence] will become the ruler of the world.” – President Vladimir Putin, 2017

International security is adapting quickly to new developments in technology. The nuclear bomb, space travel, and chemical weapons each drastically changed the state of warfare. Now artificial intelligence, one product of the 21st-century digital age, is transforming the way we use and conceptualize weaponry and global militaries.

Presently, global leaders are automating weapons systems both to modernize antiquated weapon technology and to thwart new security threats in the age of globalization and digitalization. The quote above from Russian president Vladimir Putin illustrates the space-race dynamic of automated weapons development and the importance of keeping pace with autonomous technology. Current leaders in automated weaponry include China, the United States, Russia, and Israel.

Key Terms and Definitions

Artificial intelligence, commonly known as AI, refers to the way computers process large volumes of information, recognize patterns in that data, and make predictions based on the information given. Though the word “intelligence” carries many social connotations, it is important to recognize that artificial intelligence is simply a way of synthesizing large amounts of data. Much like traditional forms of data analysis, the data set limits the quality of the predictions, leaving room for faulty or poorly informed output.
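As a toy illustration (the data here is hypothetical, not from any real system), the sketch below "trains" a simple least-squares predictor on a narrow data set and then asks it to extrapolate far beyond that data. The large error shows how a limited data set can produce confidently wrong predictions.

```python
# Minimal sketch: a predictive model is only as good as its data.
# We fit a one-variable linear predictor on a narrow data set, then
# ask it to extrapolate outside the range it was trained on.

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Training data covers only a narrow slice of reality (x in [0, 4]),
# where the true relationship (y = x**2) happens to look almost linear.
xs = [0, 1, 2, 3, 4]
ys = [0, 1, 4, 9, 16]

a, b = fit_linear(xs, ys)
prediction = a * 10 + b   # extrapolate to x = 10
truth = 10 ** 2           # the real value at x = 10

print(round(prediction, 1), truth)  # prints: 38.0 100
```

The gap between 38 and 100 is not a bug in the model; it is the consequence of a data set that never covered the situation the model was asked about, which is precisely the reliability concern raised later in this brief.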

Automation refers to the process of making systems function automatically, without human intervention. Automation spans a spectrum from no autonomy through partial autonomy to full autonomy.

Artificial intelligence plays a key role in automation: its predictive capability allows machines to interpret information as they operate, enabling greater autonomy and less reliance on human intervention.
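The autonomy spectrum above can be sketched as a simple policy check. The names here are illustrative only, not drawn from any real weapons system.

```python
from enum import Enum

class AutonomyLevel(Enum):
    NONE = "none"        # a human performs every action
    PARTIAL = "partial"  # the machine proposes, a human confirms
    FULL = "full"        # the machine acts without human intervention

def requires_human(level: AutonomyLevel) -> bool:
    """Whether a human must approve the system's proposed action."""
    return level in (AutonomyLevel.NONE, AutonomyLevel.PARTIAL)

print(requires_human(AutonomyLevel.PARTIAL))  # prints: True
print(requires_human(AutonomyLevel.FULL))     # prints: False
```

The policy debate discussed below is, in effect, a debate over which of these levels is acceptable for lethal systems.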

Implementations of an AI-Automated Military

AI development is contentious among U.S. decision makers, who raise ethical and moral questions about AI and fully autonomous weapons. This debate has significantly stunted growth in the AI defense sector, leading political analysts to caution against falling behind.

The hesitation to implement AI automation in the military has merit. There is significant skepticism about the reliability of the technology: gaps in data sets threaten the dependability of its output. Additionally, blind trust in AI undermines the importance of human judgment, which generally prevails in wartime decision-making because of war’s complexities. For example, in 1983 a Soviet officer, presented with a warning that the U.S. had launched a nuclear attack, decided not to retaliate. The warning turned out to be a computer malfunction, and his restraint ultimately saved the world from nuclear disaster.

However, AI-powered weapons can significantly change the state of combat. Semi- or fully autonomous weapons reduce casualties among armed forces and the need for a large standing army. Furthermore, global actors like China and Russia are placing significant emphasis on proliferating AI weapons, threatening U.S. security.

The U.S. government has been slow to adopt more sophisticated AI-driven weaponry, leading the National Security Commission on Artificial Intelligence to conclude in 2021 that the U.S. is ill-prepared for future AI combat. To address this, the Biden administration appointed Margaret Palmieri as deputy chief digital and artificial intelligence officer to spearhead the move toward AI defense systems. The administration also created the National Artificial Intelligence Research Resource Task Force, which focuses on increasing access to this technology to promote innovation and to incentivize engineers, researchers, and data scientists to join the defense sector. These efforts face limitations, however. The United States will need to secure access to more defense data, especially data held by private companies, and attracting data talent is an obstacle when STEM workers flock to start-ups and private firms promising higher pay and less-regulated research.

In 2020, the United States defense sector set aside $4 billion for AI research and automated technology. That figure is small compared with overall defense spending, which reached $100 billion in 2020 for general weapons research and development. It is worth keeping in mind, however, that the cost of AI technologies is falling rapidly as hardware becomes more affordable in the private sphere.

Weapons Automation Among Global Actors

France is developing an ethics committee to oversee the development of novel automated weapons systems. The committee will work with the Ministry of Defense to ensure that the systems implemented are reliable and safe. Similarly, Germany is taking a multilateral approach to AI integration: the government is committed to seeing AI technology used ethically and sustainably in both the private and military sectors.

Israel currently leads the Western world in technological development and maintains a symbiotic relationship with the United States in weapons development, research, and funding. The most notable achievement in Israeli defense is the Iron Dome missile defense system, which automatically detects and shoots down incoming artillery and rockets before they can threaten civilians. The system operates with little human oversight, meaning there is no chain of command for initiating the defense response.

China commands much of the world’s attention on automated weapons systems. The majority of China’s AI systems are used in the surveillance sector to monitor movement and social activity through biometric data. In automated defense, China is developing new forms of weaponry rather than automating existing technology, most notably swarming technology: the use of many small, fully autonomous drones to surround enemies.

Russia, with Chinese aid, is currently developing AI weaponry to bolster its security ambitions. This technology, however, is largely absent from the current conflict in Ukraine, where forces are using traditional war tactics. Instead, a large portion of Russian aggression has consisted of deepfake media and misinformation campaigns. For instance, in the lead-up to the 2016 U.S. presidential election, Russia used troll farms on Facebook to sow discord in key swing districts. Russia used similar tactics to foster pro-Russian sentiment in eastern Ukraine and bolster rebel forces in Donbas. While misinformation is certainly a war tactic, these actions fall outside typical AI-powered weaponry.

