The Weaponization of Artificial Intelligence

Weaponized artificial intelligence is almost here. As algorithms begin to change warfare, the rise of the autonomous weapons system is becoming a terrifying reality.

Introduction

Technological development has become a rat race. In the competition to lead the emerging technology race and dominate the futuristic warfare battleground, artificial intelligence (AI) is rapidly becoming the center of a global power play. As seen across many nations, the development of autonomous weapons systems (AWS) is progressing rapidly, and this growing weaponization of artificial intelligence is a highly destabilizing development. It brings complex security challenges not only for each nation's decision-makers but also for the future of humanity.

The reality today is that artificial intelligence is leading us toward a new algorithmic warfare battlefield that has no boundaries or borders, may or may not have humans involved, and will be impossible to fully understand, and perhaps control, across the human ecosystem in cyberspace, geospace, and space (CGS). As a result, the very idea of the weaponization of artificial intelligence, in which a weapon system, once activated across CGS, can select and engage human and non-human targets without further intervention by a human designer or operator, is causing great fear.

The thought of an intelligent machine, or machine intelligence, having the ability to perform any projected warfare task without any human involvement or intervention, using only the interaction of its embedded sensors, computer programming, and algorithms with the human environment and ecosystem, is becoming a reality that can no longer be ignored.

The Weaponization of Artificial Intelligence

As AI, machine learning, and deep learning evolve further and move from concept to commercialization, the rapid acceleration in computing power, memory, big data, and high-speed communication is not only creating an innovation, investment, and application frenzy but also intensifying the quest for AI chips. This ongoing rapid progress signifies that artificial intelligence is on its way to revolutionizing warfare and that nations will undoubtedly continue to develop the automated weapons systems it makes possible.

When nations individually and collectively accelerate their efforts to gain a competitive advantage in science and technology, the further weaponization of AI is inevitable. Accordingly, there is a need to imagine what an algorithmic war of tomorrow would look like, because building autonomous weapons systems is one thing; using them in algorithmic warfare against other nations and against other humans is another.

Reports are already emerging of complex algorithmic systems supporting more and more aspects of war-fighting across CGS, and the truth is that the commoditization of AI is now a reality. As seen in cyberspace, automated warfare (cyberwarfare) has already begun, where anyone and everyone is a target. So, what comes next: geo-warfare and space-warfare? Moreover, who and what will be the targets?

The rapid development of AI weaponization is evident across the board: navigating and utilizing unmanned naval, aerial, and ground vehicles; producing collateral-damage estimations; deploying “fire-and-forget” missile systems; and using stationary systems to automate everything from personnel management and equipment maintenance to the deployment of surveillance drones, robots, and more. So, as algorithms come to support more and more aspects of war, we arrive at an important question: which uses of AI in today's and tomorrow's battles should be allowed, which restricted, and which banned outright?

While autonomous weapons systems are believed to offer opportunities for reducing the operating costs of weapons systems, specifically through more efficient use of manpower, and will likely enable weapon systems to achieve greater speed, accuracy, persistence, precision, reach, and coordination on the CGS battlefield, the need to understand and evaluate the technological, legal, economic, societal, and security issues remains.

Role of Programmers and Programming

Amidst these complex security challenges and the sea of unknowns coming our way, what remains fundamental for the safety and security of the human race is the role of programmers and programming, along with the integrity of semiconductor chips. The reason is that programmers can define and determine the nature of AWS, at least in the beginning, until AI begins to program itself.

However, if and when a programmer intentionally or accidentally programs an autonomous weapon to operate in violation of current or future international humanitarian law (IHL), how will humans control the weaponization of AI? Moreover, because AWS is centered on software, where should the responsibility for errors and for the manipulation of AWS design and use lie? That brings us to the heart of the question: when and if an autonomous system kills, who is responsible for the killing, irrespective of whether it is justified?

Cyber-Security Challenges

In short, algorithms are by no means secure, nor are they immune to bugs, malware, bias, or manipulation. Moreover, since machine learning uses machines to train other machines, what happens if the training data is manipulated or laced with malware? While security risks are everywhere, connected devices increase the possibility of cybersecurity breaches from remote locations, and because the code involved is opaque, security is very complex. So, when AI goes to war with other AI (whether over cyber-security, geo-security, or space-security), the ongoing cybersecurity challenges will add significant risks to the future of humanity and the human ecosystem in CGS.
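The training-data risk is worth making concrete. The following is a minimal, purely hypothetical sketch, using a synthetic dataset and a stand-in model rather than anything from a real weapons system, of how an adversary who can flip even a fraction of training labels (“data poisoning”) quietly degrades the model trained on them:

```python
# Minimal sketch of label-flipping data poisoning (illustrative only).
# Assumes scikit-learn and NumPy; the dataset, model, and flip fractions
# are stand-ins, not taken from any real system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for any sensor-derived training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train on data in which a fraction of labels has been flipped."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    flip_idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]  # the adversary's edit
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # accuracy on clean test data

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"labels flipped: {frac:.0%} -> "
          f"test accuracy: {accuracy_after_poisoning(frac):.3f}")
```

Real poisoning attacks are far subtler than random label-flipping, often targeting specific inputs while leaving overall accuracy intact, which is precisely why manipulated training data is so hard to detect: a model is only as trustworthy as the pipeline that trains it.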

While it seems autonomous weapons systems are here to stay, the question we all individually and collectively need to answer is this: will artificial intelligence drive and determine our strategy for human survival and security, or will we?

Acknowledging this emerging reality, Risk Group initiated the much-needed discussion on autonomous weapons systems with Markus Wagner, a published author and Associate Professor of Law at the University of Wollongong in Australia, on Risk Roundup.

Disclosure: Risk Group LLC is my company


What Next?

As nations accelerate their individual and collective efforts to gain a competitive advantage in science and technology, the further weaponization of AI is inevitable. As a result, the deployment of AWS would alter the very meaning of being human and would directly change the fundamentals of security and the future of humanity and peace.

It is essential to understand and evaluate what could go wrong if the autonomous arms race cannot be prevented. It is time to acknowledge that merely because technology may allow for the successful development of AWS does not mean that we should pursue it. It is perhaps not in the interest of humanity to weaponize artificial intelligence. It is time we take a pause.

About the Author

Jayshree Pandya (née Bhatt), Founder and CEO of Risk Group LLC, is a scientist, a visionary, an expert in disruptive technologies, and a globally recognized thought leader and influencer. She is actively engaged in driving global discussions on existing and emerging technologies, technology transformation, and nation preparedness.


Written by Risk Group
Risk Group LLC, a leading strategic security risk research and reporting organization, is a private organization committed to improving the state of risk-resilience through collective participation and reporting of cyber-security, aqua-security, geo-security, and space-security risks, in the spirit of global peace through risk management.