Intelligence is no longer the exclusive domain of the world’s most powerful institutions or nations. If we are to maintain global peace and security, we need to effectively manage the security risks emerging from the dual use of artificial intelligence.
Introduction
The rapid progress of artificial intelligence (AI) is prompting intense speculation about its dual-use applications and security risks. From autonomous weapons systems (AWS) to facial recognition technology to decision-making algorithms, each emerging application of AI brings with it the potential for both good and harm. It is this dual-use nature of AI technology that poses enormous security risks not only to individuals and entities across nations (governments, industries, organizations, and academia, collectively NGIOA) but also to the future of humanity.
The reality is that any new AI innovation can be used for both beneficial and harmful purposes: the same algorithm that enables critical economic applications might also enable modern weapons of mass destruction on a scale that is difficult to fathom. As a result, concerns about AI-based automation are growing.
With advances in machine learning, computing power, borderless investment, and data availability, AI has potentially universal applications, a fact that exponentially complicates the security risks involved. Moreover, since the integration of AI is largely invisible to the human eye, the ability to cognitize anything and everything fundamentally changes the security landscape. AI’s automation and intelligence bring promise as well as the potential for destruction.
The Democratization of Computing Power
The evolution of computing technologies has democratized technological development, knowledge, information, and intelligence. The rapid growth of data analytics and processing technology positions cyber, electronic, quantum, and autonomous weapons alongside nuclear, biological, and nanoweapons in the elite club of armaments capable of unleashing catastrophic harm on humanity.
This is because connected computers, networks, and cyberspace have become an integral part of all digital processes across NGIOA. Since computer code and connected devices have linked cyberspace to geospace and space, everything now either is controlled from or passes through cyberspace.
Since AI has the potential to be integrated into virtually every product and service across cyberspace, geospace, and space (CGS) to make them intelligent, this evolving cognitive capability fundamentally changes the security landscape for humanity.
As seen today, all of the following have dual use:
- Technology
- Computing power
- Programming languages
- Computer code
- Encryption
- Information
- Big data
- Algorithms
- Investment
When anyone and everyone has access to the digital data, machine learning techniques, and computing power needed to create AI for their own agenda, the complex security challenges that emerge become difficult to manage. Not only is the number of potential weapons of mass destruction growing rapidly, but so are the attack surface and the number of potential adversaries. The security risks arising from the dual use of AI are becoming dire. That brings us to an important question: should the survival, security, and sustainability of humanity be left to the wisdom of individuals, who may or may not be informed of the potential dual-use nature of AI? Perhaps it is time to develop an AI framework.
The Democratization of Data
As digital data in cyberspace becomes a contested commons, the democratization of big data ushers in an entirely new world order, one in which anyone, from anywhere, can access digital data and use it for the good or ill of humanity. Today, any individual or entity who desires and knows how to access big data, and who has data science capabilities, can use it for whatever intelligence, automation, surveillance, or reconnaissance purpose they want, irrespective of their education, background, standing, or intentions in society.
While the democratization of big data brings universal accessibility and empowers individuals and entities across NGIOA, it also brings many critical security risks. Anyone, with or without formal training, can accidentally or purposefully cause chaos, catastrophe, and existential threats to a community, ethnicity, race, religion, nation, or humanity as a whole.
Since data is necessary to build most AI systems, should the democratization of digital information go unchecked, without any accountability or responsibility?
Artificial Intelligence
Recent developments in artificial intelligence, robotics, drones, and more have tremendous potential for the benefit of humankind, but many of the same advances bring the perils of unchecked intelligence, surveillance, and reconnaissance through biased or lethal algorithms, for which no one is prepared.
Compounding the complex security risks emerging from these biased and lethal algorithms (which are products of algorithms, data, computing power, and an agenda) is the reality that the decision-makers involved in developing or deploying artificial intelligence generally do not focus on fundamental issues such as security. As a result, building a sustainable culture of AI security requires not only making AI researchers, investors, users, regulators, and decision-makers aware of the dual-use nature of AI development, irrespective of the nature of the algorithms, but also educating everyone that any algorithm under development could end up posing a dual-use dilemma.
As seen across nations today:
- Anyone can create algorithms that can have dual use
- Anyone can purchase necessary data, tools, and technologies to develop algorithms
- Remote developers can build algorithms on anyone’s behalf
- Algorithms can be bought separately and integrated for any purpose
- Automated unmanned aerial vehicles are on the rise
- Autonomous weapons systems are a reality
- AI systems with autonomy are a reality
- Autonomous facial recognition is a reality
- AI algorithms are vulnerable to threats and manipulation
This raises a fundamental question: how is it that any individual or entity, in any nation, can access the digital data and the theoretical science needed to create artificial intelligence for any purpose or agenda, without any accountability, oversight, or consequences?
The development of such artificial intelligence needs to be audited, mapped, governed, and, when necessary, prevented.
Today’s AI development landscape introduces uncertainty on many fronts. Because AI is software, built not only into weapons systems used by the military but into every intelligent application used across NGIOA, the potential for each AI-based application to have dual use is a reality that can no longer be ignored.
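One of the points listed above, that AI algorithms are vulnerable to threats and manipulation, can be made concrete with a minimal sketch. The toy classifier, weights, and inputs below are entirely hypothetical; the perturbation illustrates the core idea behind gradient-sign attacks (such as FGSM), in which a small, targeted change to an input flips a model’s decision.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x, b=0.0):
    """Probability that input x belongs to the positive class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical linear classifier (a toy stand-in for, say, a "threat detector").
w = [2.0, -1.5, 0.5]
x = [0.4, -0.2, 0.6]          # benign input, classified positive

# Adversarial perturbation: nudge each feature against the sign of its weight,
# the core idea behind gradient-sign attacks.
eps = 0.5
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

p_clean = predict(w, x)       # ≈ 0.802 (confidently positive)
p_adv = predict(w, x_adv)     # ≈ 0.354 (decision flipped to negative)
print(f"clean: {p_clean:.3f}  adversarial: {p_adv:.3f}")
```

Here a per-feature nudge of at most 0.5 flips a confidently positive prediction to a negative one; attacks on real deep models achieve the same effect with far smaller, often humanly imperceptible, perturbations.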
Security Dilemma
For much of human history, the concept of and approach to security have revolved mainly around the use of force and the territorial integrity of geospace. As the definition and meaning of security are fundamentally challenged and changed in the world of artificial intelligence, the notion that security concerns only violence against nations in geospace, from within or across their geographical boundaries, is now outdated and needs to be re-evaluated and updated.
This new world of AI remains more or less alien territory, with few knowns and mostly unknowns. Its emergence has triggered fear, uncertainty, competition, and an arms race, and it is leading us toward a new battlefield that has no boundaries or borders, may or may not involve humans, and may prove impossible to understand, let alone control.
The challenges and complexities of evolving AI threats and security have crossed the barriers of space, ideology, and politics, demanding a constructive, collaborative effort from all stakeholders across nations. Collective NGIOA brainstorming is necessary for an objective evaluation of what constitutes a threat and how it can be secured against.
Acknowledging this emerging reality, Risk Group initiated a much-needed discussion on the dual-use technology dilemma on Risk Roundup. In the video below, I interview Prof. Dr. Ashok Vaseashta, Executive Director and Chair of the Institution Review Board at NJCU, USA.
Disclosure: Risk Group LLC is my company
Risk Group discusses “Dual Use Technology Dilemma and National Security” with Prof. Dr. Ashok Vaseashta, Executive Director and Chair of the Institution Review Board at NJCU, USA; Professor of Professional Security Studies; Chaired Professor of Nanotechnology at the Ghitu Institute of Electronic Engineering and Nanotechnologies, Academy of Science, Moldova; and Strategic Advisor to many government and non-government organizations based in the United States.
While the debate on the structure, role, and dual use of AI will continue in the coming years, any attempt to redefine AI security needs to begin with identifying, understanding, incorporating, and broadening the definition and nature of AI security threats. Although the focus of this article is on AI, many other technologies need to be evaluated for their dual-use potential. The time to begin a discussion on the dual use of emerging technology is now.
About the Author
Jayshree Pandya (née Bhatt), Founder and CEO of Risk Group LLC, is a scientist, a visionary, an expert in disruptive technologies and a globally recognized thought leader and influencer. She is actively engaged in driving the global discussions on existing and emerging technologies, technology transformation and nation preparedness.
Copyright Risk Group LLC. All Rights Reserved