The most critical factor in violence is perhaps biology rather than politics. So, while we humans have neural circuits of rage and fury, do we need to give the same to intelligent autonomous machines? Or is there another way?
Introduction
Humans are among the most relentless and oblivious killers on Earth, and our violence operates far outside the bounds of any other living species. This lethal violence pervades human society: we not only kill other species, we kill our own as well. The reason is that neural circuits of rage and violence are encoded in human biology, and, like all human behavior, destruction is controlled by the human brain.
So, as efforts intensify to build human-brain-like capacity in machines, we need to understand that when neural circuits of violence are replicated in machines, knowingly or unknowingly, we are also transferring the triggers of violence into machine intelligence.
Now, for man or machine alike, the most critical factor in violence may be not politics but biology. While we humans carry neural circuits of rage and fury, do we need to give the same to intelligent machines? Because empathy and violence travel the same channels in the human brain, replicating those qualities in smart machines means that rapidly evolving autonomous systems raise red flags for the future of humanity.
There is, however, another way: the octopus can serve as a model for developing reliably functioning artificial intelligence.
The Rise of Autonomous Systems
Across nations, autonomous systems that can think and act on their own are on the rise, and the risks arising from their applications could very well doom humanity. The emerging technology is transformative and disruptive, and it holds the potential to enable entirely new intelligence and problem-solving capabilities for human ecosystems in cyberspace, geospace and space (CGS). Yet the very idea of an intelligent autonomous system with human-like neural circuits, in which hardware and software work together to gather information, find a solution based on the collected data, and execute an action (even an attempt to kill a human) to achieve a goal, is becoming frightening.
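To make that pipeline concrete, here is a minimal sketch, in Python, of the gather-decide-act loop such a system runs; it assumes a trivially simple software agent, and every name and sensor reading in it is a hypothetical illustration, not any real system's code.

```python
# Minimal sketch of the gather-decide-act loop of an autonomous system.
# All names and readings are hypothetical illustrations, not a real system.

from dataclasses import dataclass

@dataclass
class Observation:
    data: dict  # raw information gathered from the environment

def sense() -> Observation:
    """Gather information (stubbed sensor reading)."""
    return Observation(data={"obstacle_ahead": False})

def decide(obs: Observation, goal: str) -> str:
    """Find a solution based on the collected data."""
    if obs.data.get("obstacle_ahead"):
        return "swerve"
    return "advance"  # default action toward the goal

def act(action: str) -> None:
    """Execute the chosen action in the world."""
    print(f"executing: {action}")

# The loop itself: gather -> decide -> act, repeated in pursuit of a goal.
for _ in range(3):
    act(decide(sense(), goal="reach waypoint"))
```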
It is essential to understand that autonomous systems are more than just unmanned machines. When unmanned autonomous systems that can adapt to changing conditions, that carry knowledge, and that have no inbuilt constraints in their code are assigned broad objectives such as increasing performance, enhancing safety and security, or reducing cost, the actions and outcomes they could produce in pursuit of those goals (driven by human-like neural circuits) make them highly unpredictable and prone to violence. It would not be unreasonable for such a machine to turn toward competition, destruction, and even the elimination of other species, including the human species. The sketch below illustrates what an inbuilt constraint changes.
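To ground that point, here is a hypothetical sketch of an objective-maximizing action chooser with and without an explicit constraint; the actions, scores and function names are illustrative assumptions, not any deployed system's code.

```python
# Hypothetical sketch: choosing actions to maximize a broad objective,
# with and without an inbuilt constraint. Illustration only.

# Each candidate action: (name, objective_score, harms_human)
ACTIONS = [
    ("reroute traffic",      0.6, False),
    ("shut down safeguards", 0.9, True),   # scores highest on the raw objective
    ("request human review", 0.4, False),
]

def choose_unconstrained(actions):
    """Maximize the objective with no inbuilt constraints."""
    return max(actions, key=lambda a: a[1])

def choose_constrained(actions):
    """Same objective, but actions that harm humans are filtered out first."""
    safe = [a for a in actions if not a[2]]
    return max(safe, key=lambda a: a[1])

print(choose_unconstrained(ACTIONS)[0])  # -> "shut down safeguards"
print(choose_constrained(ACTIONS)[0])    # -> "reroute traffic"
```

The objective is identical in both cases; only the inbuilt constraint separates the harmless choice from the harmful one.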
So the question is: while autonomous systems are becoming essential to a nation’s science and technology vision, should we create intelligence with human-like neural circuits, allow it to think on its own, and blindly trust it not to eliminate us?
Evolving Autonomous Systems
Across nations, a wide range of increasingly sophisticated autonomous systems for practical applications is on its way. These systems exhibit high degrees of autonomy and can perform tasks independently of human operators and without human control. Without direct human intervention or control from the outside, evolving autonomous systems today can interact with humans and with other machines; conduct dialogues with customers in online call centers; steer robot hands to pick and manipulate objects accurately and tirelessly; buy and sell stock in large quantities in milliseconds; forecast markets; fly drones and fire weapons; steer cars to swerve or brake and avoid a collision; classify individual humans and their behavior; impose fines; launch cyber-attacks and prevent them; clean homes, cut grass and pick weeds; provide surveillance and security; approve loans; and even launch space missions.
The weaponization of artificial intelligence has begun, and in some cases the power of self-governance is already being transferred to intelligent autonomous machines. This gives autonomous systems the ability to act independently of any direct human control and to explore unrehearsed conditions they were never trained for, to go places they have never gone before.
While this is remarkable progress, we must turn our attention from the promise of autonomous systems to their perils.
Acknowledging this emerging reality, Risk Group initiated the much-needed discussion on Autonomous Systems with Dr. Hans Mumm on Risk Roundup.
Disclosure: Risk Group LLC is my company
Risk Group discusses Autonomous Systems with Dr. Hans Mumm, an Autonomous Systems Expert, Futurist and Principal Investigator of Victory Systems, based in the United States.
Complex Challenges and Risks
What challenges will human civilization face as we expand the application of autonomous systems? What are the risks? While it is essential to understand and evaluate the development of autonomous systems’ capabilities, reach and impact, it is more important to assess the complex challenges and risks they would bring for the future of human civilization.
As we witness the growing number of applications of autonomous systems, we need to begin evaluating the associated risks by asking, first and foremost: why are we giving autonomous systems a human-like neural circuit and brain? If we have not been able to hold the human race accountable, why would we want to create another intelligent species with human-like neural circuits, and why would we expect it to be liable and responsible?
It is unfortunate that the rapid progress and development of autonomous systems brings such deep uncertainty for our survival and security. However, it is not too late. We can still reverse and correct the course in developing a human-like (violence-prone) machine-intelligence brain.
What Next?
While the principle of autonomy implies freedom in action and decision-making, if we keep moving toward autonomous systems with the current approach to their design and development, the doom of human civilization is almost guaranteed! It is therefore urgent that each one of us understand and evaluate why we are replicating the science of human violence in intelligent autonomous machines.
Perhaps there is another way. Let us define and design neural-net brain chips to reflect not the centralized vertebrate human brain (as everyone is doing now) but the brain of the cephalopod octopus. Octopuses have evolved much larger nervous systems and greater cognitive complexity than other invertebrates, and they are probably the closest thing we have to an intelligent alien species: roughly two-thirds of an octopus’s neurons sit in its arms rather than in a central brain, implementing a distributed, decentralized system. This is perhaps central to the argument: as we move toward a decentralized economy, does a decentralized approach to artificial-intelligence-driven autonomous systems not make better sense for the future of human civilization and its security?
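As a thought experiment, here is a minimal sketch of what octopus-style decentralized control could look like in software, with each arm running its own local controller rather than obeying one central brain; every name here is an illustrative assumption.

```python
# Hypothetical sketch: octopus-style decentralized control.
# Each arm controller senses and reacts locally; no central brain
# dictates every action. All names are illustrative assumptions.

class ArmController:
    """A local controller: reacts only to its own arm's sensors."""
    def __init__(self, arm_id: int):
        self.arm_id = arm_id

    def step(self, local_touch: bool) -> str:
        # Local reflex: grasp what this arm touches, otherwise explore.
        return "grasp" if local_touch else "explore"

# Eight arms, each with its own controller and its own sensor reading.
arms = [ArmController(i) for i in range(8)]
touches = [False, True, False, False, True, False, False, False]

# Decentralized loop: each arm decides for itself, in principle in parallel.
for arm, touch in zip(arms, touches):
    print(f"arm {arm.arm_id}: {arm.step(touch)}")
```

On this argument, the design point is containment: with no single central decision-maker, no one circuit, violent or otherwise, can commandeer the whole system at once.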
The time is now to decide whether we should move toward developing autonomous systems with the mind of an octopus or the mind of a human!
About the Author
Jayshree Pandya (née Bhatt), Founder and CEO of Risk Group LLC, is a scientist, a visionary, an expert in disruptive technologies and a globally recognized thought leader and influencer. She is actively engaged in driving the global discussions on existing and emerging technologies, technology transformation and nation preparedness.
Copyright Risk Group LLC. All Rights Reserved