The Troubling Trajectory of Technological Singularity

As the intelligence explosion seems inevitable, the troubling trajectory of technological singularity forces us to think seriously about what we want as a species.

Introduction

As humanity stands on the brink of a technology-triggered information revolution, the scale, scope, and complexity of the impact of intelligence evolution in machines are unlike anything humankind has experienced before. The speed at which ideas, innovations, and inventions are emerging on the back of artificial intelligence has no historical precedent and is fundamentally disrupting nearly everything in the human ecosystem.

In addition, the breadth, depth, and impact of intelligence evolution on ideas and innovation across cyberspace, geospace, and space (CGS) herald the fundamental transformation of entire interconnected and interdependent systems: basic and applied science, research and development, concept to commercialization, politics to governance, socialization to capitalism, education to training, production to markets, survival to security, and more.

The technology-triggered intelligence evolution in machines, and the linkages between ideas, innovations, and trends, have brought us to the doorstep of the singularity. Irrespective of whether we believe the singularity will happen, the very thought raises many concerns and critical security risks and uncertainties for the future of humanity. It forces us to begin a conversation, with ourselves and with others, individually and collectively, about what we want as a species.

While there is no way to calculate just how and when this intelligence evolution will unfold in machines, one thing is clear: it will change the very fundamentals of security, and the response to it must be integrated and comprehensive.

Acknowledging this emerging reality, Risk Group initiated the much-needed discussion on Artificial Intelligence (AI) Driven Technological Singularity with David Wood on Risk Roundup.

Disclosure: Risk Group LLC is my company

Risk Group discusses Artificial Intelligence-driven Technological Singularity with David Wood, Chair at London Futurists, Principal at Delta Wisdom, and a pioneer of the smartphone industry, based in the UK.

Information Evolution

While we humans have changed our ecosystem numerous times, intelligent machines are expected to improve and expand the human ecosystem in unprecedented ways. As seen across nations, connected computers, information and communication technologies, digitization, and the internet have fundamentally disrupted how humans create and process information, giving rise to the information age.

The success of the rapidly progressing information age is built on data, and data and information are now all around us. When we evaluate the information and intelligence evolution in machines, we realize that it has transformed human life across CGS through fundamental innovations in how we generate digital data and information, how we capture and collect it, how we store it, and how we retrieve and replicate it, blurring the boundaries of languages, time zones, politics, ideology, race, religion, and culture.

The quantity and quality of digital data and information created and stored by humans are multiplying rapidly in cyberspace, already reaching several quintillion bytes. The human population is currently around 7.7 billion, and the number of people getting connected keeps growing. With increasing connectivity, data and information keep growing as well, feeding machines the patterns they need to learn and become more and more intelligent.

Progress in Machine Learning Algorithms and Neuromorphic Chips

On the back of the ongoing information revolution, machine learning algorithms are improving rapidly, and the power and promise of software that learns from examples, patterns, and models seem immense. Alongside these impressive advances in software, a parallel evolution in computer hardware is underway to enhance machine intelligence. It is visible in the intensifying effort to develop systems-on-a-chip: redesigned, more efficient, lower-energy microprocessors that mimic the circuitry of the human brain. Because these rapidly evolving neuromorphic chips are designed to process human sensory data such as images, smells, and sounds, and to respond to changes in that data in ways not explicitly programmed, much is expected to change for machine intelligence and the evolution of artificial intelligence.

This is mainly because any effort to shrink the power of a neural net (whether modeled on a human brain or an octopus brain) onto a single semiconductor chip means that these learning, modeling, and pattern-recognition algorithms can be embedded into a far broader range of systems, increasing the data and information capabilities available for exponential growth in machine intelligence. This is expected to change fundamentally how we gather information and intelligence.
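To make "learning by example" concrete, here is a minimal, purely illustrative Python sketch of a single perceptron that infers the logical-AND rule from labelled examples rather than being explicitly programmed with it. It is a toy model only; it has nothing to do with neuromorphic hardware, and every name and number in it is an illustrative assumption rather than anything taken from the discussion above.

```python
# Minimal illustration of "learning by example": a single perceptron that
# infers the logical-AND pattern from labelled examples instead of being
# given the rule. Purely a toy sketch, not a real machine learning system.

# Training data: (inputs, expected output) pairs for logical AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]   # one weight per input
bias = 0.0
learning_rate = 0.1

def predict(inputs):
    """Return 1 if the weighted sum of the inputs (plus bias) is positive."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Repeatedly nudge the weights toward whatever reduces the error on each
# example -- the essence of learning patterns from data.
for _ in range(20):
    for inputs, expected in examples:
        error = expected - predict(inputs)
        bias += learning_rate * error
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]

for inputs, expected in examples:
    print(inputs, "->", predict(inputs), "(expected", expected, ")")
```

Real systems scale this idea up to millions or billions of parameters, but the core loop, adjusting weights to reduce error on examples, is the same, which is why embedding it in efficient hardware matters so much.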

As a result, as the computing power of rapidly evolving machines comes to exceed that of even the most intelligent and evolved human brain, the exponential growth in machine intelligence will continue toward the singularity. Artificial superintelligence seems to be just around the corner.
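As a rough illustration of what "exponential growth" implies here, the following back-of-envelope sketch compares a machine capacity figure against several commonly cited, and heavily contested, estimates of the brain's processing rate. Every number in it is an illustrative assumption, not a claim from this article.

```python
# Back-of-envelope sketch of exponential growth in computing power.
# All figures are illustrative assumptions: ~1e18 ops/sec for a top
# supercomputer today, contested brain estimates spanning many orders of
# magnitude, and a Moore's-law-style two-year doubling time.

import math

brain_ops_estimates = {"low": 1e13, "mid": 1e16, "high": 1e25}  # ops/sec, all contested
machine_ops_today = 1e18          # rough order of magnitude, assumption
doubling_time_years = 2.0         # assumed doubling time, not a law of nature

for label, brain_ops in brain_ops_estimates.items():
    if machine_ops_today >= brain_ops:
        print(f"{label} estimate ({brain_ops:.0e} ops/s): already exceeded")
    else:
        doublings = math.log2(brain_ops / machine_ops_today)
        print(f"{label} estimate ({brain_ops:.0e} ops/s): "
              f"~{doublings * doubling_time_years:.0f} more years at this pace")
```

The spread between "already exceeded" and "decades away" under different assumptions is exactly why no definite timeline for the singularity can be stated.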

Impact of the Intelligence Explosion

There is no doubt that when a superintelligence emerges through artificial intelligence, it will bring to bear greater problem-solving and inventive skills than humans are currently capable of. However, would that not also mean creating another species whose intelligence may or may not have human interests at heart? What happens to human intelligence and the human race at the point of singularity?

Keeping Up with Superintelligence

This brings us to an important question: amid rapidly evolving and converging technologies, when the intelligence explosion seems inevitable, how will humans keep up with super-intelligent machines?

Many believe that to overcome artificial superintelligence, emerging methods can be used to enhance human intelligence and create superhumans with superintelligence. While in theory this is likely possible through intelligence amplification of the human brain and/or intelligence augmentation (through advances in bioengineering, genetic engineering, nootropic drugs, mind uploading, direct brain-computer interfaces, AI assistants, and more), the reality is that the evolution of the human brain and human intelligence is a very complex endeavor with too many unknowns, dependencies, and variables.

While evaluating superhuman intelligence is not the focus of this article, the possibility of evolving the human brain and human intelligence needs to be researched further if humans are to keep up with artificial superintelligence.

Economic Impact of Singularity

Some form of automation has often driven economic progress, and AI has begun to increase automation in the production of goods and services across nations and jobs. So the question emerges: what happens if everything can be automated? That is, if AI can replace people and processes, what would economic growth look like?

Beyond the Technological Singularity Timeline

While there can be no definite timeline or consensus on when superintelligence is likely to be achieved, one thing is clear: the troubling trajectory of technological singularity forces us to think seriously about what we want as a species. Irrespective of whether the singularity is driven by artificial intelligence or any other technological means, it is bound to trigger a technological tsunami, resulting in unfathomable changes and challenges to human civilization and its ecosystem across cyberspace, geospace, and space.

Singularity and Security Risks

Since there is no direct evolutionary motivation for an AI to be friendly to humans, the challenge lies in evaluating whether superintelligent machines emerging from an artificial intelligence-driven singularity would, under evolutionary pressure, promote their survival over ours. The reality remains that artificial intelligence evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect a superintelligent machine to deliver a result desired by humankind.

What Next

We humans are living in a paradox: advances in artificial intelligence are reshaping the human ecosystem with both graver risks and more valuable opportunities than ever before. Whether promise or peril prevails will define and determine the future of humanity.

This article is not meant to make timeline predictions about the singularity but rather to begin a discussion on the troubling trajectory of artificial intelligence evolution for the future of humanity. Whether we believe the singularity is near or already here, the very thought raises crucial security and risk questions, forcing us to think seriously about what we want as a species.

About the Author

Jayshree Pandya (née Bhatt), Founder and CEO of Risk Group LLC, is a scientist, a visionary, an expert in disruptive technologies and a globally recognized thought leader and influencer. She is actively engaged in driving the global discussions on existing and emerging technologies, technology transformation, and nation preparedness.

Copyright Risk Group LLC. All Rights Reserved

Written by Risk Group
Risk Group LLC, a leading strategic security risk research and reporting organization, is a private organization committed to improving the state of risk-resilience through collective participation and reporting of cyber-security, aqua-security, geo-security, and space-security risks in the spirit of global peace through risk management.