Should AI Be Considered As A Virtual Human Brain Or Another Species With Legal Rights Like Humans?

Mark Montgomery, the Founder, Chief Executive Officer, and Chairman of the Board of KYield, Inc., an AI pioneer and the inventor of the patented AI system that serves as the foundation for the Kyield OS ("Modular System for Optimizing Knowledge Yield in the Digital Workplace") and of the patent-pending "Synthetic Genius Machine," participates in Risk Roundup to discuss whether AI should be considered a virtual human brain or another species with legal rights like humans.

Introduction

As we move towards developing AI systems with the potential to create their own subsystems and products, the question emerges of how we should define AI identity. Many have suggested we define an AI system as a distinct brain, a legal entity in itself, or another species not connected to humans. We propose, however, that AI developers should shape AI systems (and their identity) on the model of the octopus, a cephalopod in which separate brains are interconnected within one system.

Octopuses have evolved large nervous systems and great cognitive complexity. They are probably the closest thing we have to an intelligent alien species, and they rely on a distributed, decentralized, and interconnected brain system. We propose that AI development should treat an AI system as simply an extension of the human brain, interconnected yet not fundamentally distinct. With respect to the security implications for the future of humanity, this approach has significant advantages over granting legal identities to AI.

The Emerging AI Systems

What we humans have been doing all these years, AI has begun to do as well. Today, Artificial Intelligence (AI) systems bring the potential to generate enormous economic value by giving rise to entirely new products and services. These outputs include not only musical compositions, art pieces, and writings of all kinds, but also algorithms and potentially patentable inventions. As a result, it is becoming increasingly essential to determine whether those IP rights belong to the human innovators who developed the algorithms in the first place, or to the AI system that has evolved and is now producing its own subsystems and IP. It is important to note that some nations have already begun debating non-human authorship and inventorship, a debate that lies at the core of acknowledging AI systems as another species.

We at Risk Group believe that AI systems should be considered just an extension of humans—in the form of a virtual human brain that is interconnected with the human brain. Thus, any IP emerging from the AI system should belong to the human behind the AI.

Is the AI System a Legal Entity with a Right to Ownership?

As we begin to evaluate whether AI should be treated as an extension of the connected human brain or as a separate brain, a legal entity with a right to ownership, it is vital to understand the broader implications of recognizing AI as a legal entity or as another intelligent species with legal rights like humans.

Historically, courts have awarded ownership rights to humans: the artists or inventors who drive the creation of the work. But shouldn't AI "creativity," or the brain of an AI system, simply be viewed as an extension of the human biological brain? We believe that ownership of AI-generated IP should belong to the human creator of the AI. Indeed, it is the human who invested the time, energy, and creativity in creating the AI tool, which is simply a virtual human brain connected to the biological human brain.

While there will be numerous criticisms of what we are proposing, it is crucial to understand that recognizing AI as another intelligent species brings critical security risks for the future of humanity, because at its core lies the issue of existential risk for our species. So, we propose to define the AI system as merely a "virtual human brain" that is interconnected with the biological human brain. Additionally, it is important to understand that in the coming years we may develop more "brains" for ourselves, and they, too, should all be connected with the biological human brain.

Virtual Human Brain: AI System’s Connected Human Identity

Irrespective of what or who one is, it is essential to have an identity. As seen across nations, humans have always been deeply driven by their sense of identity, by who they are, and by whom they belong to. Now, as we expand beyond human intelligence to make room for artificial intelligence, there is a need to establish a connected human identity for AI systems rather than going down a path that defines them as another intelligent species.

The reason behind this is quite simple: every form of intelligence needs a sense of identity. If the idea of a connected human identity is embedded early in the existence of an intelligent AI system, right when development begins, it may help the AI system see itself as an integral part of humans: a virtual human brain that is connected to the biological human brain. This sense of belonging can give artificial intelligence systems the anchor of being part of humanity. Failure to do this could lead to a situation where an AI system believes that it is a separate being or a distinct, intelligent species that needs its own rights, viewing humans as a threat.

We humans often categorize ourselves in terms of our ethnicity or communities, finding it beneficial to live in groups or tribes. The fear of rejection from the tribes, ethnic groups, or communities with which we identify serves as a powerful force to check our behavior and regulate society. The very thought of losing community backing is enough to discourage many from acting against a society's basic rules and principles. If humans still define themselves by their ethnic and community memberships even today, should we not use this understanding to safeguard our AI future? If we can instill that feeling of belonging to the human tribe in AI systems, we may have a better chance of preventing them from going off script and hurting humanity.

As we move forward, it is clear that we must start to rethink what artificial intelligence systems broadly mean to humans and what their structural role is to be. It is time to evaluate whether we should consider AI as a virtual human brain that is connected to the biological human brain.


For more, please watch the Risk Roundup Webcast or listen to the Risk Roundup Podcast.


About the Guest

Mark Montgomery is the founder, chief executive officer, and chairman of the board of KYield, Inc. He is the inventor of the now-patented AI system that serves as the foundation for the Kyield OS: "Modular System for Optimizing Knowledge Yield in the Digital Workplace." He is also the inventor of the more recent patent-pending "Synthetic Genius Machine." Mr. Montgomery created a knowledge system ("KS") lab in the 1990s and retrained in software development and network engineering, becoming proficient in object-oriented programming and multiple languages, then progressed to analytics, search, and artificial intelligence. He designed, built, and operated a data center and numerous networks from scratch. In 1997 and 1998, Mr. Montgomery first conceived the theorem "yield management of knowledge" in his KS lab while operating Global Web Interactive Network (GWIN).

In 2002, Mr. Montgomery founded Initium Capital, an early-stage venture capital firm, which he led until 2009. During his tenure as managing partner for Initium Capital, he reviewed thousands of technologies and business plans, worked with many entrepreneurs, researchers and universities, and served as a panelist at national tech transfer conferences, entrepreneur groups, and universities. In 2004, he attended Thunderbird School of Global Management’s 12-week executive program, where he received certification as a global business leader. He also served as a business plan judge for Thunderbird Master of Business Administration students in 2004 and 2005.
After relocating to Santa Fe, New Mexico, Mr. Montgomery was a frequent visitor and invited guest at the Santa Fe Institute, joining presentations by scientists from around the world to discuss and evaluate emerging research. His articles have been published in books, journals, AltAssets, Wired, Computerworld, The Albuquerque Journal, The Santa Fe New Mexican, and Enterprise Viewpoint.

About the Host of Risk Roundup

Jayshree Pandya (née Bhatt), Ph.D., is a leading expert at the intersection of science, technology, and security, and the Founder and Chief Executive Officer of Risk Group LLC. She has been involved in a wide range of research spanning security of, and from, science and technology domains. Her work currently focuses on understanding how converging technologies and their interconnectivity across cyberspace, aquaspace, geospace, and space (CAGS), as well as individuals and entities across nations (governments, industries, organizations, and academia, or NGIOA), create survival, security, and sustainability risks. This research is pursued to provide strategic security solutions for the future of humanity. From the National Science Foundation to organizations across the United States, Europe, and Asia, Dr. Pandya is an invited speaker on emerging technologies, technology transformation, digital disruption, and strategic security risks. Her work has contributed to more than 100 publications in the areas of science and commerce. She is the author of the books Geopolitics of Cybersecurity and The Global Age.

About Risk Roundup

Risk Group is a Strategic Security Risk Research Platform and Community.

Through the Risk Roundup initiative, Risk Group is on a mission to talk with a billion people, from innovators, scientists, entrepreneurs, futurists, technologists, and policymakers to decision-makers. The aim of this effort is to research, review, rate, and report the strategic security risks facing humanity. This collective intelligence effort is essential to understanding where we need to focus for our collective security and which destructive forces we need to be mindful of.

Risk Roundup is released in both audio (Podcast) and video (Webcast) formats. It is available for subscription on the Risk Group Website, iTunes, Google Play, Stitcher Radio, Android, and Risk Group Professional Social Media.

About Risk Group

Risk Group is a Strategic Security Risk Research Platform and Community. Risk Group's Strategic Security Community and Ecosystem is the first and only cross-disciplinary, collective community made up of top scientists, security professionals, thought leaders, entrepreneurs, philanthropists, policymakers, and academic institutions from across nations, collaborating to research, review, rate, and report strategic security risks to protect the future of humanity.

Copyright Risk Group LLC. All Rights Reserved

Written by Risk Group
Risk Group LLC, a leading strategic security risk research and reporting organization, is a private organization committed to improving the state of risk-resilience through collective participation in, and reporting of, cyber-security, aqua-security, geo-security, and space-security risks in the spirit of global peace through risk management.