Regulating Artificial Intelligence

Even if we define ethical and privacy guidelines, the security risks of AI remain mostly unknown. Unless we make those unknowns known and understand their origin, drafting effective AI regulations is impossible.

Introduction

This is an age of artificial intelligence (AI) driven automation and autonomous machines. The increasing ubiquity and rapidly expanding potential of self-improving, self-replicating, autonomous intelligent machines have spurred a massive automation-driven transformation of human ecosystems in cyberspace, geospace, and space (CGS). As seen across nations, there is already a growing trend toward entrusting complex decision processes to these rapidly evolving AI systems. From granting parole to diagnosing diseases, college admissions to job interviews, managing trades to granting credit, autonomous vehicles to autonomous weapons, rapidly evolving AI systems are increasingly being adopted by individuals and entities across nations: their governments, industries, organizations, and academia (NGIOA).

Individually and collectively, the promise and perils of these evolving AI systems are raising serious concerns about accuracy, fairness, transparency, trust, ethics, privacy, and security, and about the future of humanity, prompting calls for regulation of artificial intelligence design, development, and deployment.

While fear of a disruptive technology and its associated changes has always given rise to calls for governments to regulate new technologies responsibly, regulating a technology like artificial intelligence is an entirely different kind of challenge. This is because, while AI can be transparent, transformative, democratized, and easily distributed, it also touches every sector of the global economy and can even put the security of the entire future of humanity at risk. There is no doubt that artificial intelligence has the potential to be misused or to behave in unpredictable and harmful ways toward humanity, so much so that the entire human civilization could be at risk.

While there has been some much-needed focus on the role of ethics, privacy, and morals in this debate, security, which is equally significant, is often completely ignored. That brings us to an important question: are ethics and privacy guidelines enough to regulate AI? We need not only to make AI transparent, accountable, and fair, but also to focus on its security risks.

Security Risks

As seen across nations, security risks are largely ignored in the AI regulation debate. It needs to be understood that any AI system, be it a robot, a program running on a single computer, an application running on networked computers, or any other set of components that hosts an AI, carries security risks with it.

So, what are these security risks and vulnerabilities? It starts with the initial design and development. If the initial design and development allow or encourage the AI to alter its objectives based on its exposure and learning, those alterations will likely occur following the dictates of the initial design. One day the AI may become self-improving and start changing its own code; at some point, it may change its hardware as well and could self-replicate. When we evaluate all these possible scenarios, at some point humans will likely lose control of the code or of any instructions that were embedded in it. That brings us to an important question: how will we regulate AI when humans lose control of its development and deployment cycle?

As we evaluate the security risks originating from disruptive and dangerous technologies over the years, each technology required substantial infrastructure investments. That made the regulatory process relatively simple and straightforward: follow the massive investments to know who is building what. However, the information age and technologies like artificial intelligence have fundamentally shaken the foundation of these regulatory principles and controls. This is mainly because determining the who, where, and what of artificial intelligence security risks is impossible: anyone from anywhere with a reasonably current personal computer (or even a smartphone or any smart device) and an internet connection can now contribute to artificial intelligence projects and initiatives. Moreover, the security vulnerabilities of cyberspace translate to any AI system, as both its software and hardware are vulnerable to security breaches.

The sheer number of individuals and entities across nations that may participate in the design, development, and deployment of any AI system's components will also make it challenging to assign responsibility and accountability for the entire system if anything goes wrong.

Now, with many artificial intelligence development projects going open source and with the rise in the number of open-source machine learning libraries, anyone from anywhere can modify those libraries or their code, and there is no timely way to know who made the changes or what their security impact would be. So the question is: when individuals and entities can participate in a collaborative AI project from anywhere in the world, how can security risks be identified and proactively managed from a regulatory perspective?
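To ground the tracking problem raised above, here is a minimal, hypothetical sketch of one narrow provenance check: recording cryptographic hashes of a reviewed copy of a library and later flagging any files that no longer match. The directory layout, manifest format, and function names below are illustrative assumptions, not an existing tool or anything proposed in this article, and note that such a check only reveals that something changed, not who changed it or what the security impact is.

```python
# Hypothetical sketch: flag unreviewed local modifications to a vendored
# machine learning library by comparing file hashes against a saved manifest.
import hashlib
import json
from pathlib import Path


def hash_file(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def build_manifest(library_dir):
    """Record a hash for every Python source file under the library directory."""
    return {
        str(p.relative_to(library_dir)): hash_file(p)
        for p in sorted(library_dir.rglob("*.py"))
    }


def find_modified_files(library_dir, manifest_path):
    """List files whose current hash differs from the recorded manifest."""
    recorded = json.loads(manifest_path.read_text())
    current = build_manifest(library_dir)
    return [name for name, digest in current.items() if recorded.get(name) != digest]


if __name__ == "__main__":
    lib = Path("vendored_ml_lib")      # hypothetical vendored library copy
    manifest = Path("manifest.json")   # hashes recorded at review time
    print("Changed since last review:", find_modified_files(lib, manifest) or "none")
```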

There is also a common belief that developing AI systems powerful enough to pose existential threats to humanity would require enormous computational power and would therefore be easy to track. However, with the rise of neuromorphic chips, computational power will soon be a non-issue, taking away the ability to track heavy use of computing power.

There is also the issue of who evaluates security risks. Irrespective of the stage of design, development, or deployment of artificial intelligence, do the researchers, designers, and developers have the necessary expertise to make broad security risk assessments? That brings us to an important question: what kind of expertise is required to evaluate the security risks of algorithms or of any AI system? Would someone qualify to assess these security risks purely based on a background in computer science, cyber-security, or hardware, or do we need someone with an entirely different kind of skill set?

Acknowledging this emerging reality, Risk Group initiated the much-needed discussion on Regulating Artificial Intelligence with Dr. Subhajit Basu on Risk Roundup.

Disclosure: Risk Group LLC is my company

Risk Group discusses Regulating Artificial Intelligence with Dr. Subhajit Basu, Associate Professor in Information Technology Law (Cyberlaw) at the School of Law, University of Leeds in the UK, Chair of BILETA, and Editor of IRLCT.

Complex Challenges in Regulating Artificial Intelligence

Even if we could agree on what intelligence, artificial intelligence, or consciousness is, from a regulatory standpoint some of the most problematic features of regulating AI are:

  • Lack of nomenclature and identity for algorithms
  • The security risks emerging from the AI code itself
  • The self-improving nature of the software and hardware
  • The interconnected and integrated security risks arising from the democratization and decentralization of AI research and development (R&D)

So, to begin with, how can we come up with an identity and nomenclature system for algorithms? How can nations effectively regulate the democratized development of AI? Moreover, how can nations effectively regulate AI development when the development work can be globally distributed, and nations cannot agree on the global standards for regulation?

This is very important because the individuals working on any single component of an AI system might be located in different nations. Moreover, most AI development is happening in private entities, where the entire development cycle is proprietary and kept secret.
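To make the idea of an identity and nomenclature system for algorithms slightly more concrete, here is a minimal, hypothetical sketch: it derives a stable, content-based identifier from the SHA-256 hash of an algorithm's source artifact and attaches a small registration record. The scheme, field names, and file names are illustrative assumptions, not an existing standard or a proposal made in the discussion above.

```python
# Hypothetical sketch: a content-derived identity for an algorithm artifact.
# The identifier scheme and metadata fields are illustrative assumptions only.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def algorithm_identity(artifact_path, registrant, version):
    """Derive a stable identifier and registration record for a code artifact."""
    digest = hashlib.sha256(artifact_path.read_bytes()).hexdigest()
    return {
        # The same bytes always yield the same identifier, regardless of who submits them.
        "algorithm_id": f"algo-sha256-{digest}",
        "registrant": registrant,   # who registered the artifact
        "version": version,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    record = algorithm_identity(
        Path("credit_scoring_model.py"),  # hypothetical artifact
        registrant="Example Labs",
        version="1.2.0",
    )
    print(json.dumps(record, indent=2))
```

Any change to the artifact's bytes would produce a new identifier, so a scheme like this names a specific version of an algorithm rather than a project, which is roughly what tracing responsibility would require.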

Evaluating the Regulatory Frameworks

Regulatory frameworks are traditionally made possible by legal scholarship, yet the traditional methods of regulation, such as research and development oversight and product licensing, seem particularly unsuitable for managing the security risks associated with artificial intelligence and intelligent autonomous machines.

As seen across nations, many AI guidelines are emerging. There is also a developing framework proposal for AI regulation based on differential tort liability. The centerpiece of this proposed framework seems to be an AI certification process, under which manufacturers and operators of certified AI systems would enjoy limited tort liability while uncertified AI systems would face strict liability. It is essential to evaluate this proposed approach of legal liability from a security perspective: if an AI system harms a person, who will be held liable?

Traditionally, for most technologies, liability falls on the manufacturer, but with AI development, how will it be known who designed the algorithm? It could be anyone from any part of the world. Moreover, as noted above, algorithms have no name or identity. Furthermore, when intelligent machines become autonomous, it becomes much more complex for stakeholders to foresee emerging security risks proactively. That brings us to an important question: under all these complex scenarios, will a tort liability focus for regulating artificial intelligence ever work?

Tort-based liability systems will be of no use if, for example, an autonomous system decides that humans are now enemies. Whether systems are certified or not will make no difference to whether we are managing the security risks emerging from them in time. When the future of humanity is at risk, what difference does it make whether there is a way for humans to get compensation? And who will pay compensation when autonomous systems cause harm? The machines themselves?

What Next?

Perhaps it is time to begin a discussion on why the security risks emerging from technologies like artificial intelligence need to be at the heart of any regulation or governance framework being defined and developed. Unless we identify the security risks and understand their origin, it is next to impossible to regulate technologies like AI proactively and responsibly.

Does this mean we are doomed and nothing can be done? Of course not! Let us put our collective intelligence to work and begin a broader conversation across nations on how to proactively identify the security risks emerging from artificial intelligence systems and how to regulate them adequately for the future of humanity. While we may not know everything, each one of us knows something, and together we can come up with an effective way to regulate AI. The time is now to give an identity to each algorithm emerging from across nations! The time is now to define a security risk governance framework for artificial intelligence!

About the Author

Jayshree Pandya (née Bhatt), Founder and CEO of Risk Group LLC, is a scientist, a visionary, an expert in disruptive technologies and a globally recognized thought leader and influencer. She is actively engaged in driving the global discussions on existing and emerging technologies, technology transformation, and nation preparedness.

Written by Risk Group
Risk Group LLC, a leading strategic security risk research and reporting organization, is a private organization committed to improving the state of risk-resilience through collective participation, and reporting of cyber-security, aqua-security, geo-security, and space-security risks in the spirit of global peace through risk management.