A battle for decision-making authority is emerging between humans and machines. As algorithms take on a growing role in decision-making, what are the implications for humanity of relinquishing decision-making control to intelligent machines?
Introduction
In pursuit of automation-driven efficiencies, rapidly evolving artificial intelligence (AI) tools and techniques (such as neural networks, machine learning, predictive analytics, speech recognition and natural language processing) are now routinely used across nations, in their governments, industries, organizations and academia (NGIOA), for navigation, translation, behavior modeling, robotic control, risk management, security, decision-making and many other applications.
As AI becomes democratized, these evolving intelligent algorithms are rapidly becoming prevalent in most, if not all, aspects of human and machine decision-making. While decision utilities such as intelligent algorithms have been in use for many years, concerns are rising about the general lack of algorithmic understanding, poor usage practices, the bias rapidly penetrating automated decisions, and the lack of transparency and accountability. As a result, ensuring integrity, transparency and trust in algorithmic decision-making is becoming a complex challenge for the creators of algorithms, with enormous implications for the future of society.
Human versus Machine Decision-Making Processes
Irrespective of cyberspace, geospace or space (CGS), technology revolutions are driven not just by accidental discovery but also by societal needs. The question we all, individually and collectively, need to evaluate first and foremost is whether there really is a need for decision-making algorithms and, if so, where and why. Furthermore, what gaps in the human decision-making process can decision-making algorithms fill?
As seen, artificial intelligence tools and techniques are increasingly expanding and enriching decision support: they coordinate the timely and efficient delivery of diverse data sources, analyze evolving data and trends, provide defined forecasts, develop data consistency, quantify the uncertainty of data variables, anticipate a human or machine user's data needs, present information in the most appropriate form, and suggest courses of action based on the intelligence they gather. Understandably, this is being welcomed: in a fast-changing digital environment, it is becoming difficult for human decision-makers to keep up, analyze the mountains of growing data in front of them and make informed, intelligent decisions.
However, even an algorithmic decision-making process faces complex challenges in reaching an informed decision. For example, it is difficult to know whether decision-making algorithms can make effective decisions with the current computing and data-analytics infrastructure and processing capability. While it seems very likely that artificial intelligence will soon become universal in most, if not all, aspects of decision-making, it will be interesting to see how the emerging competition between human and AI decision-making plays out.
Algorithmic Engineering Process and Penetration of Bias
While there are growing concerns about machine-learning decision-making models, AI is being woven into the very fabric of human society and into everything individuals and entities do across nations, in their governments, industries, organizations and academia (NGIOA), in cyberspace, geospace and space (CGS).
Moreover, it needs to be understood that a rapidly evolving machine-learning model is not a static piece of code: we continually feed it data from diverse sources and regularly train, re-train and fine-tune it to shape how it makes predictions. In each of these data-journey steps, humans currently play a significant and influential role. As a result, while machine-learning models are becoming almost like living, breathing things with a growing, dynamic CGS data ecosystem around them, the very involvement of humans brings with it the same complex human bias. Since we are trying to re-define and re-design systems that bring us more trust and transparency, there is a clear need to promote equality, transparency and accountability in algorithm design and development for decision-making, and to ensure that data transparency, training, review and remediation are considered throughout the entire algorithmic engineering process. The sketch below marks where those human touchpoints sit in a typical pipeline.
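To make those touchpoints concrete, here is a minimal sketch, with hypothetical data and names rather than any real deployed system, of a typical train-and-retrain loop. The comments flag each point where a human choice, and therefore a potential human bias, enters the model.

```python
# A minimal sketch (not any real system) of a train-and-retrain loop,
# annotated at the points where human choices, and therefore potential
# human bias, enter the pipeline. Data and names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Human choice 1: which features are collected and kept.
X = rng.normal(size=(500, 4))                      # hypothetical feature matrix

# Human choice 2: how outcomes are labeled; labelers bring their own bias.
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Human choice 3: the decision threshold that turns a score into a decision.
threshold = 0.5
decisions = model.predict_proba(X)[:, 1] > threshold

# Re-training on newly labeled data repeats choices 1-3, so any bias in
# labeling or thresholding is fed back into the next model version.
```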
According to ProPublica, an investigative journalism organization, a computer program used by US courts across the nation has been reported to be biased against black prisoners. The program, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), mistakenly flagged black defendants as likely to re-offend at almost twice the rate of white defendants (45% versus 24%). The program likely factored the higher arrest rates for black people into its predictions, but it could not escape the same racial biases that contributed to those higher arrest rates in the first place. Bias has also been reported in granting credit to home buyers, potentially even violating the Fair Housing Act. Default rates may be higher in some neighborhoods, but an algorithm using this information to make binary calls risks heading into "redlining" territory. Examples abound, with plenty of cases showing AI and technology to be both sexist and racist. Let's not forget Google Photos' image-recognition algorithm labeling photos of black people as "gorillas."
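For readers who want to see what such a disparity check looks like in practice, here is a minimal sketch of the per-group false-positive-rate comparison that ProPublica's analysis popularized. The arrays below are illustrative, not the COMPAS data.

```python
# A minimal sketch of a per-group false-positive-rate comparison: the share
# of people flagged as likely to re-offend who in fact did not. The data
# here are invented for illustration, not drawn from COMPAS.
import numpy as np

def false_positive_rate(flagged: np.ndarray, reoffended: np.ndarray) -> float:
    """Share of people who did NOT re-offend but were flagged high risk."""
    negatives = ~reoffended
    return (flagged & negatives).sum() / negatives.sum()

# Hypothetical per-group outcomes: True = flagged high risk / re-offended.
flagged_a    = np.array([1, 1, 0, 1, 0, 1, 0, 0], dtype=bool)
reoffended_a = np.array([1, 0, 0, 0, 0, 1, 0, 0], dtype=bool)
flagged_b    = np.array([0, 1, 0, 0, 0, 1, 0, 0], dtype=bool)
reoffended_b = np.array([0, 1, 0, 0, 1, 1, 0, 0], dtype=bool)

print("FPR group A:", false_positive_rate(flagged_a, reoffended_a))  # ~0.33
print("FPR group B:", false_positive_rate(flagged_b, reoffended_b))  # 0.0
# A large gap between these rates is the kind of disparity reported
# as 45% versus 24% in the COMPAS case.
```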
While decision-making algorithms are not inherently biased, and algorithmic decision-making depends on many variables, including how the software is designed, developed and deployed and the quality, integrity and representativeness of the underlying data sources, there is a need for a new approach to defining and designing decision-making algorithms. We perhaps need adaptive computing that integrates intelligence gathering into its very fabric and does not rely on humans training the algorithms in how to make decisions.
It is essential to evaluate what the implications will be if bias penetrates decision-making algorithms, which brings us to a further assessment: can data protection safeguards be built into algorithms from the earliest stages of development to prevent bias from penetrating them? Addressing this is essential, because the very foundations of the systems being re-defined and re-designed depend on it.
Now, since most deployed algorithms cannot be interrogated, and there are no active rules or regulations around decision-making algorithms that focus on algorithmic accountability, how to remove bias remains a complex challenge facing society.
Acknowledging this emerging reality, Risk Group initiated the much-needed discussion on Algorithmic Decision-Making with Prof. (Dr.) Steve Omohundro, President at Possibility Research, based in the United States.
Disclosure: Risk Group LLC is my company
Perhaps the key to making decision-making algorithms work for everyone on a fair and balanced playing field is to build in accountability, responsibility, neutrality and fair outcomes from the very beginning, right in the code. If not, without question, all the effort being put into re-defining and re-designing systems in cyberspace, geospace and space will bring no real value to society overall.
Data: Nature, Sources, Efficiency
The growing availability, volume and accumulation of diverse sources of data mean it can be overwhelming for any human decision-maker to make decisions effectively, whether those decisions are strategic or tactical. It is therefore essential to evaluate what role dynamically growing data plays and how algorithms are structured to take that growing data input into consideration. Moreover, it is necessary to assess whether, irrespective of the nature or source of the data, decision-making algorithms would give consistent decisions across different scenarios, models, tools and techniques, and how this consistency would be tested; a simple sketch of one such test follows.
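One hedged illustration of such a test, using a hypothetical model and invented data: present the model with slightly perturbed versions of the same input many times and measure how often the decision stays the same.

```python
# A minimal sketch of a decision-consistency test: perturb an input slightly
# and check whether the decision flips. Model and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # hypothetical training data
y = (X.sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def decision(x: np.ndarray) -> bool:
    return model.predict_proba(x.reshape(1, -1))[0, 1] > 0.5

def consistency_rate(x: np.ndarray, noise: float = 0.05, trials: int = 200) -> float:
    """Fraction of small perturbations of x that leave the decision unchanged."""
    base = decision(x)
    return np.mean([decision(x + rng.normal(scale=noise, size=x.shape)) == base
                    for _ in range(trials)])

print(consistency_rate(np.array([0.2, -0.1, 0.05])))  # near 1.0 = stable decision
```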
So, while efficiency seems to be at the core of many emerging automation applications, the transparency and integrity of the data on which algorithmic decisions are made will be critical to ensuring accountability. That brings us to another question: is there a way an algorithm can assess its data sources' credibility, authenticity and transparency, and rate the integrity of the algorithmic decision itself? An algorithmic disclaimer, perhaps?
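As a thought experiment only, such a disclaimer might look something like the sketch below, in which every decision carries provenance metadata about its data sources. All field and class names here are hypothetical, not an existing standard.

```python
# A minimal sketch of a hypothetical "algorithmic disclaimer": each decision
# is returned together with provenance metadata about its data sources.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str
    verified: bool          # was the source's authenticity checked?
    last_audited: str       # date of the last integrity review

@dataclass
class Decision:
    outcome: str
    sources: list = field(default_factory=list)

    def disclaimer(self) -> str:
        unverified = [s.name for s in self.sources if not s.verified]
        note = ("all sources verified" if not unverified
                else f"unverified sources: {', '.join(unverified)}")
        return f"Decision '{self.outcome}' based on {len(self.sources)} sources; {note}."

d = Decision("approve", [DataSource("credit_bureau", True, "2018-01-10"),
                         DataSource("social_media_scrape", False, "n/a")])
print(d.disclaimer())
```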
The democratization of computing infrastructure allows anyone to build an algorithm any way they want. However, when it comes to decision-making applications for systems at every level, global, national or local (whether government agencies, banks, credit agencies, courts, prisons, educational institutions and so on), there is a need for a global standard of best practices to define and determine which algorithms can be used for equality, fairness and objectivity. That raises another question: who will test the different versions of algorithms and rate them for public use? One candidate metric such testing could publish is sketched below.
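As one hypothetical example of a metric that such a testing and rating process could report, here is a sketch of the "80% rule" ratio of selection rates between two groups, a common demographic-parity check; the data and group labels are invented.

```python
# A minimal sketch of the "80% rule" (disparate impact ratio): compare the
# selection rates of two groups. Decisions and groups here are invented.
import numpy as np

def disparate_impact_ratio(dec_group_a: np.ndarray, dec_group_b: np.ndarray) -> float:
    """Ratio of the lower selection rate to the higher; below 0.8 is a red flag."""
    rates = sorted([dec_group_a.mean(), dec_group_b.mean()])
    return rates[0] / rates[1]

dec_a = np.array([1, 0, 1, 1, 0, 1], dtype=float)   # 4 of 6 selected
dec_b = np.array([0, 0, 1, 0, 0, 1], dtype=float)   # 2 of 6 selected
print(disparate_impact_ratio(dec_a, dec_b))          # 0.5, would fail the 80% rule
```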
The question now is not just whether humans or machines will make decisions; it is whether intelligent algorithms replacing humans in decision-making will bring with them the same biases of race, religion, class, gender and ideology that are harmful to society. So, the question remains: what do we do about it? Moreover, will we ever be able to build genuinely objective and transparent machines?
What Next?
The question today, for each of us to evaluate individually and collectively, is whether intelligent algorithms are and will remain an aid to the human decision-making process or whether they will become the ultimate decision-makers. And if artificial intelligence does become the decision-maker, what will be the implications of relinquishing decision-making control to intelligent machines? The very use of automated, AI-based decision-making techniques raises challenges for humanity as a whole.
About the Author
Jayshree Pandya (née Bhatt), Founder and CEO of Risk Group LLC, is a scientist, a visionary, an expert in disruptive technologies and a globally recognized thought leader and influencer. She is actively engaged in driving the global discussions on existing and emerging technologies, technology transformation and nation preparedness.