The risks faced by business decision makers when buying AI

Terence Tse* and Sardor Karimov**

Companies have long declared an interest in using artificial intelligence (AI) to stay ahead of their competitors. In the past few years, media and academics have illustrated how organisations, especially the technology giants, have used advanced AI technologies to create new advantages and opportunities. As a consequence, we are all left with the impression that countless companies have already adopted highly advanced AI throughout their operations.

The reality, as it turns out, is very different. We are often surprised by – if not disappointed at – how slowly companies adopt AI and how cautious they are when taking it up. There is certainly a wide range of factors behind this slow progress: talent shortages, budget cuts, inadequate IT support and an insufficient business case are some of the most commonly cited reasons. Another barrier we have encountered frequently is risk – in particular, the risks faced by decision makers.

Usual risk considerations when deploying AI

The risks that come with business deployment of AI are well known and have been extensively covered in past research and studies. They typically involve how AI models can become biased and discriminatory, how important data integrity is, how human jobs could be taken over by machines, and so on.

In our view, bias-related risk does not really matter that much, even for financial services companies. Why? Many companies are merely using AI in back-office operations such as document processing to pursue cost savings, so bias risk need not be a concern for them. By contrast, it is undeniable that a lack of data integrity can pose significant risk to a business. Arguably, one of the most mentioned risks is that technologies automate human tasks, leading to job eliminations. In fact, this claim is largely overblown: there is increasing evidence that business advantages are conferred not by technologies alone, but by combining technologies with humans in the so-called “expert-in-the-loop” approach.

The perceived risks that stop AI deployments in business

These are all valid concerns. But a case can be made for one type of risk that is often overlooked yet deserves much more attention: the perceived risks of the decision makers. Indeed, such risks are perhaps best reflected in three common questions executives ask when facing AI purchasing decisions:

Who is ultimately responsible if something goes wrong?

It is not uncommon for large companies to have technical assessment teams that determine the soundness of a given technology as well as its fit with the company’s needs and organisational setup. Interestingly, at the point of purchase, what often matters most is not how sound the technology under consideration is; it is who within the assessment team would be willing to stand up and vouch for the purchase. On many occasions, we have seen consensus on technological viability among different team members, and yet none of them would want to be the ultimate decision maker. The risk perceived is not only very real but also understandable: if down the road something goes wrong, or the technology does not work out as intended or expected, fingers would most likely be pointed at the person who “bought” the technology. In the eyes of the assessment team members, there is simply too much future uncertainty surrounding how the technology would behave once implemented and integrated into the business. Would the AI model work once live data is fed into the system? Would the IT infrastructure be able to handle the AI solution? Would the in-house engineering and data science teams be able to manage and maintain the technology in the long run? Too many questions with too few confident answers.

Do you know what we are buying exactly?

In many companies, it is the business leaders who ultimately make capital expenditure decisions. Inevitably, when AI is involved, they have to depend entirely on their own data science teams to tackle the technological aspects. They would compel the scientists to understand the technology under consideration inside out and to know exactly what they are buying. Naturally, vendors can and should help by being completely transparent about what they are selling, making it easier for the data scientists to see through the technology. However, this is only a partial solution, because a great deal of perceived longer-term uncertainty remains for the data and leadership teams: whether the technology vendor would still exist in five years’ time to provide the necessary support, how scalable the solution about to be purchased is given the state and nature of the existing IT systems, who is going to do what to maintain the onboarded technology going forward, and so on. The result: it is often easier to simply say no to the purchase and avoid all of these risks.

Can we afford to make a wrong decision?

Many organisations, especially public institutions, have no alternative but to get the technology right the first time. Any purchase decision made now must become a success in the near future. This is because failure to get the technology to deliver what was promised could be construed as, at best, managerial incompetence and, at worst, misuse of public funds. Under public scrutiny, any outcome short of the anticipated results being fully met could lead to reputational, if not legal, damage. The stakes are therefore often too high for even the most potentially beneficial technologies.

What is in it for me?

With so much uncertainty around the initial and continuing deployment of AI solutions, business and technology executives alike face considerable perceived risk when making purchase decisions. Effectively, they are taking calculated risks in which much of the upside accrues to their companies while the downside is borne by them individually. No wonder the deployment of AI in business is progressing at such a slow pace.

*Terence Tse is a professor of finance at Hult International Business School. He is also a co-founder and Executive Director of Nexus FrontierTech, a scale-up that helps companies accelerate decision-making with artificial intelligence. He is the author of three books on three topics: megatrends, AI and corporate finance. His fourth is to be published by MIT Press in 2023. Terence is also a sought-after speaker internationally and has run workshops for more than 50 companies on digital transformation, trends, and financial management. He has also appeared in a wide range of media channels around the world.

**Sardor Karimov is a Business Development Manager at Nexus FrontierTech. He has worked on over fifteen AI use cases with business leaders in the financial services sector globally, alongside Nexus’s team of AI scientists and ML engineers. He holds a BSc in Economics and Finance from the University of York.
