The use of artificial intelligence (AI) and machine learning in front-office, middle-office and back-office activities at banks is growing slowly but steadily. The major areas of AI adoption include customer service (virtual assistants, chatbots, etc.), fraud detection, risk management, predictive analytics and automation. As in other industries, AI, when the right product is implemented in the right manner, can increase the efficiency of banking operations and reduce their cost, with some estimates projecting savings of more than USD 1 trillion by 2030. Of course, there are problems related to disparate data sets and data privacy that curtail the implementation of these technologies. However, AI is set to become the new normal at banks, as existing workflows will become unsustainable under the ever-increasing scale of operations. Most banks now operate round the clock owing to the emergence of online and mobile banking. In addition, financial inclusion initiatives across the globe will drive a huge rise in the volume of banking operations. Banks will therefore need rapid processing capabilities to stay relevant and to satisfy stakeholders, including customers and regulators.
Though the stage is somewhat set for AI at banks with the advent of mobile technology, data availability and an abundance of open-source APIs, there are certain systemic problems that concern banks. Banks worry whether their regulators will accept technologies that are relatively new and differ greatly from existing ones. There are also risks of bias in machine learning models stemming from poor data quality and accuracy. Black box AI algorithms are another concern that can hinder the adoption of AI in banking. Here, we explain the concept of black box AI, the problems it poses and how banks can overcome the challenge.
What is black box AI?
Black box AI refers to a problem in machine learning where even the designers of an algorithm cannot explain why and how it arrived at a specific decision. The fundamental problem here is: if we cannot figure out how an AI system reaches its decisions, how can we trust it? This trust issue contributed to the failure of IBM Watson (especially Watson for Oncology), one of the best-known AI innovations in recent times. The main problem with a black box model is that it makes it impossible to identify possible biases in the machine learning algorithm. Biases can creep in through the prejudices of designers or through faulty training data, and they lead to unfair and incorrect decisions. Bias can also arise when model developers fail to incorporate the proper business context needed to produce legitimate outputs.
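To make this concrete, consider a minimal sketch in Python. All data, feature names and thresholds below are hypothetical: a random forest trained on biased historical approvals reproduces the bias, yet a single prediction comes with no rationale beyond a class label.

```python
# Minimal sketch of a "black box" in practice: a random forest predicts loan
# approvals but offers no per-decision rationale. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicant features: income, debt ratio, and a proxy feature
# (e.g. a postcode group) spuriously correlated with past outcomes.
income = rng.normal(50_000, 15_000, n)
debt_ratio = rng.uniform(0, 1, n)
postcode_group = rng.integers(0, 2, n)

# Biased historical labels: approvals partly driven by the proxy feature,
# mimicking prejudiced past decisions baked into the training data.
approved = ((income > 45_000) & (debt_ratio < 0.5)) | (postcode_group == 1)

X = np.column_stack([income, debt_ratio, postcode_group])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, approved)

# A single prediction is just the vote of hundreds of trees -- no reason given.
applicant = np.array([[48_000.0, 0.4, 0.0]])
print(model.predict(applicant))        # e.g. [False] -- but *why*?
print(model.feature_importances_)      # global importances only, not a
                                       # per-decision explanation
```

Global feature importances hint at what the model relies on overall, but they cannot tell an auditor why this particular applicant was declined, nor reveal that the proxy feature is doing the damage.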
The same problem is relevant in the banking industry. If regulators ask how an AI system reached a conclusion on a banking problem, banks should be able to explain it. For example, if an AI solution for anti-money laundering compliance flags anomalous behaviour or suspicious activity in a transaction, the bank using the solution should be able to explain why the solution arrived at that decision. Such an audit is not possible with a black box AI model. The same concern was expressed by Federal Reserve Gov. Lael Brainard in a November 2018 speech. “AI can introduce additional complexity because many AI tools and models develop analysis, arrive at conclusions, or recommend decisions that may be hard to explain. For instance, some AI approaches are able to identify patterns that were previously unidentified and are intuitively quite hard to grasp. Depending on what algorithms are used, it is possible that no one, including the algorithm's creators, can easily explain why the model generated the results that it did,” she said.
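By contrast, an auditable AML model can attach per-alert “reason codes”. The sketch below is a generic illustration, not Tookitaki's actual method; all feature names and the planted pattern are invented. It uses a linear model whose coefficient-times-feature contributions can be reported for each flagged transaction.

```python
# Hedged sketch of per-alert "reason codes" for AML monitoring: a linear
# model whose signed feature contributions can be shown to an auditor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = ["txn_amount_zscore", "txns_last_24h", "new_counterparty", "cross_border"]

# Synthetic transactions with a simple planted suspicious pattern.
X = rng.normal(size=(10_000, 4))
suspicious = (X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 3]
              + rng.normal(0, 0.5, 10_000)) > 2

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), suspicious)

def explain_alert(txn):
    """Return each feature's signed contribution to the alert score."""
    z = scaler.transform(txn.reshape(1, -1))[0]
    contributions = model.coef_[0] * z
    return sorted(zip(features, contributions), key=lambda c: -abs(c[1]))

# For a flagged transaction, print the drivers of the alert in order.
alert = X[suspicious][0]
for name, contrib in explain_alert(alert):
    print(f"{name:22s} {contrib:+.2f}")
```

The point is not the specific model but the audit trail: every alert carries a ranked, human-readable list of contributing factors that can be put in front of a regulator.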
Not just AI, banks need explainable AI
Explainable AI (also known as interpretable or transparent AI) encompasses techniques in artificial intelligence that make machine learning models trustworthy and easily understandable by humans. Explainability has emerged as a critical requirement for AI in many cases and has become a research area in its own right. As the Defense Advanced Research Projects Agency (DARPA) under the US Department of Defense puts it: “New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.”
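One family of such techniques is intrinsically interpretable models. As a hedged example using scikit-learn, a shallow decision tree's entire learned logic can be printed verbatim for a reviewer, the opposite of a black box (the dataset here is synthetic).

```python
# Minimal sketch of an intrinsically interpretable model: a shallow decision
# tree whose learned rules can be exported as plain text for review.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a banking dataset.
X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole decision logic is human-readable, rule by rule.
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(4)]))
```

Such rules trade some predictive power for transparency; the practical choice for a bank is often a combination of interpretable models and post-hoc explanation tools layered on more complex ones.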
In the banking industry, which is subject to strict regulatory oversight across the globe, an incorrect decision can cost an institution billions of dollars. If a bank wants to employ AI, it must subject the solution to rigorous, dynamic model risk management and validation, and ensure that the solution offers the transparency required for its use case. As an AI solutions provider, Tookitaki has always considered explainability a must-have feature in its offerings. Its technology demystifies modern machine learning and gives clients the knowledge and tools to outperform the competition. Tookitaki solutions feature a ‘Glass box’ audit module that delivers algorithmic transparency by providing thorough explanations for predictions.
There is no doubt that AI can bring revolutionary changes to the banking sector. For that to happen, banks must exercise the oversight needed to prevent their AI models from becoming black boxes. At present, AI use cases sit mostly in low-risk banking environments, where human beings still take the final decision and machines merely provide valuable assistance. In future, banks will be under pressure to remove some of this human oversight to cut costs amid the increasing scale of operations. At that point, banks cannot run on black box models that breed inefficiency and risk. They need to ensure that their AI solutions are trustworthy and have the transparency required to satisfy internal and external audits. In short, a bright future for AI in banking can be assured only through explainable AI.