
Klaros Insights on Generative AI - Part 1

BY POORANI JEYASEKAR AND DAT TRAN


In today’s competitive landscape, the mantra of ‘doing more with less’ resonates across industries, and financial institutions are no exception. Experts argue that adopting generative AI technologies and large language models (LLMs) is necessary to avoid falling behind, and financial institutions are rethinking their strategies for adopting the technology. Some large institutions are committing millions of dollars to developing generative AI solutions, while smaller institutions are looking to vendors to augment their existing processes. In this two-part series on generative AI, we’ll cover the present landscape of the technology, including its associated risks, potential applications, regulatory frameworks, and recommended implementation strategies.


Part 1


What can this technology do, and what risks does it create?


Financial institutions have long employed machine learning models to predict risk and perform quantitative analysis, which raises the question: what sets generative AI apart? The graphic below breaks down key terms to help draw the distinction.

Generative AI also comes with risks and challenges:

  • The technology is nascent. Novelty is not necessarily a bad thing, but it means generative AI is still evolving, and unforeseen issues or limitations will continue to emerge as the technology becomes more widely adopted.

  • Current regulatory guidance is still catching up. Generative AI exists in a rapidly changing regulatory landscape. Financial institutions will continue to grapple with regulatory uncertainty around compliance standards because existing regulations apply to activities conducted using generative AI but were not written with its unique aspects in mind.

  • Data accuracy and hallucinations. Generative AI can produce inaccurate or misleading output. The risk of hallucinations, where AI generates false information in response to queries, poses significant challenges when the technology is used in critical decision-making contexts.

  • Explainability. Understanding how generative AI arrives at its conclusions can be difficult because of the complex neural networks on which these models are built. This opacity can make it hard to see how and why specific outputs or recommendations are generated. Regulators are paying close attention to the input data and algorithmic models institutions use, with an eye toward preventing consumer harm.

  • Fairness and bias. If the training data used for generative AI models contains biases, the AI may perpetuate them in its output, potentially leading to unfair or discriminatory outcomes. This is a particular concern for lending algorithms.

  • Data security/privacy. Users of publicly available LLM services (e.g., ChatGPT) risk inputting sensitive, confidential, or proprietary data, raising data security and privacy concerns.

  • Ethical issues. There are open questions about accountability and responsibility for the use of this new technology. For example, who (or what) is responsible when AI systems generate undesirable outcomes? Intellectual property and plagiarism are further ethical considerations, especially when generative AI is used for content generation.

  • Third-party risks. Vendors have begun to offer generative AI solutions. Financial institutions considering AI vendors to augment their processes must account for third-party risks, such as whether the vendor has robust technology infrastructure to secure and protect data, strong governance, and policies and procedures to protect consumer rights and information.


Generative AI and the Regulatory Landscape


The graphic below shows generative AI use cases and corresponding regulations and guidelines within the banking and finance world. Because the technology is continuously evolving, the use cases depicted are neither exhaustive nor mutually exclusive. Our aim is to depict the variety of current use cases visually; to do so, we grouped them by general function, though some can fall into multiple categories.

Clearly, abundant opportunities exist for exploring and incorporating generative AI technologies. Today, LLMs enhance existing artificial intelligence capabilities built on predictive machine learning and analytics. For instance, applying LLMs at the beginning of the loan origination workflow can help identify red flags and improve loan application data quality. Earlier machine learning models, such as predictive models, were data-centric and required programming knowledge to use. LLMs, in contrast, are task-centric: they learn efficiently and give users an easy, natural-language interface to data. That task-centric nature also lets LLMs function as powerful search engines, allowing users to query data within the models’ training scope without being technically proficient. For financial institutions stretched thin on resources, pairing LLMs with existing machine learning models can boost operational efficiency and employee productivity, enabling institutions to adapt quickly to changing business needs.
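
To make the loan origination example concrete, here is a minimal sketch of what such an LLM pre-screening step might look like, assuming the OpenAI Python SDK and an API key in the environment. The model name, prompt wording, and the flag_application_issues helper are illustrative assumptions rather than a recommended production design.

```python
# Minimal sketch, for illustration only: a hypothetical pre-screening step
# that asks an LLM to flag data-quality issues in a loan application before
# it enters the origination workflow. The model name, prompt wording, and
# the flag_application_issues helper are assumptions, not a production design.
import json

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment


def flag_application_issues(application: dict) -> str:
    """Ask the model to list potential red flags in an application record."""
    prompt = (
        "Review the following loan application fields and list any "
        "inconsistencies, missing values, or likely data-entry errors. "
        "Do not make or suggest a credit decision.\n\n"
        + json.dumps(application, indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example record with two issues a reviewer (or the model) should catch.
issues = flag_application_issues({
    "applicant_name": "J. Smith",
    "stated_annual_income": "1,20,000",     # malformed figure
    "employment_start_date": "2031-06-01",  # date in the future
})
print(issues)
```

In practice, output like this would route to a human reviewer rather than drive an automated decision, consistent with the explainability and fairness concerns discussed above.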


Adopting generative AI in the financial landscape signifies a pivotal shift towards innovation and efficiency. As this technology continues to evolve, financial institutions will need to embrace generative AI cautiously and align with established regulatory frameworks. Stay tuned for part two, where we dive into what you can do if you're a risk management professional considering adopting generative AI.



