BY LAUREN SARTWELL AND STEPHANIE WHITE BOOKER
In part one of this blog series, we identified some of the hidden use cases of Artificial Intelligence (AI) and Machine Learning (ML) in financial services so providers could begin to take stock of all of the models being used in their operations. Once you’ve identified the areas where your team relies on AI/ML, it’s time to begin the next crucially important step of assessing risk.
Why it’s important to find AI/ML use now
At the end of the day, you can't manage risk you don't know exists, which is why it's critical to identify every AI/ML use case in your institution and engage risk and compliance professionals as early as possible. These use cases might include models built in-house, but AI/ML may also be core to functions provided by third parties at any point in your product life cycle. Turning a blind eye can change the risk profile of your institution and leave you exposed on several fronts:
Regulatory: Several regulatory agencies, including the Consumer Financial Protection Bureau (CFPB), have been evaluating how existing laws, regulations, and guidance should be updated to reflect the increased use of AI/ML in consumer finance. In May 2022, the CFPB announced that the adverse action requirement to provide consumers a specific reason for a credit decline applies equally to lenders using algorithmic models to make credit decisions, stating that "companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions." The CFPB's recent update to the UDAAP examination manual to include discrimination in non-lending products also points to increased scrutiny of AI/ML. Given the CFPB's current active enforcement stance, it's safe to say regulators will be looking more closely at all uses of AI/ML in your institution.
Reputational: A public enforcement action could come with significant reputational damage for your organization, especially if the allegations involve possible discrimination in your use of AI/ML.
Operational: When an automated process breaks, it almost always causes a downstream impact. If your chatbot breaks, are you prepared for more contacts in your call center? If you aren't, the result may be longer wait times and a degraded customer experience.
What to do when you find it
While AI/ML models are clearly beneficial operationally, they also have the potential to increase risk, especially in light of increased regulatory scrutiny and focus on fair lending and discrimination at the CFPB and other state and federal regulators. Because these models are trained on historical data that may unintentionally reflect discriminatory patterns or biases, their outputs can perpetuate those same problems. This is precisely why model transparency is critical with any use of AI/ML.
Once the use is discovered, financial service leaders should do the following to mitigate risk:
Vendor management: You are responsible for the models used by your third-party service providers. Be sure model risk management is part of your initial and ongoing due diligence.
Establish guardrails: Once the AI/ML uses are identified in your institution, establish a set of guardrails to regulate their use. Leadership should align and clearly communicate what uses are okay, what controls should be in place to monitor those uses, and if there are any bright lines your institution is not willing to cross with AI/ML. Be sure compliance has an early seat at the table when considering new tools, products, or partnerships that may involve AI/ML.
Model risk management: Develop and document a review and approval process for any new models or changes to existing models. For existing models, conduct periodic testing of outcomes to assess for possible discriminatory model outputs. Finally, when models are updated, they should be validated to ensure the intended outcomes were achieved and that there were no unintended consequences.
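To make the periodic outcome testing above concrete, here is a minimal sketch of one common screening metric: the adverse impact ratio, which compares approval rates between a protected group and a reference group. The group data and the 0.8 screening threshold (the "four-fifths rule") are illustrative assumptions only; a real fair lending review would use your institution's actual outcome data, appropriate statistical tests, and legal guidance.

```python
# Illustrative sketch: adverse impact ratio ("four-fifths rule") screen.
# All data below is hypothetical; this is not a complete fair lending test.

def approval_rate(decisions):
    """Share of applications approved; decisions are booleans (True = approved)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical model outcomes for two applicant groups.
reference_group = [True, True, True, False, True, True, False, True]   # 75% approved
protected_group = [True, False, False, True, False, True, False, False]  # 37.5% approved

ratio = adverse_impact_ratio(protected_group, reference_group)
print(f"Adverse impact ratio: {ratio:.2f}")

if ratio < 0.8:  # common "four-fifths" screening threshold
    print("Ratio below 0.8: flag these outcomes for fair lending review")
```

Running this on the hypothetical data yields a ratio of 0.50, which would flag the model's outcomes for closer review. In practice, this kind of check would run on a schedule against production decision data and feed into your documented model review process.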