BY POORANI JEYASEKAR AND DAT TRAN
President Biden’s executive order on the safe, secure, and trustworthy development and use of Artificial Intelligence is a big step toward a regulatory framework for generative AI. The order provides five broad principles to design safe and effective systems, avoid algorithmic discrimination, and manage data privacy, notices, and explanations. It also offers guidelines for providing human alternatives, managing AI system failure, and planning for contingencies.
While the executive order may serve as a cornerstone for further regulatory directives from the agencies, it does not appear that regulators are rushing to develop rules on generative AI just yet. Instead, as indicated by regulators at a recent DC Fintech Week conference, they are closely monitoring the technology and looking to the industry to establish guardrails before developing regulations.
Absent more specific regulatory directives, existing risk management principles will still apply. Institutions will need to maximize the utility of current regulatory guidance and frameworks on innovation, like Biden’s executive order. In this second part of our generative AI series, we discuss how risk management professionals should approach this technology.
How can risk management professionals support the compliant, “responsible” implementation of generative AI?
Incorporating generative AI with a risk management mindset necessitates a methodical approach. Here are a few key things to think about:
1. Discovery and understanding: Financial institutions should understand their proposed use cases for generative AI in order to evaluate its risks and benefits. One key question to ask is whether the proposed use of generative AI aligns with the institution's strategic objectives and needs, and is consistent with generative AI’s current capabilities (e.g., use cases requiring high levels of reliability are likely imprudent).
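As a rough illustration, the screening question in this step can be captured in a lightweight intake check. The structure, field names, and decision rules below are hypothetical and ours alone, not drawn from any regulatory framework:

```python
from dataclasses import dataclass

@dataclass
class UseCaseProposal:
    """Hypothetical intake record for a proposed generative AI use case."""
    name: str
    aligns_with_strategy: bool  # fits the institution's strategic objectives and needs
    required_reliability: str   # "low", "medium", or "high"

def screen(proposal: UseCaseProposal) -> str:
    """Flag use cases that are misaligned or demand more reliability
    than generative AI can currently guarantee."""
    if not proposal.aligns_with_strategy:
        return "reject: not aligned with strategic objectives"
    if proposal.required_reliability == "high":
        return "caution: high-reliability use cases are likely imprudent"
    return "proceed to controlled experimentation"

print(screen(UseCaseProposal("chatbot-drafted FAQ responses", True, "low")))
```

In practice the screening criteria would be far richer; the point is simply that the discovery step produces a documented go/no-go decision before any experimentation begins.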
2. Experiment carefully and set up appropriate guardrails: Financial institutions should create a controlled test environment for experimentation that allows them to test and fine-tune generative AI models without exposing critical systems to unnecessary risk. Depending on the model’s application, financial institutions should establish governance oversight, policies, and monitoring processes commensurate with the level of risk associated with the intended function of the generative AI. In doing so, we recommend financial institutions also follow these principles:
Prioritize data security and privacy: Safeguard confidential and proprietary information rigorously.
Avoid hasty automation: Exercise caution and ensure human oversight is incorporated at every step of development and deployment.
Focus on transparency: Financial institutions should establish a clear record of the generative AI's purpose, functionality, and data sources.
Evaluate customer impact: Conduct risk assessments to identify vulnerabilities and mitigation strategies while considering customer impact.
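One way to operationalize the principles above is to keep a structured record for each generative AI deployment that captures its purpose, functionality, data sources, and guardrails. The sketch below is a hypothetical illustration, not a regulatory template; the field names and gap checks are our own:

```python
from dataclasses import dataclass, field

@dataclass
class GenAIRecord:
    """Hypothetical transparency record for one generative AI deployment."""
    purpose: str
    functionality: str
    data_sources: list = field(default_factory=list)
    risk_tier: str = "medium"          # e.g. "low", "medium", "high"
    human_oversight: bool = False      # human review at each step?
    handles_customer_data: bool = False

def governance_gaps(r: GenAIRecord) -> list:
    """Return guardrail gaps to close before the use case may proceed."""
    gaps = []
    if not r.human_oversight:
        gaps.append("incorporate human oversight before deployment")
    if not r.data_sources:
        gaps.append("document data sources for transparency")
    if r.handles_customer_data and r.risk_tier == "low":
        gaps.append("customer data in scope: reassess risk tier and privacy controls")
    return gaps

record = GenAIRecord("summarize call transcripts", "LLM summarization")
print(governance_gaps(record))
```

A record like this gives governance oversight something concrete to review, with scrutiny commensurate with the model's risk tier.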
3. Leverage regulatory guidance: Before any kind of implementation, review existing regulatory guidance that could pertain to generative AI to identify applicable requirements and supervisory expectations.
4. Adopt frameworks for innovation: Financial institutions should adopt frameworks for developing AI models. The National Institute of Standards and Technology (NIST) recently released its AI Risk Management Framework, which consists of four key functions—map, measure, manage, and govern—to help facilitate risk assessments and address potential harm to individuals, organizations, and ecosystems. It outlines risk management steps, and the responsible parties for each, during the AI design, development, and deployment stages.
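The four NIST AI RMF functions can be pictured as a simple lifecycle checklist. The question attached to each function below is our own illustrative paraphrase, not NIST's text:

```python
# Hypothetical checklist keyed to the four NIST AI RMF core functions.
AI_RMF_CHECKLIST = {
    "map":     "Are the system's context, purpose, and potential impacts documented?",
    "measure": "Are risks assessed with defined metrics and tracked over time?",
    "manage":  "Are identified risks prioritized, mitigated, and monitored?",
    "govern":  "Are policies, accountability, and oversight structures in place?",
}

def outstanding_functions(completed: set) -> list:
    """Return the RMF functions not yet addressed for a given system."""
    return [f for f in AI_RMF_CHECKLIST if f not in completed]

print(outstanding_functions({"map", "govern"}))  # remaining: measure, manage
```

Walking each generative AI use case through all four functions—and recording who is responsible at each stage—keeps the risk assessment systematic rather than ad hoc.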
By following the best practices outlined here, financial institutions can be more systematic and structured in managing the risks associated with generative AI. Remember that the current regulatory frameworks around risk management remain applicable to all financial institutions, even in the face of evolving technology. Until further regulation and guidance arrive, regulators will expect financial institutions to maintain consistency in their risk management practices. While risks must be managed, we do not feel financial institutions should shy away from this technology. On the contrary, embracing generative AI, conducting experiments safely and soundly, gaining insights during the early stages of development and deployment, and consulting regulatory guidance and innovation frameworks are proactive steps that will keep financial institutions at the forefront of technological innovation while protecting their customers.