Technologists at Citi, Morgan Stanley and the London Stock Exchange have helped write a document that provides practical advice for banks on how to mitigate the many risks of generative AI, including hallucinations, data leakage and instability in model behavior.
“There is much work needed to allow the ‘safe’ use of AI (where safety considers both the customer and the bank) and ultimately allow financial services organizations to rapidly adopt new technologies as they emerge,” the document states.
Authors of the framework also included experts from Bank of Montreal, NatWest, Fannie Mae, Microsoft, GitHub, FactSet, Scott Logic, ControlPlane, Provectus and TestifySec, all of which are members of the Fintech Open Source Foundation (FINOS).
Most financial services organizations have mature processes for onboarding technology and dealing with sensitive data, noted Madhu Coimbatore, head of AI development platforms at Morgan Stanley and one of the framework's authors, in an email interview.
“However, generative AI presents some new challenges,” he said. “Generative AI can introduce new risks such as bias and hallucinations, but it also creates new threats from a cyber and data security perspective. To address these new risks, we need new controls.”
Some general standards related to AI implementation already exist, but they do not address the specific challenges banks face.
“We know that many banks have already started drafting their own internal frameworks to address the specific challenges financial services organizations face when implementing generative AI,” Coimbatore said.
The AI governance framework could serve as a baseline that banks apply to their specific use cases, rather than having to start from scratch.
“As more firms contribute to this effort, the framework will become more comprehensive and address a large number of use cases which will improve its applicability and adoption,” Coimbatore said.
The framework is still in its draft state, but Coimbatore’s team at Morgan Stanley is already finding it useful as a set of requirements for data security, privacy, cybersecurity, AI safety and model management that have been given to employees and vendors.
“Many of these controls will need to be implemented by technology providers, so in some cases we are talking to them about implementing these controls as part of their offering,” he said.
Coimbatore’s gravest concerns about the use of generative AI in financial institutions are around data security and privacy.
“Protecting firm data and client data is of paramount importance for all banks,” he said. “With a technology like generative AI, data security can be breached through a prompt injection attack, or the loss of data from a vector database or a proprietary model.” These threats can be addressed with a defense-in-depth approach to security and by having the right data encryption and role-based access, he said.
Industry observers agree on the need for guidelines like this.
“Every bank has to confront the risks associated with generative AI,” said Dan Latimore, chief research officer at the Financial Revolutionist, which on Thursday is releasing a research report on the subject.
This project came about because FINOS’s leaders, watching the wave of generative AI deployments in banks, “saw an opportunity to help the industry more safely adopt” the technology, said Gabriele Columbro, executive director of FINOS, in an interview. FINOS’s overall mission is to accelerate collaboration and innovation in financial services through the adoption of open source software, standards and best practices.
Columbro hopes bank regulators will look at the AI readiness framework and use it to write their own standards, “because ultimately this makes life easier, both for the regulators themselves and for the regulated entities,” he said. “It’s a very fragmented landscape, so any degree of consolidation and consensus that we can build is an important value for the firms.” Regulators have not yet been shown the document.
Next up, FINOS members are working to create the controls described in the framework as open source code that any developer can use, Columbro said.