Adrienne Harris, New York’s Superintendent of Financial Services, has some stern words for banks about vendor management and AI risk.
“It’s not going to be sufficient to say the algorithm was discriminatory, it was our vendor who wrote the algorithm, we don’t know how it works, so it’s not our fault,” she said.
Existing third-party vendor risk management frameworks in New York and other states are well-trodden paths, she said, and they apply to newer forms of generative AI.
Last week, the NYDFS added guidance on cybersecurity risks heightened by AI.
“It is obviously one of those risks that keeps everybody up at night,” Harris said.
“The guidance was meant to highlight best practices for our regulated entities to say, here are the new risks and threats you should be thinking about in the cybersecurity space” that are heightened by AI, such as more sophisticated social engineering attacks.
Her department is also thinking about how regulated companies use AI as part of their defenses, to identify threat vectors, risks and malware before they can do damage.
Her team is also trying to make sure regulated entities have the right expertise in house.
When the NYDFS put the AI guidance out for comment, some commenters objected that the required testing would be too onerous. But Harris said that at an AI-driven insurance company she previously helped lead, she did that kind of testing herself, with the help of a diverse team.
Asked what makes her most concerned about AI, she said she’s been thinking a lot lately about
“We’ve been spending a lot of time thinking about that,” Harris said. “There’s an amazing set of opportunities there, but there will also be risks.”