Amid the race to embed generative AI into financial products, banks and financial firms face the challenge of explaining its capabilities to customers. They want to tout the wins, such as faster service, a conversational chatbot experience and better account analytics, but they also risk overstating them. And that could be a problem with regulators.
AI washing — instances where companies overstate what AI can do, or aren’t clear about how or why it’s being used — is a risk regulators are evaluating, panelists said at the Financial Industry Regulatory Authority’s advertising regulation conference last month in Washington, D.C.
“We have heard already about one ethical concern from the Securities and Exchange Commission regarding so-called AI washing, where they have felt that financial services industry participants … have gone overboard in terms of bandwagoning on the AI front,” including misleading or false statements about the degree to which AI is being used to manage portfolios, and whether investments are truly AI-driven or simply AI-adjacent, said Amy Sochard, a vice president at FINRA.
The growing number of companies mentioning AI in their earnings reports might raise alarms about possible AI washing: Research from FactSet shows the number of S&P 500 companies citing "AI" on quarterly earnings calls has climbed to record highs.
SEC Chair Gary Gensler warned of the consequences of AI washing in recent public remarks.
“Such AI washing, whether it’s by companies raising money or by financial intermediaries … may violate the securities laws,” Gensler said. In SEC cases filed this year against companies accused of AI washing, the regulator has imposed civil penalties on firms that made false or misleading statements about their use of the technology.
Keeping expectations in check
Characterizing AI washing as a type of “flavor of the month” marketing, Sochard noted that FINRA seeks to evaluate the accuracy of marketing claims.
“If there’s a prospectus, we’ll look at that, or we’ll ask the firm for more information … whether it’s a service of the firm or a product they’re selling, how it operates and what really is behind the scenes,” she said.
Philip Shaikun, vice president and associate general counsel at FINRA, confirmed that all FINRA rules and federal securities laws will continue to apply to generative AI, as outlined in the self-regulator’s regulatory notice on the technology.
Handling hallucination and disclosure risks
Given the risk that AI models will hallucinate and offer incorrect answers, firms need to ensure humans properly oversee and review AI-generated content, panelists said.
“A human must be in the loop,” said Brad Ahrens, senior vice president for advanced analytics at FINRA. “The model will hallucinate. They will give you wrong answers.”
Companies also need to look at the AI their vendors use to guard against possible AI-washing claims. That might mean reviewing vendors’ claims about generative AI for accuracy and examining whether new AI capabilities are turned on by default, Ahrens said.
A key hurdle in navigating AI-washing concerns is the lack of a uniform definition of AI among companies. Ahrens took a broad view of how AI is defined, sticking with the tried-and-true “using computers to make predictions.”
Generative AI use cases
Looking to the future, panelists offered mixed views on the technology’s opportunities and risks. Ahrens said he expects the pace of AI innovation to accelerate.
Generative AI, by taking care of some rote tasks, could enhance the efficiency of compliance tools and free up humans to tackle “the highest areas of risk,” Shaikun said. Consumer-facing use cases will grow as firms become more confident in their efforts to curb hallucinations, he noted.
Other panelists highlighted emerging risks. Shaikun said investors, particularly early entrants to the market, could become overly reliant on generative AI for investment advice.
Others pointed to negative consequences that could arise from modeling human behavioral characteristics.
“AI systems have been able and proven in research to model behaviors and symptoms associated with depression, anxiety,” said Rachel Chudoba, senior strategist of planning and research at McCann Worldgroup. An AI system can “deploy more ads to this person in this time frame where they are more likely to impulse buy,” she said, underscoring the need for ethical frameworks to govern AI systems.