Generative AI poses a significant risk to financial security by enabling the creation of misleading narratives that can incite bank runs and erode public trust. A study indicates that such misinformation can trigger substantial withdrawals: roughly one-third of surveyed British bank clients said they would be extremely likely to pull their funds after seeing AI-generated content, and a further 27% somewhat likely. Financial institutions are urged to step up monitoring of social media to detect and mitigate these threats, though banks have said little about their own strategies.
The Growing Threat of Generative AI in Financial Security
Generative AI is increasingly being exploited to fabricate false narratives, such as claims that customer funds are insecure, or to create memes that trivialize serious security concerns. These deceptive materials can gain traction on social media, often bolstered by paid promotions, as highlighted by a recent study from the British research firm Say No to Disinfo and the communications agency Fenimore Harper.
Concerns Over Bank Runs and AI Manipulation
In the wake of the 2023 collapse of Silicon Valley Bank, where depositors withdrew a staggering $42 billion within just 24 hours, banks and regulatory bodies are on high alert regarding the potential for social media to incite bank runs. The G20 Financial Stability Board has raised alarms, stating that generative AI could empower malicious actors to create and spread information that could lead to severe financial crises, including rapid market declines and bank runs.
According to Say No to Disinfo, when presented with a sample of AI-generated misinformation, one-third of British bank clients reported being “extremely likely” to withdraw their funds, and a further 27% said they were “somewhat likely” to do so. The report emphasizes that as AI streamlines the creation of misinformation campaigns, the associated risks to the financial sector are escalating yet often go unnoticed. With online and mobile banking enabling instantaneous money transfers, the potential damage is amplified.
The study estimates that for every £10 (approximately $12.48) invested in social media advertising promoting false content, banks could see as much as £1 million in customer deposits moved as a result. This figure is derived from analyzing average deposit amounts among British customers, the typical costs of social media ads, and the reach of such campaigns.
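The mechanics of such an estimate can be sketched as a back-of-envelope calculation: ad spend buys a certain number of impressions, some fraction of viewers act, and each mover takes an average deposit with them. The figures below are purely illustrative assumptions, not the study's actual inputs, which are not reproduced here.

```python
# Hypothetical model of an ad-spend-to-deposit-flight estimate.
# Every number below is an illustrative assumption, not data from the study.

def deposits_moved(ad_spend_gbp, cost_per_1k_impressions, act_rate, avg_deposit_gbp):
    """Estimate deposits moved (in GBP) for a given social media ad spend."""
    impressions = ad_spend_gbp / cost_per_1k_impressions * 1000
    customers_acting = impressions * act_rate
    return customers_acting * avg_deposit_gbp

# Illustrative inputs: £10 spend, £5 cost per 1,000 impressions,
# 0.5% of viewers withdraw, £25,000 average deposit.
estimate = deposits_moved(10, 5.0, 0.005, 25_000)
print(f"£{estimate:,.0f}")  # with these assumptions: £250,000
```

The point of the sketch is the leverage, not the exact figures: because each acting customer moves a deposit several orders of magnitude larger than the ad cost that reached them, even a tiny action rate produces a large multiple on the initial spend.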
To counter these threats, researchers suggest that financial institutions must actively monitor media and social media channels. Integrating this monitoring into withdrawal control systems could assist in quickly identifying when harmful information influences customer actions.
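One minimal way to wire such monitoring into withdrawal controls, sketched below with entirely hypothetical thresholds and data structures (this is not any bank's actual system), is to flag time windows in which a spike in withdrawal requests coincides with social media posts a monitoring feed has marked as potential misinformation:

```python
from collections import Counter
from datetime import datetime, timedelta

def correlated_alerts(withdrawals, flagged_posts, baseline_rate=50, spike_factor=3.0):
    """Return hours in which withdrawals spike while flagged posts circulate.

    withdrawals   -- timestamps of withdrawal requests
    flagged_posts -- timestamps of posts flagged as potential misinformation
    baseline_rate -- assumed normal withdrawals per hour (hypothetical)
    spike_factor  -- multiple of baseline that counts as a spike (hypothetical)
    """
    def bucket(ts):
        # Truncate a timestamp to the start of its hour.
        return ts.replace(minute=0, second=0, microsecond=0)

    withdrawal_counts = Counter(bucket(t) for t in withdrawals)
    post_counts = Counter(bucket(t) for t in flagged_posts)

    alerts = []
    for hour, count in sorted(withdrawal_counts.items()):
        # Alert only when withdrawals exceed the baseline by the spike
        # factor AND flagged content appeared in the same hour.
        if count >= baseline_rate * spike_factor and post_counts.get(hour, 0) > 0:
            alerts.append(hour)
    return alerts
```

Requiring both signals at once is the design point: a withdrawal spike alone may be benign (payday, a rate change), and flagged posts alone may gain no traction, but their coincidence is what the researchers suggest should prompt rapid investigation.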
Woody Malouf, financial crime officer at Revolut, noted that the London fintech firm is already engaged in real-time monitoring to detect emerging threats within its client base and the broader financial ecosystem. “While we may consider such incidents unlikely, they are still possible, making it crucial for financial institutions to be prepared,” he stated, urging social media platforms to take a more proactive stance against these risks.
Responses from other financial institutions, such as NatWest and Barclays, have been limited, with many not providing comments on the study. While regulators have voiced concerns over the implications of AI for financial stability, banks maintain a generally optimistic outlook regarding the technology’s potential benefits.
According to UK Finance, “Banks are diligently working to manage and mitigate AI-related risks, while regulators are scrutinizing the challenges that this technology could pose to financial stability.” The report's release was unconnected to an AI summit held in France this week, where discussions centered on promoting AI adoption rather than managing its risks.