Commentary

On AI and Financial Stability

"Everybody wants progress. Nobody wants change.”
 Paul Romer, August 12, 2016

“The scarcity of consistent, representative data on AI usage poses significant challenges for conducting vulnerabilities surveillance in this rapidly evolving area. Financial authorities have an even more limited view of AI usage at financial institutions that are less subject to financial regulations or outside the regulatory perimeter.”
Financial Stability Board, November 14, 2024

A tidal wave of recent analysis focuses on the impact of artificial intelligence (AI).

AI is hardly new. More than a century ago, writers and filmmakers began to give robots—a humanoid form of AI—a central role in their art. Academic AI research followed in the 1950s. By the 1990s, IBM’s Deep Blue defeated the reigning world chess champion. In the 2010s, increased private funding facilitated the development of generative AI (GenAI) that creates new content and large language models (LLMs) that process text, images, and sound. Benefiting from natural-language interaction, these new AI tools already are broadly popular. Their developers even provide smartphone apps to access them.

At an early stage, the financial industry – with its abundant data, computing power, and profit opportunities – recognized AI’s potential to lower costs and improve decision making. By the 1980s, brokers introduced automated trading on stock exchanges. By the 2000s, high-frequency traders were using machine-learning (ML) models to identify and execute trades on electronic platforms. Today, capital market firms are exploring how AI can enhance asset allocation, risk management, operations, payments, customer service, and the trading of less liquid instruments such as corporate bonds. (See Figure 1.6 here and Figure 3.1 here.) The list of existing tasks that AI might transform is virtually endless, as is its potential to reshape intermediation and markets in a wholesale fashion (see, for example, Garicano).

Unsurprisingly, AI progress over the past five years is triggering a broad new wave of interest. In sectors ranging from healthcare to transportation, many firms as well as governments hope that AI will cut costs, improve quality, broaden access, and create new goods, services and markets. Economists are debating the prospective impact on productivity and growth and the distribution of income, as well as the potential for disruption in the labor market. Security analysts are exploring AI’s uses in defense—from threat detection to advanced weaponry. Meanwhile, AI firms are rushing to create “artificial general intelligence” (AGI) that would surpass human cognition. From there, it seems like only a short step to fully autonomous robots. Nevertheless, whether and when this AGI future comes to pass remains anybody’s guess.

In this post, we focus on the impact of AI on financial stability. To foreshadow our conclusions, current forms of AI are likely to amplify existing threats to financial stability. To prepare, public authorities need to adapt their tools (such as capital and liquidity requirements) to safeguard financial stability. Looking further forward, the prospect of autonomous “AI agents” – which can gather and assess information as well as make and implement decisions – means that the day could soon come when timely human intervention to protect the financial system will no longer be feasible. Instead, only agents acting with speed (and information) at least comparable to those of private-sector AIs will be able to keep finance safe. To address these challenges, regulators and supervisors need to invest heavily in AI—hardware, software and skilled personnel. Put simply, only public AI will be able to mitigate the risks that private AI creates.

Innovation and finance: progress with risks. We start by highlighting the critical role of technology—both software and hardware—in the evolution of finance over the centuries (see Table 1). By virtually every measure – volume, quality, scope, inclusiveness – technological innovation is the key to the extraordinary progress in financial services. Today, it is impossible to imagine modern finance without a combination of low-cost, widespread, rapid communications and enormous computing power. As one stunning example of recent innovation, consider the introduction of universal biometric IDs in India: combined with government support for no-frills bank accounts, the advent of low-cost, reliable digital IDs resulted in more than 500 million new accounts over the past decade.

Table 1. Notable developments in finance and technology over the centuries

Note: Dates of initial invention are in parentheses.

Modern finance and the societies where it thrives clearly benefit from the effects of technological progress. Innovation reduces costs, makes payments and settlement faster and safer, broadens the range of liquid, tradeable assets, helps firms and investors to price and manage risk, speeds the detection of suspicious payments, and facilitates regulatory compliance. From the perspective of regulators and supervisors, technology enables the monitoring of vulnerabilities throughout the financial system. The key to all these advances is the accumulation, management and analysis of large quantities of high-quality data, combined with the widespread use of high-speed computing and communications.

At the same time, innovation creates the potential for financial instability. The most obvious examples involve the heightened volatility – and panics – that can result from technologically-driven herding in capital markets. Perhaps the most well-known case is the October 1987 U.S. stock market crash, triggered by an automated trading strategy (“portfolio insurance”) that led to panic selling. Similarly, the brief “flash crash” of 2010 resulted from a cascade of algorithmically-driven automated executions. To provide just one very different example of an innovation-driven financial disruption, over-the-counter derivatives helped conceal risk-taking that boosted the vulnerabilities of the financial system ahead of the crisis of 2007-2009.

Current AI: an amplifier. Against this background, current forms of AI probably will amplify the effects of technology on financial stability—both for the better and for the worse. On the upside, we expect improved risk management, faster transactions settlement, lower costs of regulatory compliance, and enhanced means for supervisory oversight. In addition, markets with thousands of highly idiosyncratic, credit-sensitive instruments likely will become more liquid, eventually giving trading firms a larger buffer against liquidity shocks while broadening access.

On the downside, uniformity of AI models and data sources can amplify herding, a key source of capital market volatility. Moreover, the black-box nature of AI makes it difficult to interpret and explain AI behavior, let alone design and implement remedial actions to address AI mishaps that will occur at light speed. Another risk of instability arises from AI’s high fixed costs. The billions of dollars it takes to develop, train and maintain a foundation model like ChatGPT, Claude, or Gemini limits the number of third-party AI providers (just as it does in the world of cloud storage). This concentration problem – with most financial institutions and market participants dependent on a small number of critical AI vendors – is analogous to the reliance on financial clearinghouses for which there are no short-run substitutes. As dynamic, learning algorithms seek better ways to manage risk and increase returns, the spread of AI in finance also is likely to increase connections across assets, markets and geographies. New and strengthened interconnections will amplify the spillover of adverse shocks both across markets and jurisdictions.
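To make the herding channel concrete, consider the following minimal simulation. It is purely illustrative and not drawn from any cited study: a population of traders acts on a blend of a shared model signal and private noise, and as the weight on the shared signal rises, orders become correlated and a crude volatility proxy jumps.

```python
# Toy illustration of herding from shared AI signals (all parameters are invented).
# Each of n_traders acts on a blend of a common "model" signal and idiosyncratic noise.
import numpy as np

rng = np.random.default_rng(0)

def simulated_volatility(common_weight: float, n_traders: int = 1000,
                         n_days: int = 250, impact: float = 0.01) -> float:
    """Std. dev. of daily price moves when traders share a common signal."""
    common = rng.standard_normal(n_days)                 # shared model signal
    private = rng.standard_normal((n_days, n_traders))   # idiosyncratic views
    signals = common_weight * common[:, None] + (1 - common_weight) * private
    orders = np.sign(signals)                            # +1 buy, -1 sell
    price_moves = impact * orders.mean(axis=1) * np.sqrt(n_traders)
    return price_moves.std()

for w in (0.0, 0.5, 0.9):
    print(f"weight on shared model = {w:.1f} -> volatility proxy = {simulated_volatility(w):.4f}")
```

When the shared-signal weight is near zero, individual trades largely cancel; as it rises, most traders buy or sell together and the volatility proxy grows sharply.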

Addressing risks from current AIs. Addressing these varied risks to financial stability requires that regulators and supervisors modify their existing tools. Three examples illuminate the point. First, regulators currently require that exchanges impose circuit breakers to limit herding risks from high-speed algorithmic trading. Are the current circuit breakers sufficient to contain AI-driven herding in the growing set of liquid markets that are attracting high-frequency traders? Second, as trading quickens, the speed with which authorities need to observe and respond to disruptions likely will pick up. For regulators to preempt runs, their backup liquidity facilities must be widely available and truly automatic. Third, to the extent that AI increases financial market volatility, authorities will need to further strengthen capital and liquidity requirements to ensure that the financial system remains resilient in the face of bigger shocks.
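As a stylized illustration of the first example, here is a toy circuit-breaker check. The tiered decline thresholds are placeholders for exposition only, not a description of any exchange's actual rules.

```python
# Illustrative sketch only: a stylized market-wide circuit breaker with tiered halt levels.
from dataclasses import dataclass

@dataclass
class CircuitBreaker:
    reference_price: float                  # e.g., prior session's closing level
    levels: tuple = (0.07, 0.13, 0.20)      # cumulative declines that trigger halts (placeholders)

    def check(self, current_price: float) -> int:
        """Return 0 if trading may continue, else the highest halt level breached."""
        decline = max(0.0, 1.0 - current_price / self.reference_price)
        breached = 0
        for i, threshold in enumerate(self.levels, start=1):
            if decline >= threshold:
                breached = i
        return breached

cb = CircuitBreaker(reference_price=5000.0)
for price in (4900.0, 4600.0, 4300.0, 3900.0):
    level = cb.check(price)
    status = "continue" if level == 0 else f"halt (level {level})"
    print(f"index at {price:.0f}: {status}")
```

The open question in the text is whether thresholds calibrated to human-speed herding remain adequate when AI-driven strategies move markets far faster.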

In short, the authorities need to step up their game. They need infrastructure and skills that allow them to engage in comprehensive monitoring of the entire financial system. In the world of AI, this means having tools and people that approach those in the private sector. The authorities need computer scientists with expertise in deep learning and neural networks. They need the physical resources to exploit big data – both structured (traditional databases) and unstructured (audio, video, photos, survey responses, news feeds, high-frequency market feeds, social media, live chat, and the like). The goal is to have the capacity to assess the exposures of financial service providers and those who use them not only to each other’s balance sheets (a traditional objective), but also to their uses of AI. In a broader sense, they will need to be able to map the evolving connections in the financial system in something that approaches real time.
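To illustrate the kind of mapping we have in mind, here is a deliberately simple sketch: institutions are nodes, bilateral exposures are edges, and a single naive contagion pass shows which creditors a failure would impair. All names, numbers, and the loss rule are hypothetical.

```python
# Hypothetical exposure map: creditor -> {borrower: amount owed to the creditor}.
exposures = {
    "Bank A":   {"Fund X": 40.0, "Bank B": 25.0},
    "Bank B":   {"Fund X": 60.0, "Dealer C": 15.0},
    "Dealer C": {"Bank A": 30.0},
}
capital = {"Bank A": 50.0, "Bank B": 45.0, "Dealer C": 20.0, "Fund X": 10.0}

def propagate_default(failed: str, loss_given_default: float = 0.6) -> dict:
    """Apply one round of losses to every creditor of the failed node."""
    losses = {}
    for creditor, book in exposures.items():
        if failed in book:
            losses[creditor] = loss_given_default * book[failed]
    return losses

losses = propagate_default("Fund X")
for creditor, loss in losses.items():
    verdict = "impaired" if loss > 0.5 * capital[creditor] else "absorbed"
    print(f"{creditor}: loss {loss:.1f} vs capital {capital[creditor]:.1f} ({verdict})")
```

A real-time version of this map would also need edges for shared AI vendors and data sources, not just balance-sheet exposures, which is precisely the point of the paragraph above.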

Addressing future AI risks: the hard part. As far as we can tell, the authorities are far behind the private sector in developing and using AI effectively. Yet, narrowing this gap is almost certainly the easy part. The hard part is addressing a future world in which AI agents gain autonomy – that is, the ability, in the absence of human intervention, to “analyze data, write code to create other agents, trial-run it, update it as they see fit, and so on” (see Aldasoro et al.). Even if developers of financial AI agents have benign intentions, they are simply not capable of foreseeing how this autonomous code will evolve, and whether it will successfully protect key social goals other than making profit. Can anyone ensure that AI agents will always uphold the rule of law? What (or who) will be held responsible if they fail to do so?

How will public authorities respond to this “superhuman” challenge?

Unfortunately, analogs to the prevailing solutions for nuclear weapons – involving both non-proliferation agreements and deterrence – are unlikely to work. AI proliferation is happening already, and malicious agents (especially those operating out of – and possibly supported by – hostile foreign states) may be impossible to deter.

One possibility is for authorities to develop something analogous to the missile shield envisioned in Ronald Reagan’s Strategic Defense Initiative (and now realized in Israel’s varied air defense systems). Doing this requires that the official sector compete effectively in the arms race, developing AI counteragents to protect the financial system from the private sector’s potentially misguided (or malicious) AI agents. Importantly, these actions to safeguard the system probably will have to occur at a pace that is beyond what humans can observe, let alone implement. And, given the incentives of private AI agents to conceal their goals and actions, it is at best an open question whether public AI counteragents can secure multiple critical objectives—again, not just financial stability but the rule of law.

Of course, superhuman AI challenges are hardly limited to finance. Philosophers and computer scientists have long worried about existential risks that may come from “AI takeover.” Yet, there is no consensus about if or when such a “singularity” will occur, let alone about how humanity should act to prevent or postpone it. We certainly don’t claim to have a compelling answer.

However, it would not surprise us if authorities eventually came to think of managing super-intelligent AI challenges in ways that science fiction writers imagined. Despite its well-known shortcomings, our favorite example is Asimov’s Three Laws of Robotics, first introduced in a 1942 short story. Translated into “laws” of AI in finance, these might read:

  1. “A financial AI must never harm the financial system or allow it to be harmed through inaction;

  2. A financial AI must obey human orders, except when it would conflict with the First Law, and;

  3. A financial AI must protect its own existence, except when it would conflict with the First and Second Laws.”
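As a thought experiment only, such a precedence-ordered set of rules could be encoded as guardrail checks that screen a proposed action against each “law” in order of priority. The predicates below are hypothetical placeholders, not a policy proposal.

```python
# Thought-experiment sketch: screen a proposed action against precedence-ordered "laws".
def screen_action(action: dict) -> str:
    rules = [
        ("First Law",  lambda a: not a["harms_financial_system"]),
        ("Second Law", lambda a: a["authorized_by_human"]),
        ("Third Law",  lambda a: not a["terminates_agent_needlessly"]),
    ]
    # The first violated rule, in priority order, blocks the action.
    for name, is_satisfied in rules:
        if not is_satisfied(action):
            return f"blocked: violates {name}"
    return "permitted"

print(screen_action({"harms_financial_system": False,
                     "authorized_by_human": True,
                     "terminates_agent_needlessly": False}))   # permitted
print(screen_action({"harms_financial_system": True,
                     "authorized_by_human": True,
                     "terminates_agent_needlessly": False}))   # blocked: violates First Law
```

Of course, the hard problem is not writing such checks but verifying that an autonomous agent actually satisfies them, which is exactly the challenge raised above.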

Conclusion. Fortunately, to make progress in promoting the responsible use of AI, financial authorities do not need an immediate answer to such existential questions. What they do need is to develop AI tools to map the financial system and to detect and monitor its vulnerabilities. They also need to adjust their traditional remedies to counter intensified risks from AI. Only then will they be able to keep pace with the current (and very near-term) applications of AI in the private sector, and with their potential adverse spillovers to the financial system. Only then will they also be preparing for the far more uncertain world of autonomous AI.