Yves: We actually had a big AI economic crisis, but it happened long before the name became popular. Algorithm-driven trading is an implementation of AI, especially the black box kind.
The 1987 stock market crash was caused by the large-scale introduction of automated selling, something called portfolio insurance, so from the very beginning we knew that computer-run trading strategies could wreak havoc.
Article by Jon Danielsson, Director of the Systemic Risk Centre at the London School of Economics and Political Science, and Andreas Uthemann, Principal Research Fellow at the Bank of Canada and Research Fellow at the Systemic Risk Centre at the London School of Economics and Political Science. Originally published at VoxEU.
The rapid adoption of artificial intelligence is transforming the financial industry. In this first of a two-part series, we argue that AI can either increase risk to the financial system as a whole or stabilize it, depending on endogenous responses, strategic complementarities, the severity of the events it faces, and the objectives it is given. Future crises are likely to be more intense than previous ones, as AI can master complexity and respond quickly to shocks.
The growing use of artificial intelligence (AI) in both the private and public financial sectors has the potential to cause more frequent and intense financial crises, as AI processes information much faster than humans can, but it can also have the opposite effect and help stabilize the system.
In the classification of Norvig and Russell (2021), AI is a “rational maximizing agent”. This definition is consistent with typical economic analysis of financial stability. What distinguishes AI from pure statistical modeling is that it does not just use quantitative data to provide numerical advice; it applies goal-driven learning to train itself on qualitative and quantitative data, and can therefore provide advice or even make decisions.
The extent of AI use in the financial services industry is difficult to gauge. The Financial Times reports that only 6% of banks plan to use AI substantially, citing concerns about reliability, job losses, regulatory issues, and inertia. Some surveys agree, others do not. But the financial industry is competitive, and while newer financial institutions and some of the larger banks are adopting modern technology stacks and hiring AI-savvy staff to cut costs and improve efficiency, more conservative institutions will likely have no choice but to follow suit.
The rapid adoption of AI has the potential to make the delivery of financial services more efficient and less costly. Many of us would benefit.
But it’s not all good. There are widespread concerns about the impact of AI on labor markets, productivity, and more (Albanesi et al. 2023; Filippucci et al. 2024). We are particularly concerned about how AI might affect the likelihood of a systemic financial crisis – a disruptive event that could cause trillions of dollars of damage to large economies and upend society. This has been the focus of our recent work (Danielsson and Uthemann 2024).
The roots of financial instability
We speculate that AI does not create new underlying causes of crises but rather amplifies existing ones: excessive leverage that makes financial institutions vulnerable to even small shocks, self-preservation during crises that drives market participants to favor the most liquid assets, and system opacity, complexity, and asymmetric information that lead market participants to distrust each other in times of stress. These three fundamental vulnerabilities have been behind nearly every financial crisis in the past 261 years, since the first modern financial crisis in 1763 (Danielsson 2022).
However, preventing and containing crises is not easy: although all crises spring from the same three fundamental factors, each one plays out very differently. That is not surprising. If financial regulation is effective, it prevents the crises the authorities can see coming, so the crises that do happen almost by definition emerge off the authorities’ radar. And because the financial system is all but infinitely complex, there are many places where risk can build up.
The key to understanding financial crises is how financial institutions optimize. They aim to maximize profits given acceptable risk. Translated into business behavior, the Roy (1952) criterion is useful: maximize profits subject to not going bankrupt. That means financial institutions maximize profits most of the time, maybe 999 days out of 1,000. But on that last day, when the system is in turmoil and a crisis looms, their main concern is not profit but survival. In other words, it is a “one day out of 1,000” problem.
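In stylized form (a minimal sketch of the safety-first idea, with notation introduced here purely for illustration, not the authors’ own), the criterion can be written as:

```latex
\max_{a}\; \mathbb{E}\!\left[\pi(a)\right]
\quad \text{subject to} \quad
\Pr\!\left(\pi(a) < -K\right) \le \varepsilon
```

where \(a\) is the institution’s portfolio choice, \(\pi(a)\) its profit, \(K\) its equity buffer, and \(\varepsilon\) a small acceptable probability of failure. In calm markets the constraint is slack and the institution simply maximizes profit; on the rare day the constraint binds, survival dominates everything else.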
When financial institutions prioritize survival, their behavior changes abruptly and dramatically. They hoard liquidity and choose the safest, most liquid assets, such as central bank reserves. This leads to bank runs, fire sales, credit crunches, and the other undesirable behaviors associated with crises. There is nothing improper about such behavior, but it cannot easily be regulated away.
When AI gets involved
These drivers of financial instability are well understood and were a concern long before the advent of computers. As technology has been progressively introduced into the financial system, it has brought efficiencies that benefit the system, but it has also amplified existing sources of instability. AI can be expected to have a similar effect.
To identify how this might happen, it is useful to consider the societal risks arising from the use of AI (e.g., Weidinger et al. 2022; Bengio et al. 2023; Shevlane et al. 2023) and how they interact with financial stability. In doing so, we identify four channels through which economies might become vulnerable to AI:
- The misinformation channel. This arises because users of AI do not understand its limitations and become increasingly reliant on it.
- The malicious-use channel. This arises because the system is full of resource-rich economic actors who want to maximize profits but care little about the social impact of their activities.
- The misalignment channel. This arises from the difficulty of ensuring that AI follows the objectives desired by its human operators.
- The oligopolistic market structure channel. This stems from the business models of the companies that design and run AI engines: they enjoy economies of scale that hinder market entry, increase homogeneity, and create risk monoculture.
How AI destabilizes the system
To work effectively, AI needs data, even more than humans do. That should not be a problem, because the financial system generates terabytes of data every day. The problem is that almost all of that data comes from the middle of the distribution of outcomes, not the tails, and crises are all about the tails.
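A back-of-the-envelope sketch makes the point. The snippet below is purely illustrative (simulated returns, an arbitrary 1% daily volatility and 4% loss threshold, nothing calibrated to real markets): even over roughly 40 years of daily data, only a handful of observations come from the tail region where crises live.

```python
import numpy as np

rng = np.random.default_rng(0)

n_days = 40 * 252  # roughly 40 years of daily observations

# Illustrative daily returns with ~1% volatility:
# a thin-tailed normal benchmark vs. a fat-tailed Student-t (3 degrees of freedom)
normal_returns = rng.normal(0.0, 0.01, n_days)
t_returns = rng.standard_t(df=3, size=n_days) * 0.01 / np.sqrt(3.0)

threshold = -0.04  # a "crisis-like" daily loss of 4%

print("observations:          ", n_days)
print("normal days below -4%: ", int(np.sum(normal_returns < threshold)))
print("fat-tailed days < -4%: ", int(np.sum(t_returns < threshold)))
```

The exact counts do not matter; the ratio does. Almost everything an AI engine can learn from sits in the middle of the distribution, while the extreme, once-in-a-generation behavior that defines a crisis is barely in the sample at all.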
There are four reasons why there is so little data from the tails.
The first is the endogenous reaction of market participants to control. This relates to the misinformation channel. A useful way to understand it is the Lucas (1976) critique and Goodhart’s (1974) law: observed statistical regularities tend to break down once pressure is placed on them for control purposes. Market participants do not simply accept regulation calmly; they respond strategically. They do not tell anyone in advance how they will respond to regulation or stress. They probably do not know themselves. The reaction function of market participants is therefore hidden, and what is hidden does not exist in the dataset.
The second reason, which stems from the malicious-use channel, is strategic complementarity, which is at the heart of how market participants behave during crises: they feel compelled to withdraw liquidity because their competitors are withdrawing it. Moreover, strategic complementarity can lead to multiple equilibria, where essentially random factors decide which of several very different market outcomes materializes. Both consequences mean that observations of past crises are less informative about future ones, which is another reason why we have so few observations from the tail.
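A minimal stylized sketch of this complementarity (with illustrative payoffs chosen here for exposition, not taken from the article) is a two-institution “run game”: staying invested is best if the other stays, but running for liquidity is best if the other runs, so both outcomes are self-fulfilling.

```python
# Stylized two-institution "run game" (illustrative payoffs only).
# Each institution chooses to Stay invested or Run for liquidity.
payoffs = {
    # (my action, other's action): my payoff
    ("stay", "stay"): 5,    # market holds, both earn returns
    ("stay", "run"):  -10,  # I am left holding assets in a fire sale
    ("run",  "stay"): 2,    # I exit early at a small cost
    ("run",  "run"):  -4,   # everyone sells and prices collapse, but I limit losses
}

def best_response(other_action: str) -> str:
    """My profit-maximizing reply to the other institution's action."""
    return max(("stay", "run"), key=lambda a: payoffs[(a, other_action)])

print(best_response("stay"))  # -> stay
print(best_response("run"))   # -> run
```

Because both (stay, stay) and (run, run) are equilibria, which one the market lands on can hinge on essentially arbitrary coordination, and a sample of past crises says little about where the next coordination failure will occur.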
At the root of the problem are two properties of AI: AI is good at extracting complex patterns from data, and it quickly learns from the environment in which it operates. Today’s AI engines observe what their competitors do, and it’s not hard for them to use those observations to improve their own models of how the world works. In practice, this means that future AIs in private and public organizations will train, optimize, and influence each other.
Aligning the incentives of an AI with those of its owners is a difficult problem; this is the misalignment channel. The problem may be exacerbated during a crisis, when speed matters and the AI may not have time to elicit human feedback and fine-tune its objectives. The traditional ways the authorities prevent run equilibria may no longer work: if human regulators can no longer coordinate rescue efforts and “twist arms,” the ever-present tension between individually rational behavior and socially desirable outcomes can worsen. Before its human owners can pick up the phone to take a call from the Fed chair, the AI may already have liquidated its positions and triggered the crisis.
AI is also likely to amplify the oligopolistic market structure channel of financial instability. As financial institutions come to see and react to the world in increasingly similar ways, their buying and selling becomes coordinated, fuelling bubbles and crashes. More generally, risk monoculture is a key driver of booms and busts in the financial system. The machine-learning design, input data, and compute that shape the risk-management capabilities of AI engines are increasingly dominated by a small number of technology and information companies, which continue to merge, producing oligopolistic markets.
A key concern with this market concentration is that many financial institutions, including public sector institutions, may get their worldview from the same vendors. They will then perceive opportunities and risks similarly, including how they might be affected by actual or hypothetical stress. In a crisis, this homogenizing effect of AI can reduce strategic uncertainty and make it easier for the system to coordinate on the run equilibrium.
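A toy simulation (entirely illustrative: synthetic signals, an arbitrary sell threshold, no real data) shows why shared models matter. When institutions’ risk signals load mostly on a common vendor model, their sell decisions synchronize, and days appear on which nearly everyone sells at once.

```python
import numpy as np

rng = np.random.default_rng(1)
n_institutions, n_days = 50, 1000

# Each institution sells on a given day when its risk signal crosses a threshold.
common = rng.normal(size=n_days)                            # shared vendor model output
idiosyncratic = rng.normal(size=(n_institutions, n_days))   # in-house views

for weight_on_common in (0.2, 0.9):                         # diverse models vs. monoculture
    signals = weight_on_common * common + (1 - weight_on_common) * idiosyncratic
    selling = signals < -1.0                                 # institution-level sell decisions
    share_selling = selling.mean(axis=0)                     # share of institutions selling each day
    print(f"weight on common model {weight_on_common}: "
          f"worst day has {share_selling.max():.0%} of institutions selling")
```

With diverse models, even the worst day sees only a modest share of institutions selling; with a near-monoculture, one bad reading of the shared model pushes almost all of them to sell on the same day, which is the mechanism behind coordinated fire sales.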
Given the recent wave of data vendor mergers, it is concerning that neither competition authorities nor financial authorities appear to fully appreciate the potential for increased systemic risks that could arise from an oligopoly in AI technologies.
Summary
When the institution it serves faces an existential threat, AI optimizes for survival. But this is where AI’s speed and efficiency work against the system. When other financial institutions do the same, they coordinate on the crisis equilibrium: they all make the same decision at the same time and thereby reinforce one another. Every institution will try to respond as quickly as possible, because the first to liquidate risky assets will be best placed to weather the storm.
The result is heightened uncertainty, which feeds extreme market volatility and vicious feedback loops such as fire sales, liquidity withdrawals, and bank runs. Thanks to AI, stresses that used to take days or weeks to unfold can now play out in minutes or hours.
But an AI engine might also do the opposite. After all, just because an AI can react faster does not mean it will. Empirical evidence shows that during crises asset prices may fall below their fundamental value but often recover quickly, which means there is a buying opportunity. If the AI engines are not too worried about their own survival and collectively converge on the recovery equilibrium, they will absorb the shock and the crisis will not materialize.
Taken together, AI seems likely to lower volatility but fatten the tails: it may smooth out short-term fluctuations at the cost of more extreme events.
Of particular importance is the extent to which financial authorities are prepared for an AI crisis, which we will discuss in our VoxEU article next week, “How can financial authorities respond to AI threats to financial stability?”
Authors’ note: Any opinions and conclusions expressed herein are those of the authors and do not necessarily represent the views of the Bank of Canada.
See original post for references.