Lambert says: Really “Rational maximizing agents”?
The authors are Jon Danielsson, Director of the Systemic Risk Centre at the London School of Economics and Political Science, and Andreas Uthemann, Principal Research Fellow at the Bank of Canada and Research Fellow at the Systemic Risk Centre at the London School of Economics and Political Science. Originally published on VoxEU.
Artificial intelligence could play a role in either stabilizing the financial system or increasing the frequency and severity of financial crises. In the second of a two-part series, we argue that the outcome could depend on how the financial authorities engage with AI. Because private financial institutions have access to greater expertise, superior computational resources, and increasingly high-quality data, the authorities are at a significant disadvantage. The best way for the authorities to respond to AI is to develop their own AI engines, set up AI-to-AI links, implement automated standing facilities, and leverage public-private partnerships.
Artificial intelligence (AI) has the potential to increase the severity, frequency, and intensity of financial crises (Danielsson and Uthemann 2024a). But AI can also stabilize the financial system, depending on how the authorities interact with it.
In Russell and Norvig's (2021) classification, AI is a “rational maximizing agent.” This definition is consistent with typical economic analyses of financial stability. What distinguishes AI from purely statistical modeling is that it does not just use quantitative data to provide numerical advice: it applies goal-driven learning to train itself on qualitative and quantitative data, and it can provide advice or even make decisions.
One of the most important jobs of the financial authorities, especially central banks, is to prevent and contain financial crises, and it is not an easy one. Systemic financial crises can be extremely damaging, costing large countries trillions of dollars, and the task of the macroprudential authorities is becoming harder as the financial system grows ever more complex.
If the authorities choose to use AI, it will be of great help, as AI is adept at handling vast amounts of data and complexity. It can clearly help the authorities at the micro level, but it will struggle in the macro domain.
Authorities are finding it difficult to engage with AI. They must monitor and regulate private AI while identifying systemic risks and managing crises that could spread faster and become more severe than ever before. If authorities are to remain important overseers of the financial system, they must not only regulate private AI, but also harness AI for their own missions.
Not surprisingly, many authorities have studied AI, including the IMF (Comunale and Manera 2024), the Bank for International Settlements (Aldasoro et al. 2024, Kiarelli et al. 2024), and the ECB (Moufakkir 2023, Leitner et al. 2024). However, most of the research published by authorities has focused on behavioral and microprudential concerns, rather than financial stability or crises.
Compared to the private sector, authorities are at a considerable disadvantage, and AI exacerbates this situation: private financial institutions have access to more expertise, better computational resources, and increasingly better quality data; their AI engines are protected by intellectual property and populated with proprietary data, both of which are often beyond the reach of authorities.
This disparity makes it difficult for authorities to monitor, understand, and counter the threats posed by AI. In a worst-case scenario, it could encourage market participants to pursue increasingly aggressive tactics, knowing that regulatory intervention is unlikely.
Dealing with AI: Four options
Fortunately, as we discuss in Danielsson and Uthemann (2024b), the authorities have a number of good options for dealing with AI: they can implement triggered standing facilities, develop their own financial-system AI engines, set up AI-to-AI links, or create public-private partnerships.
1. Triggered Standing Facilities
The speed of AI’s response means that the discretionary intervention capabilities favored by central banks may come too late in a crisis.
Instead, central banks may have to implement standing facilities with pre-determined rules that allow them to react immediately to stress. Such facilities could have the side benefit of eliminating some crises outright by preventing the private sector from coordinating on a run equilibrium. If the AI engines know that the central bank will intervene once the price falls by a certain amount, they will not coordinate on strategies that are only profitable if the price falls further than that. Announcements of short-term interest rates work the same way: they are credible because market participants know that the central bank can and will intervene, so the announced rate becomes self-fulfilling even if the central bank never actually has to intervene in the markets.
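To make the mechanism concrete, below is a minimal Python sketch of what a pre-announced trigger rule might look like. The asset name, thresholds, and rule form are hypothetical illustrations for exposition, not an actual central bank facility design.

```python
# Minimal sketch of a pre-announced standing facility trigger rule.
# The asset name, thresholds, and rule form are hypothetical
# illustrations, not an actual central bank facility design.

from dataclasses import dataclass


@dataclass
class FacilityRule:
    asset: str
    drawdown_trigger: float  # e.g. 0.10 = intervene after a 10% fall
    support_ratio: float     # price floor as a fraction of the reference price

    def support_price(self, reference_price: float) -> float:
        """Price at which the facility stands ready to buy."""
        return reference_price * self.support_ratio

    def should_intervene(self, reference_price: float, current_price: float) -> bool:
        """The rule is public: intervention is automatic once the drawdown
        from the reference price reaches the announced trigger."""
        drawdown = 1.0 - current_price / reference_price
        return drawdown >= self.drawdown_trigger


# Because the rule is announced in advance, engines know that strategies
# which are only profitable below the support price cannot succeed.
rule = FacilityRule(asset="GOV_BOND_10Y", drawdown_trigger=0.10, support_ratio=0.90)
print(rule.should_intervene(reference_price=100.0, current_price=88.5))  # True
```

The point of the toy example is that the rule is simple, public, and automatic, which is precisely what removes the private sector's incentive to coordinate on the run in the first place.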
Do such automated, pre-programmed responses to stress need to be opaque to prevent gaming and hence moral hazard? Not necessarily. Transparency can itself deter undesirable behavior, and there are already many examples of well-designed, transparent facilities promoting stability. If the worst-case scenarios can be eliminated by preventing private-sector AIs from coordinating on them, strategic complementarities are reduced. And if the intervention rules rule out the bad equilibria, market participants never need to draw on the facilities in the first place, keeping moral hazard low. The downside is that poorly designed pre-announced facilities enable gaming and increase moral hazard.
2. Financial System AI Engine
Financial authorities could develop their own AI engines to monitor the financial system directly. Assuming they can overcome the legal and political difficulties of data sharing, they could leverage the vast amounts of sensitive data they already collect to build a comprehensive view of the financial system.
3. Linking AIs
One way to leverage the authorities' AI engines is to develop an AI-to-AI communication framework that allows them to communicate directly with other authorities' and private-sector AI engines. A technical prerequisite is a communication standard, an application programming interface (API): a set of rules and conventions that lets computer systems built on different technologies exchange information with each other securely.
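As a purely illustrative sketch, the message shapes such a standard might define could look like the following. The types and field names are assumptions for exposition only, not a proposed regulatory protocol.

```python
# Hypothetical message shapes for an AI-to-AI communication standard.
# The types and field names are assumptions for exposition only,
# not a proposed regulatory protocol.

from dataclasses import dataclass


@dataclass
class ScenarioQuery:
    """Sent by an authority's engine to a private engine."""
    scenario_id: str
    shocks: dict[str, float]  # e.g. {"EQUITY_INDEX": -0.15}
    horizon_days: int


@dataclass
class ScenarioResponse:
    """A private engine's declared reaction, returned over the same link."""
    scenario_id: str
    planned_trades: dict[str, float]  # asset -> signed quantity
    liquidity_need: float             # projected funding shortfall
```

Note that what gets standardized is the messages, not the engines themselves: any engine, public or private, can then take part in monitoring and stress-testing exercises without revealing its internals.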
Such a regime would bring several benefits. It would facilitate the regulation of private-sector AI by allowing the authorities to monitor it directly and benchmark it against predefined regulatory standards and best practices. The communication links would also be useful for financial stability applications such as stress testing.
In the event of a crisis, the authorities overseeing the resolution process could instruct their AI engines to use the AI-to-AI links to run simulations of alternative crisis responses, such as liquidity injections, forbearance, and bailouts, helping them make more informed decisions.
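A minimal sketch of such a simulation exercise follows. The engine stub, the policy list, and the dampening factors are all hypothetical; a real exercise would query live private engines over the API links.

```python
# Illustrative sketch of ranking crisis responses via AI-to-AI links.
# EngineStub, the policy list, and the relief factors are hypothetical;
# a real exercise would query live engines over the API.

POLICIES = ["liquidity_injection", "forbearance", "bailout"]


class EngineStub:
    """Stand-in for a private AI engine reached over an API link."""

    def __init__(self, sensitivity: float):
        self.sensitivity = sensitivity  # how strongly the engine reacts to shocks

    def liquidity_need(self, shock: float, policy: str) -> float:
        # Assumed effect: each support policy dampens the funding shortfall.
        relief = {"liquidity_injection": 0.6, "forbearance": 0.8, "bailout": 0.5}
        return self.sensitivity * shock * relief[policy]


def rank_policies(engines: list, shock: float) -> list:
    """Aggregate projected stress across all engines for each policy,
    least stressful response first."""
    scores = {p: sum(e.liquidity_need(shock, p) for e in engines) for p in POLICIES}
    return sorted(scores.items(), key=lambda kv: kv[1])


engines = [EngineStub(0.8), EngineStub(1.2), EngineStub(0.5)]
print(rank_policies(engines, shock=0.15))  # the bailout ranks first in this toy setup
```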
If the arrangements are perceived as competent and trustworthy, their mere existence can act as a stabilizing force in times of crisis.
The authorities need to be prepared before the next stress event occurs: making the necessary investments in computing, data, and human capital, and working through the legal and sovereignty issues that will inevitably arise.
4. Public-Private Partnerships
The authorities need access to AI engines that can match the speed and complexity of private-sector AI. It is unlikely they will design such engines entirely in-house, as that would require significant public investment and a restructuring of how the authorities operate. Public-private partnerships, already common in areas of financial regulation such as credit risk analysis, fraud detection, anti-money laundering, and risk management, are more likely.
Such partnerships have drawbacks, however. Risk monoculture arising from an oligopolistic AI market structure would be a real concern, and partnerships could prevent the authorities from gathering information on how the engines make decisions, since private companies prefer to keep their technology proprietary and will not disclose it even to the authorities. Yet this may be a smaller drawback than it seems: benchmarking engines against one another does not require access to the underlying technology, only observation of how an engine reacts in particular cases, which is exactly what the AI-to-AI API links can deliver.
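A small sketch of this kind of black-box benchmarking is below. The authority sees only the engine's responses to test scenarios, never its internals; the engine stub and the regulatory limit are hypothetical assumptions.

```python
# Sketch of black-box benchmarking over AI-to-AI links: the authority
# sees only an engine's responses to test scenarios, never its internals.
# The engine stub and the regulatory limit are hypothetical assumptions.

def opaque_engine(shock: float) -> float:
    """Stand-in for a proprietary engine queried via an API link;
    returns the fraction of its portfolio it would sell under the shock."""
    return min(1.0, 2.5 * max(0.0, shock - 0.05))


def passes_benchmark(engine, test_shocks, max_sale_fraction: float = 0.5) -> bool:
    """Predefined regulatory standard: planned fire sales must stay
    below the limit in every test scenario."""
    return all(engine(s) <= max_sale_fraction for s in test_shocks)


# This engine fails: at a 30% shock it would dump 62.5% of its book.
print(passes_benchmark(opaque_engine, [0.05, 0.10, 0.20, 0.30]))  # False
```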
Addressing the challenges
While there is no technical reason preventing authorities from building their own AI engines and implementing AI-to-AI links using current AI technologies, they face several practical challenges when implementing the above options.
The first is the issue of data and sovereignty. The authorities already struggle to access data, and the situation is exacerbated by technology companies that own their data and protect their measurement processes as intellectual property, and by the authorities' own reluctance to share sensitive data with one another.
The second problem is how to deal with an AI that poses undue risk. One proposed policy response is to shut such an AI down with a “kill switch,” similar to the trading halts used during flash crashes. We suspect this may not be as feasible as the authorities think, since it may not be clear how the financial system would continue to function if a key engine were switched off.
Conclusion
The rapid expansion of AI use in the financial system promises to make financial service delivery more robust and efficient at a much lower cost than today, but it could also pose new threats to financial stability.
Financial regulators are at a crossroads: if they are too conservative in their approach to AI, it could very well be integrated into private systems without sufficient oversight, which could result in increased intensity, frequency, and severity of financial crises.
However, increased use of AI could instead stabilize the system and reduce the potential damage from financial crises. This can happen if the authorities approach AI proactively: using public-private partnerships to develop their own AI engines, evaluating private systems and benchmarking them through AI-to-AI communication links, and using those links to conduct stress tests and simulate crisis responses. Finally, the speed of AI-driven crises points to the importance of triggered standing facilities.
Authors' note: Any opinions and conclusions expressed herein are those of the authors and do not necessarily represent the views of the Bank of Canada.