AI and systemic risk

While artificial intelligence offers considerable benefits to society, including accelerating scientific progress, improving economic growth, optimizing decision-making and risk management, and strengthening healthcare, it also raises significant concerns about the risks it poses to the financial system and society. This article examines how AI can interact with key sources of systemic risk. The authors then propose a set of measures combining competition and consumer protection policies, complemented by adjustments to prudential regulation and supervision, to address these vulnerabilities.

Over the past few months, companies have invested heavily in developing large-scale models—those requiring more than 10²³ floating-point operations to train—such as OpenAI’s ChatGPT, Anthropic’s Claude, Microsoft’s Copilot, and Google’s Gemini. While OpenAI does not publish exact figures, recent reports suggest that ChatGPT has around 800 million weekly active users. Figure 1 illustrates the sharp increase in the number of large-scale AI systems commercialized since 2020. The intuitive interfaces of these tools are certainly one reason for their rapid and widespread adoption, and their seamless integration into everyday platforms is encouraging companies to incorporate AI into their processes.

Figure 1 Number of large-scale AI systems launched each year

Notes: Data for 2025 run up to 24 August. The white box in the 2025 bar extrapolates the data available to that date to the full year.
Source: Our World in Data.

A growing body of research is examining the implications of the rapid development and widespread adoption of AI for financial stability (see, among others, Financial Stability Board 2024, Aldasoro et al. 2024, Daníelsson and Uthemann 2024, Videgaray et al. 2024, Daníelsson 2025, and Foucault et al. 2025). In a recent report by the Scientific Advisory Committee of the European Systemic Risk Board (Cecchetti et al. 2025), we examine how the properties of AI can interact with different sources of systemic risk. After identifying market failures and their associated externalities, we consider the implications for financial regulatory policy.

The development of AI in our societies

Artificial intelligence, which encompasses both advanced machine learning models and, more recently, large language models, can rapidly solve large-scale problems and change how we allocate resources. General uses of AI include knowledge-intensive tasks such as (i) decision support, (ii) large network simulation, (iii) information synthesis, (iv) solving complex optimization problems, and (v) text writing. AI can generate productivity gains through many channels, including automation (or deepening existing automation), helping humans complete tasks more quickly and efficiently, and enabling us to perform new tasks (some of which have not yet been imagined). However, current estimates of AI’s overall impact on productivity tend to be quite low. In a detailed study of the US economy, Acemoglu (2024) estimates that the impact on total factor productivity (TFP) will be between 0.05% and 0.06% per year over the next decade. Given that TFP has increased on average by about 0.9% per year in the United States over the last quarter century, this is a very modest improvement.

Estimates suggest a varied impact on the labor market. For example, Gmyrek et al. (2023) analyze 436 occupations and identify four groups: those least likely to be affected by AI (primarily manual and unskilled workers), those where AI will augment and complement tasks (occupations such as photographers, primary school teachers, or pharmacists), those where prediction is difficult (including financial advisors, financial analysts, and journalists), and those most likely to be replaced by AI (including bookkeepers, word processors, and bank tellers). Using detailed data, the authors conclude that 24% of clerical tasks are highly exposed to AI and another 58% are moderately exposed; in the remaining occupational groups, approximately one-quarter of tasks are moderately exposed.

AI and the sources of systemic risk

Our report highlights that AI’s ability to process vast amounts of unstructured data and interact naturally with users allows it to both complement and replace human tasks. However, the use of these tools carries risks. These include the difficulty in detecting AI errors, decisions based on biased results due to the nature of the training data, over-reliance resulting from excessive trust, and difficulties in monitoring systems that can be challenging to control.

As with all uses of technology, the problem lies not in AI itself, but in how businesses and individuals choose to develop and use it. In the financial sector, the use of AI by investors and intermediaries can generate externalities and spillover effects.

In this context, we examine how AI could amplify or modify existing systemic risks in the financial sector, as well as how it could create new ones. We consider five categories of systemic financial risks: liquidity imbalances, common exposures, interconnectedness, lack of substitutability, and leverage. As shown in Table 1, AI characteristics that could exacerbate these risks include:

  • Monitoring challenges, where the complexity of AI systems makes effective oversight difficult for both users and authorities.
  • Concentration and barriers to entry, where a small number of AI providers create single points of failure and extensive interconnectedness.
  • Model uniformity, where the widespread use of similar AI models can lead to correlated exposures and amplify market reactions.
  • Over-reliance and over-trust, where strong initial performance leads people to place excessive trust in AI, increasing risk-taking and weakening oversight.
  • The speed of transactions and reactions and increased automation, which can amplify procyclicality and make self-reinforcing negative dynamics harder to stop.
  • Opacity and concealment, where the complexity of AI can reduce transparency and facilitate the intentional concealment of information.
  • Malicious use, where AI can enhance the ability of bad actors to commit fraud, cyberattacks, and market manipulation.
  • Hallucinations and disinformation, where AI can generate false or misleading information, leading to ill-informed decisions and subsequent market instability.
  • Historical constraints, where AI’s reliance on past data leaves it unable to cope with unforeseen “extreme events,” which can encourage excessive risk-taking.
  • Untested legal status, where ambiguity about legal responsibility for AI actions (e.g., the right to use data for training purposes and liability for advice provided) may create systemic risk if providers or financial institutions face legal setbacks related to AI.
  • Impenetrable complexity, where the difficulty of understanding AI’s decision-making processes can trigger runs when users discover flaws or unexpected behavior.

Table 1 How current and potential characteristics of AI can amplify or create systemic risk

Notes: Existing AI features are highlighted in red if they contribute to four or more sources of systemic risk, and in orange if they contribute to three. Potential AI features are highlighted in orange to indicate that their occurrence in the future is uncertain. In the columns, sources of systemic risk are highlighted in red when they relate to ten or more AI features, and in orange when they relate to more than six but fewer than ten AI features.
Source: Cecchetti et al. (2025).

Capabilities we have yet to observe, such as the creation of self-aware AI or total human dependence on AI, could amplify these risks and create additional challenges related to the loss of human control and extreme social dependence. For now, these remain hypothetical.

Policy response

In response to these systemic risks and the associated market failures (fixed costs and network effects, information asymmetries, bounded rationality), we believe it is important to review competition and consumer protection policies, as well as macroprudential policies. With regard to the latter, the main policy proposals include:

  • Regulatory adjustments, such as recalibrating capital and liquidity requirements, strengthening trading suspension mechanisms, amending regulations on insider trading and other types of market abuse, and adjusting central bank liquidity facilities.
  • Transparency requirements, including labels on financial products to disclose the use of AI.
  • Requirements regarding “financial involvement” and “level of sophistication,” so that AI providers and users bear an appropriate level of risk.
  • Improvements in supervision, aimed at ensuring adequate IT and human resources for supervisory authorities, strengthening analytical capabilities, enhancing monitoring and regulatory enforcement, and promoting cross-border cooperation.

In any case, it is important that the authorities carry out the necessary analysis to obtain a clearer picture of the impact and channels of influence of AI, as well as the extent of its use in the financial sector.

In the current geopolitical context, the stakes are particularly high. If authorities fail to keep pace with the use of AI in the financial sector, they will no longer be able to monitor new sources of systemic risk. This will result in more frequent financial crises requiring costly public sector intervention. Finally, it is important to emphasize that the global nature of AI makes cooperation among governments crucial in developing international standards to prevent measures taken in one jurisdiction from creating vulnerabilities in others.

Editor’s note: This article is an automatic translation. The original is available here: AI and systemic risk.


References

Acemoglu, D (2024), “ The simple macroeconomics of AI ”, NBER Working Paper No 32487.

Aldasoro, I, L Gambacorta, A Korinek, V Shreeti and M Stein (2024), “ Intelligent financial system: how AI is transforming finance ”, BIS Working Paper No 1194.

Cecchetti, S, RL Lumsdaine, T Peltonen and A Sánchez Serrano (2025), Artificial intelligence and systemic risk, Report of the ESRB Advisory Scientific Committee No 16.

Daníelsson, J (2025), “Artificial intelligence and stability”, VoxEU.org, 6 February.

Daníelsson, J and A Uthemann (2024), “Artificial intelligence and financial crises”, working paper.

Financial Stability Board (2024), “ The financial stability implications of Artificial Intelligence ”, November.

Foucault, T, L Gambacorta, W Jiang and X Vives (2025),  Artificial Intelligence in Finance , The Future of Banking 7, CEPR Press.

Gmyrek, P, J Berg and D Bescond (2023), “ Generative AI and jobs: A global analysis of potential effects on job quantity and quality ”, ILO Working Paper No 96.

Videgaray, L, P Aghion, B Caputo, T Forrest, A Korinek, K Langenbucher, H Miyamoto and M Wooldridge (2024),  Artificial Intelligence and economic and financial policymaking , A High-Level Panel of Experts’ Report to the G7, December.

Source: cepr.org
