While artificial intelligence offers substantial benefits to society, including accelerated scientific progress, improved economic growth, better decision making and risk management, and enhanced healthcare, it also generates significant concerns regarding risks to the financial system and society. This column discusses how AI can interact with the main sources of systemic risk. The authors then propose a mix of competition and consumer protection policies, complemented by adjustments to prudential regulation and supervision, to address these vulnerabilities.
In recent months we have observed sizeable corporate investment in developing large-scale models – those whose training requires more than 10^23 floating-point operations – such as OpenAI’s ChatGPT, Anthropic’s Claude, Microsoft’s Copilot and Google’s Gemini. While OpenAI does not publish exact numbers, recent reports suggest ChatGPT has roughly 800 million weekly active users. Figure 1 shows the sharp increase in releases of large-scale AI systems since 2020. The fact that people find these tools intuitive to use is surely one reason for their rapid, widespread adoption. Their seamless inclusion in existing day-to-day platforms has, in turn, encouraged companies to integrate AI tools into their processes.
Figure 1 Number of large-scale AI systems released per year
A growing literature examines the implications for financial stability of AI’s rapid development and widespread adoption (see, among others, Financial Stability Board 2024, Aldasoro et al. 2024, Daníelsson and Uthemann 2024, Videgaray et al. 2024, Daníelsson 2025, and Foucault et al. 2025). In a recent report of the Advisory Scientific Committee of the European Systemic Risk Board (Cecchetti et al. 2025), we discuss how the properties of AI can interact with the various sources of systemic risk. Identifying related market failures and externalities, we then consider the implications for financial regulatory policy.
Artificial intelligence – encompassing both advanced machine-learning models and, more recently, large language models – can solve large-scale problems quickly and change how we allocate resources. General uses of AI include knowledge-intensive tasks such as (i) aiding decision making, (ii) simulating large networks, (iii) summarising large bodies of information, (iv) solving complex optimisation problems, and (v) drafting text. There are numerous channels through which AI can create productivity gains, including automation (or deepening existing automation), helping humans complete tasks more quickly and efficiently, and allowing us to complete new tasks (some of which have not yet been imagined). However, current estimates of the overall productivity impact of AI tend to be quite low. In a detailed study of the US economy, Acemoglu (2024) estimates the impact on total factor productivity (TFP) to be in the range of 0.05% to 0.06% per year over the next decade. Since TFP grew on average about 0.9% per year in the US over the past quarter century, this amounts to less than one-tenth of recent trend growth – a very modest improvement.
Estimates suggest a diverse impact across the labour market. For example, Gmyrek et al. (2023) analyse 436 occupations and identify four groups: those least likely to be impacted by AI (mainly manual and unskilled workers), those where AI will augment and complement tasks (occupations such as photographers, primary school teachers or pharmacists), those where the impact is difficult to predict (among others, financial advisors, financial analysts and journalists), and those most likely to be replaced by AI (including accounting clerks, word processing operators and bank tellers). Using detailed data, the authors conclude that 24% of clerical tasks are highly exposed to AI, with an additional 58% having medium exposure. For other occupations, they conclude that roughly one-quarter are medium-exposed.
Our report emphasises that AI’s ability to process immense quantities of unstructured data and interact naturally with users allows it to both complement and substitute for human tasks. However, using these tools comes with risks. These include difficulty in detecting AI errors, decisions based on biased results because of the nature of training data, overreliance resulting from excessive trust, and challenges in overseeing systems that may be difficult to monitor.
As with all uses of technology, the issue is not AI itself, but how both firms and individuals choose to develop and use it. In the financial sector, uses of AI by investors and intermediaries can generate externalities and spillovers.
With this in mind, we examine how AI might amplify or alter existing systemic risks in finance, as well as how it might create new ones. We consider five categories of systemic financial risk: liquidity mismatches, common exposures, interconnectedness, lack of substitutability, and leverage. Table 1 summarises the features of AI that can exacerbate each of these risks.
Table 1 How current and potential features of AI can amplify or create systemic risk
Capabilities we have not yet seen, such as the creation of a self-aware AI or complete human reliance on AI, could further amplify these risks and create additional challenges arising from a loss of human control and extreme societal dependency. For the time being, these remain hypothetical.
In response to these systemic risks and the associated market failures (fixed costs and network effects, information asymmetries, bounded rationality), we believe it is important to review competition and consumer protection policies, as well as macroprudential policies. Regarding the latter, our report sets out several key policy proposals.
In every case, it is important that authorities engage in the analysis required to obtain a clearer picture of the impact and channels of influence of AI, as well as the extent of its use in the financial sector.
In the current geopolitical environment, the stakes are particularly high. Should authorities fail to keep up with the use of AI in finance, they would no longer be able to monitor emerging sources of systemic risk. The result would be more frequent bouts of financial stress requiring costly public sector intervention. Finally, we should emphasise that the global nature of AI makes it important that governments cooperate in developing international standards, to avoid actions in one jurisdiction creating fragilities in others.
Source: cepr.org