Geopolitics is no exception to the rapid spread of AI through the fields of human endeavour. As elsewhere, the rapid emergence of AI into the popular narrative has led to the commonly held belief that AI’s technical capabilities, and therefore its transformational potential, are somewhere between vast and unlimited. However, like any category of technology, AI has limits and understanding where those limits lie is crucial to understanding how it might shape global affairs in the near future.
First, it is important to draw a distinction between AI’s indirect and direct effects on geopolitical competition. The indirect effects include, for example, its impact on economic growth, productivity, social stability, and social cohesiveness. The direct effects involve the use of AI for geopolitical and geostrategic ends—most obviously, military applications, but also more broadly for decision support within national decisionmaking structures and political leadership.
Perhaps unsurprisingly, predictions about both categories vary widely. A study by the McKinsey Global Institute found that AI had the potential to add $13 trillion worth of economic activity (or about 16 percent of cumulative global GDP) by 2030; a similar study by PwC was even more optimistic, putting potential growth at nearly $16 trillion. Naturally, these prospective benefits are not likely to be evenly distributed—the PwC study, for example, posits that the AI-driven productivity difference between China and Latin America could be as high as a 9.4-to-1 advantage in favour of China. Taken at face value, this suggests that existing economic growth disparities (and the geopolitical power imbalances that grow from them) are likely to be exacerbated.
This, of course, is not the whole story. The rapid adoption of AI—and particularly its application to geopolitics—is not simply an economic story told in rosy growth projections. Even leaving aside its other impacts, the explosive growth in AI investment could lead to two problematic scenarios: either AI is as profoundly and rapidly transformative as its most enthusiastic backers suggest, in which case it may produce unpredictable social disruptions with security consequences; or it is not, in which case the enormous investments made by companies and governments may produce a bubble whose bursting could similarly result in unpredictable and potentially destabilising consequences.
The direct effects of AI are certainly more evident now than five years ago. Part of this reflects the rapid emergence of consumer and professional AI in the 2022–23 timeframe, which transformed AI from a mere bullet point on a list of future technologies to the primary subject of discussions around emerging technology. Yet its limitations are as important to consider as its possibilities. AI—an arguably over-broad term for a group of related technologies and concepts—is, at a fundamental level, different from human intelligence; it is organised and functions differently, and the apparent ‘cleverness’ of the AI tools that ordinary consumers can now access is more a function of design than of any fundamental similarity to human cognition.
That is not to suggest that AI lacks serious potential—far from it. In research contexts (for example, improving the development and iteration of computer code, medical scanning, or the identification of astronomical or environmental phenomena), machine learning, with its ability to process huge amounts of data and recognise patterns, may in fact be transformative. However, these relatively unglamorous (if important in the long term) use cases are not the primary drivers of governmental interest.
Take, as a particularly pertinent example, the debate over AI-driven autonomous weapons. Most militaries insist that while they are working to integrate AI to maintain strategic advantage, a human will always make the decision to employ weapons. Yet, as the war in Ukraine has demonstrated, electronic warfare and jamming are crucial to the defence against small drones. As those techniques improve, the most logical countermeasure will be to build autonomous target-recognition and decisionmaking capability into weapons—meaning that a drone whose connection to its human operator has been lost or severed could still complete an attack based on its own ability to recognise and categorise potential targets.
It is important not to overstate the existing impact of autonomy or drones on the battlefield. In Ukraine, for example, despite the proliferation of small weaponised drones (and the accompanying proliferation of strike videos), artillery remains a far deadlier threat, as well as having significant advantages such as relative immunity to inclement weather. Autonomous systems—either integrated directly into weapons or set back from the front lines to support command decisions—will certainly offer advantages in terms of speed and the ability to process complex information quickly. One of the brutal lessons of Ukraine, however, is that conventional land warfare remains heavily dependent on the ability of combatant forces to absorb horrific human losses without collapsing.
More to the point, much of AI’s broader potential depends upon its integration with human users and into human systems. An enhanced decisionmaking system where AI sorts through enormous data sets and presents options to human operators (be they military commanders or politicians) could theoretically be an enormously powerful tool that transforms diplomacy, intelligence, and security. Yet if such a system overlooks or miscategorises key data, presents it in a confusing or untrustworthy fashion, or fails to respond predictably to operator input, it may be worse than useless: it could endow its users with false confidence while actively inhibiting or artificially circumscribing their actions. Cases like the shooting down of Iran Air 655 by the USS Vincennes in 1988—where a combination of user error and poor interface design aboard the warship’s AEGIS combat management system led to a catastrophic outcome—demonstrate the profound importance of ensuring that information technology is designed to complement rather than hinder human decisionmaking.
Where does this leave the future of geopolitics? The key point in understanding the AI ‘arms race’ is that it is layered atop, rather than distinct from, existing power structures and mechanisms in the global order. Emerging technologies do not spring fully formed from a single genius; they require enormous infrastructure, vast investment, and extraordinary amounts of human capital—and even then, a laborious process of failure and iteration must occur before they begin to approach their potential.
A recent report suggests that the concentration of AI resources—computing power, major companies investing in AI research, universities with significant computer science departments—in the existing superpowers makes it unlikely that any middle power will be able to ‘leapfrog’ onto the top table on the basis of artificial intelligence. Nor do any major powers seem to be hedging against AI: the Chinese government has long been heavily invested in AI and autonomous technologies, viewing advancement in these fields as key to achieving its geopolitical goals. The U.S. government has generally taken a similar position, and the Trump administration was particularly forceful in giving major investors in AI central roles in government, while indicating a desire to rebalance defence spending away from traditional military hardware towards emerging technology. Even smaller powers are betting heavily on AI—the UK government, for example, recently outlined a plan to ‘unleash AI’ across government.
In short, governments that see themselves as engaged in geopolitical competition at some level all appear to be betting that AI will enable them to improve their relative positions. However, given the concentration of resources and the nature of AI development, there seems to be limited chance that AI might allow smaller powers to ‘leapfrog’ their way to greater influence. More likely, the two major powers will set the conditions around AI development and use, with which smaller countries will largely have to contend—though the presence of specific smaller states in crucial parts of the supply chain (as Taiwan is with semiconductors) could create bottlenecks and sharpen competition. AI might also, in specific circumstances, hyper-empower nonstate armed groups: its ability to generate malicious computer code, aid in the development of novel biological weapons, and provide a cheap alternative to guided munitions—once solely the preserve of superpower militaries—injects an element of instability and profound unpredictability into geopolitics.
To return to the question of indirect effects, appreciating AI’s true capabilities helps us understand where it may impact geopolitics. Though estimates vary, we may be decades or more away from achieving true human-level machine intelligence. Nevertheless, if multinational corporations view AI agents as sufficiently sophisticated to handle the bulk of customer service tasks—and if they do not face significant consumer backlash for making this replacement—the resulting surge in unemployment in regions where providing cheap labour for customer service was once a mainstay could contribute to political unrest. A national army (or an armed nonstate actor) might acquire a new autonomous capability that provides a transformational tactical edge—or it might become overconfident, overextended, and ultimately overwhelmed. In short, AI may well transform geopolitics, but our willingness to believe in its transformative power may prove to be its greatest impact of all.
Source: rand.org