The UK government recently published its plan for using AI to boost growth and deliver services more efficiently. It also signals a fundamental shift in how the United Kingdom aims to position itself as a global leader in AI innovation.
The AI Opportunities Action Plan gives further evidence on how the government intends to regulate cutting-edge AI.
The timing of this plan, ahead of the Paris AI Action Summit in February, positions the United Kingdom to play a significant role in shaping global discussions on AI governance. Its plans to give more powers to the AI Safety Institute (AISI), a directorate of the Department for Science, Innovation and Technology, could enhance the United Kingdom’s influence in international cooperation on AI safety and governance by leading the way on legislation and enforcement.
The previous Conservative government’s approach, outlined in its Pro-Innovation Approach to AI Regulation white paper, relied heavily on existing regulators and nonbinding principles.
But Secretary of State for Science, Innovation, and Technology Peter Kyle has said there will be a significant shift in the United Kingdom’s regulatory approach, moving from voluntary cooperation to mandatory oversight of the most advanced AI systems. After examining these systems, regulators could ask the companies that develop them to make changes.
The government is proposing the Frontier AI Bill, which would turn the AISI into a statutory body with legal powers of its own rather than a purely advisory role. The bill could also grant the AISI unprecedented authority to require developers to share their models for testing before market release and to offer feedback.
This new regulatory shift differs from the European Union’s approach in two important ways. First, the European Union has opted for a voluntary code of practice for general-purpose AI systems.
Second, the EU AI Act takes a comprehensive approach, regulating AI applications across risk levels and sectors, from high-risk uses in health care and education to consumer-facing AI systems. In contrast, the United Kingdom’s proposed bill appears more narrowly focused on cutting-edge AI systems before they are released to the market.
The government plans to take forward 48 of the plan’s 50 recommendations from the outset, demonstrating a strong commitment to developing the necessary foundations for AI advancement. There are also “partial” agreements to consider visa schemes for workers who are highly skilled in AI and the creation of a copyright-cleared dataset for training or improving AI systems.
These measures aim to address crucial gaps in the United Kingdom’s AI ecosystem. The focus on infrastructure and AI skills suggests that maintaining competitiveness in AI requires more than just a favourable regulatory environment; it also needs robust capital investment.
However, several challenges remain. The focus on advanced AI systems, while important, has drawn criticism for potentially overlooking broader AI-related risks. There are legitimate concerns about whether this approach adequately addresses the full spectrum of challenges posed by widespread AI adoption across different sectors and use cases, such as developers using copyrighted material to improve their AI systems.
The success of this new approach will largely depend on several factors: the ability to introduce effective premarket testing procedures for cutting-edge AI systems without creating excessive barriers to innovation, and the capacity of regulators to balance oversight with innovation.
Success will also hinge on the effectiveness of these initiatives in strengthening the United Kingdom’s competitive position.
The United Kingdom’s approach represents a bold experiment in AI governance, one that charts a path distinct from the European Union’s.
This plan marks a decisive moment in UK AI policy. The success or failure of this targeted approach could have significant implications for how other nations balance comprehensive AI oversight with focused regulation of the most capable systems.
Source: RAND