The UK has recently announced investment agreements with several large US tech companies. But there are risks in over-dependence on these firms. In this post, Aleks Turobov argues that a truly sovereign path for AI requires building domestic capacity and embracing a multipolar world of digital rule-making.

London’s recent tech announcements – a ‘Sovereign AI Unit,’ a landmark compute deal with NVIDIA, a new pact with Washington – mark a strategic juncture. The question is whether the UK builds institutional capacity to shape AI rules or accepts foreign technology on external terms. To secure sovereign capacity, the UK should mandate that a rising percentage of public-sector AI procurement goes to domestic firms with data sovereignty provisions, and lead a coalition of middle powers in co-developing interoperable AI standards.
The debate has split along familiar lines. One side emphasises the importance of scale and access to cutting-edge compute to accelerate deployment across public services. The other warns of over-reliance on foreign firms for critical infrastructure. However, what matters strategically is the locus of control, not the source of technology. Without sovereignty over the data and infrastructure that underpin AI systems, the UK risks lacking the enforcement power to audit algorithms, enable competition, or set standards aligned with national interests.
AI, or digital, sovereignty is a matter of statecraft: a strategic commitment to develop and control AI infrastructures (sovereignty over AI), as well as to use AI tools for governance and security (sovereignty through AI). Current policy obscures the insight that sovereignty over critical infrastructure and data is a precondition for effective sovereignty through AI – and for credible partnerships with technology firms on shared terms. This is what will make collaboration durable.
Effective digital statecraft derives from institutional capacity – the authority and legitimacy to set rules, mobilise domestic resources, and enforce standards – not from financial resources alone. A government that can only deploy AI through a single vendor’s terms of service has not acquired capability – it has rented access, on terms that can be revised by decisions made elsewhere.
The distinction between partnership and dependency turns on enforcement capacity. States that control the data and infrastructure layer of AI systems possess five critical levers that states relying solely on vendor relationships lack.
- First, audit authority. Proprietary algorithms deployed in public services – from welfare assessments to healthcare triage – require independent verification for accountability and legitimacy. Without access to model weights, training data, and decision logic, governments are unable to assess bias, accuracy, or compliance with national regulations. While the UK AI Security Institute evaluates frontier models for risks, what is needed is audit authority over the hundreds of deployed systems (e.g. in NHS trusts, welfare offices, and councils) where access depends on vendor consent. Infrastructure sovereignty establishes the legal and technical basis for mandating algorithmic transparency as a contractual and regulatory requirement.
- Second, data localisation for sensitive functions. Control over where data is stored and processed enables governments to enforce jurisdictional rules on privacy, security, and access. When sensitive citizen data flows through foreign-controlled infrastructure, it becomes subject to the legal reach of foreign governments, including intelligence frameworks and sanctions regimes.
- Third, competitive procurement environments. Infrastructure sovereignty prevents vendor lock-in by enabling multi-vendor architectures and interoperable standards. A government that owns its data infrastructure can switch between providers, negotiate on price and performance, and require compatibility across systems. Without this, procurement becomes path-dependent: the first vendor selected becomes the only viable option, eliminating competitive pressure.
- Fourth, standards-setting authority. States with domestic AI capacity can shape global standards rather than merely adopting them, including safety evaluations, interoperability protocols, and performance benchmarks. The ability to co-develop standards with partners, rather than implementing standards set elsewhere, is itself a form of geopolitical leverage.
- Fifth, hybrid models. Infrastructure control enables governments to combine domestic innovation with foreign partnerships strategically. A UK-controlled data layer allows domestic startups to build specialised applications on top of standardised infrastructure, creating a competitive ecosystem in which foreign and domestic firms coexist rather than one displacing the other. Without this, the UK risks repeating the DeepMind pattern: world-class research that, lacking domestic scale-up infrastructure, must seek foreign acquisition to reach the market, leaving the UK with influence but not sovereign control over the capability it nurtured.
This is the architecture of a durable partnership: the institutional capacity to negotiate terms, enforce standards, and build domestic capability while engaging with foreign firms.
The institutional and technical infrastructure for an asymmetric relationship is, unfortunately, already in place. US export controls, cloud service terms of use, and sanctions frameworks explicitly reserve the right to terminate access to critical technologies in alignment with US foreign policy objectives. Recent analysis shows how Washington could activate this ‘kill switch’, revoking European access to US cloud infrastructure or AI models through executive action, with no recourse for affected governments.
The domestic cost is equally significant. Every major AI procurement contract awarded to a foreign vendor without requiring data sovereignty or interoperability represents a decision not to build domestic capacity. The UK possesses world-class AI research institutions and a vibrant startup ecosystem. Yet when a Whitehall department or NHS trust deploys AI, it increasingly does so through a single foreign vendor’s platform, under terms set in Silicon Valley and subject to revision without UK consent.
This dynamic resembles what some scholars have termed a ‘Big Tech Coup’ – the wholesale outsourcing of state functions to unaccountable firms whose jurisdictional obligations are to foreign shareholders and courts. The result is a ‘technopolar paradox’: states invest heavily in AI infrastructure while simultaneously ceding the institutional capacity to govern it.
The risk is not that the UK lacks access to advanced technology; clearly, it has that. The risk is that access without enforcement power creates structural vulnerability: a government that cannot audit the algorithms managing its energy grids, contest the terms governing its healthcare data, or require interoperability across public services has not acquired sovereignty through AI. So what is the alternative?
The current debate presents a false binary: align with Washington or risk irrelevance. This framing overlooks the multipolar digital order already emerging beyond the US-China rivalry. Middle powers and regional blocs are forging distinct approaches to AI governance that prioritise strategic autonomy without economic isolation. The African Union’s problem-solving, performative approach to AI policy demonstrates how resource-constrained actors can leverage policy design to shape markets, using procurement standards and regulatory frameworks to attract investment on African terms rather than acting as passive recipients of technology. ASEAN’s interoperable digital economy frameworks prioritise cross-border data flows and mutual recognition of standards, creating a regional market that no single vendor can ignore. APEC’s principles-based approach to AI governance provides a non-binding framework for coordination, enabling diverse national implementations while fostering consensus on core issues such as transparency and accountability. These initiatives represent a pragmatic approach to statecraft: building diverse partnerships, championing open standards, and cultivating local ecosystems to create negotiating leverage with major technology powers. The UK should recognise the strategic logic they embody.
The UK, with its regulatory sophistication, research excellence, and convening power, is well-positioned to lead a coalition of digitally ambitious middle powers, such as Canada, Australia, Japan, South Korea, and Singapore, in co-developing interoperable standards for AI safety, data portability, and algorithmic transparency. Such a coalition would create a substantial global market for AI solutions that meet shared standards, reducing dependence on any single jurisdiction’s technology base. It would establish mutual recognition of safety assessments, enabling firms certified in one jurisdiction to operate across the coalition. And it would provide collective bargaining power in negotiations with major technology firms.
Translating this analysis into policy requires a twin-track approach that builds domestic capacity while forging international coordination:
- First, mandate a ‘Sovereign Public Sector’: legislate that a significant and growing percentage of non-defence AI procurement across central government, NHS trusts, and local councils be reserved for UK-owned and UK-based firms, similar to established advance market commitments in broadcasting and defence procurement. This creates a predictable domestic market that incentivises investment and retains talent. Partnering with US technology companies can accelerate learning and capability building, provided that UK firms gain the expertise to develop tailored, British values-based AI solutions for public services. Procurement contracts should include data sovereignty provisions.
- Second, forge a ‘Middle Powers’ technology alliance: shift diplomatic effort toward building a coalition for co-developing interoperable AI standards. Focus initially on mutual recognition of safety assessments, data portability frameworks, and algorithmic transparency requirements. Establish shared testing facilities and evaluation protocols. Create a common market for AI solutions that meet coalition standards, reducing collective dependence on US and Chinese technology while maintaining constructive engagement with both. This converts the UK’s research excellence and regulatory sophistication into geopolitical leverage.
The UK’s AI investments lay the foundations for building the institutional capacity to govern AI. But sovereignty over critical infrastructure and data is what will make its partnerships durable.
Image: Pete Linforth from Pixabay
The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.