This report by Singapore-based expert Damien Kopp analyses 25 national AI strategies and reaches an unequivocal conclusion: today’s pursuit of autonomy is being built on borrowed technologies. We have entered the era of the “sovereignty paradox.” The more governments and companies invest in building their own AI, the more they reinforce their structural dependence on a small number of foreign suppliers for chips (GPUs), cloud infrastructure, and foundation models.
AI is no longer just a technology; it has become a geopolitical supply chain, comparable to energy, structured around a handful of critical chokepoints. Each link in this chain is concentrated, territorially anchored, legally constrained—and therefore potentially instrumentalised. The strategic question is no longer one of ownership, but of control over critical dependencies and the ability to continue operating in the event of disruption.
A comparative analysis of 25 countries shows that only the United States and China come close to “full-stack” sovereignty (hardware, cloud, models, data). For the rest of the world, the reality is one of managed dependency, expressed through four main archetypes:
1. The partnership model (e.g. United Arab Emirates, United Kingdom, Australia): these countries prioritise speed. They accept deep technological dependence on U.S. tech giants in exchange for immediate access to state-of-the-art capabilities.
2. Regulatory sovereignty (e.g. France, Canada, Germany): this is the European path. Control is exercised through law (AI Act, GDPR) over infrastructure that is not owned. It is a powerful trust-building lever, but one that does not resolve dependence on foreign chips and cloud infrastructure.
3. “Full-stack / hybrid” sovereignty (e.g. United States, China, South Korea, Japan): these nations invest heavily in hard infrastructure—semiconductor fabs and national compute capacity—to reduce their dependence over the medium term.
4. The “Open-Yet-Local” model (e.g. India, Brazil, Singapore): unable to compete on hardware, these countries focus on linguistic inclusion and control over local use cases, relying on foreign infrastructure while retaining control over the cultural and application layers.
None of these models fully escapes dependency. But some manage it strategically, while others simply endure it. The paradox is clear: the faster a country seeks to deploy AI, the more it may end up reinforcing the very dependencies it claims to reduce.
“AI is no longer just a technology: it has become a geopolitical supply chain.”
KEY RECOMMENDATIONS
For both enterprises and public decision-makers:
1. Treat AI as a strategic business continuity risk, on par with energy and critical supply chains.
2. Make dependencies visible and measurable through dedicated tools such as the Digital Resilience Index, in order to prioritize investments where disruption would have the greatest impact (an illustrative scoring sketch follows this list).
3. Build resilience by design through technology-agnostic architectures, supplier diversification, exit capabilities, and local control over data and models.
4. Elevate AI governance to board level, fully integrating geopolitical, energy, and industrial dimensions.
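To make recommendation 2 concrete, a minimal sketch of such a dependency-scoring exercise is given below, assuming a simple 0–5 scale across three hypothetical criteria (supplier concentration, criticality, substitutability). The class and function names, criteria, weights, and sample scores are illustrative assumptions only; they do not reproduce the report's actual Digital Resilience Index methodology.

```python
from dataclasses import dataclass

# Hypothetical scoring dimensions for a single technology dependency.
# The 0-5 scale and criteria are assumptions for illustration only; they do
# not reproduce the report's actual Digital Resilience Index methodology.
@dataclass
class Dependency:
    layer: str              # e.g. "GPUs", "cloud", "foundation models", "data"
    concentration: int      # 0 = many suppliers ... 5 = single foreign chokepoint
    criticality: int        # 0 = marginal ... 5 = operations halt if disrupted
    substitutability: int   # 0 = drop-in alternative exists ... 5 = no realistic exit path

def risk_score(dep: Dependency) -> float:
    """Average the three illustrative 0-5 criteria (higher = riskier)."""
    return (dep.concentration + dep.criticality + dep.substitutability) / 3

def prioritise(deps: list[Dependency]) -> list[tuple[str, float]]:
    """Rank dependencies so investment goes first where disruption would hurt most."""
    return sorted(((d.layer, risk_score(d)) for d in deps),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    portfolio = [
        Dependency("GPUs", concentration=5, criticality=5, substitutability=4),
        Dependency("Cloud infrastructure", concentration=4, criticality=5, substitutability=3),
        Dependency("Foundation models", concentration=3, criticality=4, substitutability=2),
        Dependency("Training data", concentration=2, criticality=4, substitutability=2),
    ]
    for layer, score in prioritise(portfolio):
        print(f"{layer:22s} risk {score:.1f} / 5")
```

Even a toy ranking like this makes the point of recommendations 2 and 3: dependencies can only be managed, diversified, or designed around once they are scored and compared on a common scale.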
KEY LEARNING FOR EUROPE
Europe’s strategy should lie at the intersection of two complementary archetypes:
– Regulatory sovereignty (Archetype 2), grounded in a robust European regulatory framework based on the rule of law, the protection of fundamental rights, accountability, governance, and trust;
– paired with the Open-Yet-Local approach (Archetype 4), which combines openness with sovereignty through use by retaining control over critical use cases, strategic data, and sector-specific value chains.

Together, these two approaches outline a distinct European path: neither technological autarky nor passive dependence, but a pragmatic and internationally relevant “third digital way”.
“Most AI strategies depend on critical foreign chokepoints. It is time to measure those dependencies and build proactive resilience.”

