- Emerging technologies, geopolitics and trade wars have profound consequences for innovation, governance and economic resilience.
- They are also redefining escalation risks; future crises may stem from algorithmic error or unregulated dual-use technologies.
- Without urgent action, frontier technology governance risks being a casualty of strategic rivalry and domestic political polarization.
Artificial intelligence (AI)-powered drones are reshaping battlefields. Deepfakes are disrupting elections. Quantum breakthroughs could soon crack today’s encryption. As geopolitical fractures deepen, the world lacks the guardrails needed to manage the risks of frontier technologies.
As technological supremacy increasingly defines economic and national security, global cooperation is giving way to geopolitical fragmentation, with profound consequences for innovation, governance and economic resilience.
Emerging technologies are advancing rapidly – and so are the risks. Beyond AI, breakthroughs in synthetic biology, quantum computing, hypersonic missiles and autonomous weapons are rewriting the rules of competition. These technologies are increasingly used to project power, test alliances and escalate conflict, blurring the line between military and civilian domains.
Future crises may not stem from deliberate provocation, but from algorithmic error or unregulated dual-use tools – technologies that are built for civilian benefit but repurposed for coercive or destructive aims.
The dangers are no longer theoretical. AI-powered drones are already reshaping battlefield dynamics in Ukraine. Gene editing and machine learning can be used to create or modify viruses with bioweapons potential. Quantum computers could soon crack current encryption, threatening financial systems and critical infrastructure.
Meanwhile, in elections from Taiwan to the US, generative AI (GenAI) is supercharging disinformation and eroding public trust. And these risks are emerging amid deepening geopolitical fractures.
Then-US President Joe Biden and China’s President Xi Jinping last year issued a joint statement agreeing that humans – not machines – must control nuclear weapons. It was a historic acknowledgment by China of AI’s risks, but whether such commitments hold under pressure remains to be seen. To date, 58 states have endorsed the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. It’s a start, but it remains voluntary, non-binding and ultimately insufficient.
Global cooperation at a crossroads
The World Economic Forum’s Global Cooperation Barometer 2025 finds that international collaboration is stagnating. While 2024 saw some momentum on climate, tech and health, early signs in 2025 point to declining trust, rising trade fragmentation and faltering cooperation on key global issues.
Without urgent action, frontier technology governance – already a complex challenge – risks becoming another casualty of strategic rivalry and domestic political polarization.
Governments cannot govern frontier technology alone. The private sector is driving innovation, but regulation continues to lag. Open-source tools are being weaponized faster than safeguards can be built.
As these systems evolve faster than our understanding of them, governance must be adaptive, like software updates, to match the pace of change. Voluntary codes and advisory boards help, but they cannot replace global rules or, at the very least, internationally agreed-upon standards that align innovation with security.
In the absence of shared guardrails, both companies and countries are prioritizing speed and market dominance over stability and risk mitigation, thereby widening the gap between innovation and accountability.
AI is already being used in some countries to suppress dissent, expand surveillance and shape public narratives under the banner of national security – meaning the line between humanitarian and military AI deployment is vanishing.
Meanwhile, computing power and semiconductors are also becoming geopolitical choke points. As the US, the EU and China pursue diverging regulatory paths, innovators in emerging markets risk being locked out.
Global South perspectives are largely missing from the rule-making tables, but a governance framework built only by and for the few cannot be legitimate or sustainable. By contrast, international frameworks offer a bridge and shared global principles can help enable responsible innovation even among strategic rivals.
Technology governance as a strategic advantage
Governance must no longer be seen as red tape, but as infrastructure for innovation. Aligning corporate incentives with global security goals is not just ethical – it is strategic. Regulation is about more than compliance; it’s about market access, investment readiness and long-term trust.
Importantly, this kind of governance infrastructure also enhances global trust and resilience. Systems built with transparency, accountability and oversight foster confidence across borders, enabling cooperation, even in a fractured world.
Robust governance is a signal that a company is future ready, while technology ecosystems without transparency, testing and oversight risk losing investor confidence, global customers and public legitimacy.
According to a recent Forrester report, AI governance software spending is expected to quadruple by 2030, reaching $15.8 billion. The question is no longer if AI should be governed, but how and by whom.
Models are emerging. OpenAI’s partnership with US national laboratories to monitor chemical, biological, radiological and nuclear (CBRN) risks is one example of how public-private collaborations can realign innovation with global safety. But ad hoc arrangements driven by corporate altruism are not enough.
METR research shows wide variance in companies’ AI safety protocols: some include strong deployment safeguards; others omit them entirely. This is where a shared global governance approach, grounded in common principles, norms and interoperable frameworks, becomes essential.
Institutions like the UK’s AI Safety Institute and the US National Institute of Standards and Technology (NIST) offer promising blueprints by bridging technical expertise and policy, evaluating frontier models and building scalable risk registries. France’s $400 million public endowment for AI represents a shift from ethics talk to infrastructure funding.
Meanwhile, Global South countries are positioning themselves as neutral, trusted hubs that could contribute to a more globally distributed governance ecosystem. These emerging efforts highlight the potential for alternative models and underscore the importance of avoiding a future in which oversight is dominated by a few.
While national policies and corporate protocols are necessary components of responsible technology governance, a coherent global approach – grounded in shared principles and inclusive of diverse perspectives – is vital to build trust, reduce fragmentation and align innovation with public interest.
From fragmentation to shared stewardship
To shift from awareness to meaningful action, public and private actors must move from fragmented, reactive responses to proactive, coordinated governance grounded in shared principles. That means embedding “safety by design”, realigning market incentives and building the institutional capacity to monitor, adapt and enforce these principles as technologies evolve.
While verification and enforcement remain deeply uneven – and in some domains, inherently limited – they are still necessary aspirations. Fully credible oversight may not be possible, but it should be treated as a long-term goal.
Achieving this will require new institutions, technical breakthroughs and political will. In the meantime, governance must focus on building partial, adaptive systems that incentivize responsible behaviour even without perfect enforceability.
To do this, public and private sector stakeholders must:
- Codify human control: Ensure that no AI system can initiate nuclear or conventional strikes.
- Modernize global rules: Update treaties and verification mechanisms to address dual-use technologies and emerging risks, including by advancing cooperative approaches to transparency, accountability and responsible use, even where traditional verification and enforcement fall short.
- Convene inclusive global dialogues: Use platforms like the World Economic Forum’s Global Technology Retreat and the AI Safety Summits to align governments, companies, and civil society.
- Invest in resilience: Build independent oversight bodies, fund cross-border safety research and train tech-literate diplomats who can bridge engineering and diplomacy.
Absence of frontier technology governance ‘a global risk’
Frontier technologies are accelerating just as geopolitical tensions are escalating and the cost of inaction is too high. The absence of shared guardrails isn’t merely a governance failure – it’s a global risk.
As Tech Cold War authors Ansgar Baums and Nicholas Butts warn, “We do not need to agree on everything, but we must agree on some things, and we must talk about them in time.”