AI and New Geopolitics: Rethinking Power in the Digital Era

Cuihong Cai is Professor of International Relations at the Center for American Studies at Fudan University and serves as the deputy director of the Center for Global AI Innovative Governance. Her research is situated at the intersection of technology and global politics, with a particular focus on cyber diplomacy, digital governance, and the evolving dynamics of U.S.-China relations in the digital age. Over a distinguished academic career, Dr. Cai has explored the impact of emerging technologies on national security and the international order. She is the author of several influential books, including Cyberpolitics in U.S.-China Relations, Global Cyber Governance, and Cyber Governance in China: Balancing State Centrism and Collaborative Dynamics. Her scholarly work has been published extensively in leading Chinese and English journals, establishing her as a prominent voice in the study of cyber sovereignty and international technological competition. Dr. Cai has held prestigious fellowships and visiting scholar positions at Yale University, the University of California, Berkeley, and the Georgia Institute of Technology.

Terry Wu ’28 interviewed Dr. Cuihong Cai on Friday, April 3, 2026.

Photograph and biography courtesy of Dr. Cuihong Cai.

Scholars of international relations often point to past technological shifts, like nuclear weapons or the rise of the internet, as moments that redefined power and security. As AI rapidly advances, do you expect AI to lead to a similar or a fundamentally different kind of transformation in the global balance of power?

AI represents a transformation comparable to earlier technological shifts, but in crucial ways, it is fundamentally different. Nuclear weapons reshaped global security through deterrence and existential risk, while the internet transformed connectivity and the flow of information. AI is emerging as a general-purpose technology that combines elements of both.

Jensen Huang describes AI as a full-stack technology, spanning from infrastructure and energy to models and applications. It reshapes diverse dimensions of power, including economic productivity, military decision-making, governance, and knowledge production.

In effect, AI acts as a new layer of systems integration. For example, in the Russia-Ukraine war, AI has accelerated decision-making cycles through real-time data processing, satellite imagery, and targeting systems.

Beyond the military, AI is transforming economic and industrial systems. Large Language Models (LLMs), such as ChatGPT and DeepSeek, are embedded into workflows across software development. At the same time, competition over AI infrastructure, particularly semiconductors and energy, has intensified. This reflects what scholars call “weaponized interdependence,” where control over key chokepoints in global supply chains can translate into geopolitical leverage.

Unlike the nuclear era, where power was concentrated in a small number of strategic assets, AI diffuses power across interconnected systems. As a result, the emerging global order has started to become a persistent competition embedded within deep technological interdependence. AI is reshaping the very structure through which power is exercised and contested. The central question, therefore, is not only who leads in AI, but what kind of system this competition is creating.

How do differences in political systems shape each country’s approach to AI regulation and deployment?

Success in AI is predominantly a matter of state capacity and coordination. China’s strength lies in its ability to align industrial policy with infrastructure, allowing for the seamless integration of AI into smart cities and massive ecosystems like WeChat. The United States, however, looks to the private sector, companies like OpenAI and Google, to lead the change. These different engines of growth produce distinct trajectories. While the U.S. pushes the boundaries of frontier models, China excels at scaling and diffusing technology across its vast market. The competition is a contest over who can most effectively weave these technologies into society.

Timing also plays a critical role in how these nations approach oversight. The United States typically follows a “deploy first, regulate later” philosophy, allowing LLMs to reach the public well before formal frameworks are established. China, however, integrates oversight into the development phase, often requiring AI providers to clear content and security hurdles before a full-scale release. This contrast reflects two distinct philosophies on how to balance the speed of innovation with the necessity of risk management.

Regional differences in data governance further illustrate these diverging paths. While the EU prioritizes individual privacy through the General Data Protection Regulation (GDPR), the U.S. maintains a decentralized, sector-specific model. China takes a different route, centering its framework on data security and systemic risk management. Though their institutional priorities vary, all three are ultimately grappling with ways to govern data in an economy driven by AI.

You have argued that AI can handle large-scale content moderation but lacks the reasoning required for policy. In global governance, is the bigger challenge developing shared ethical standards or managing differences between national regulatory approaches?

It is a mistake to view AI as a tool for content moderation alone while ignoring its policy implications. Given its range of capabilities, from large-scale filtering to high-level decision assistance, AI is already deeply embedded in the policy process. The focus should be on how institutional structures and human judgment adapt to its growth.

On a normative level, a surprising degree of convergence has already emerged. Through the work of the OECD, UNESCO, and the AI Safety Summit, a shared vocabulary of safety, accountability, transparency, and human oversight has taken root. The challenge, therefore, is not a lack of ethical consensus. In fact, the foundational principles are already in place.

The real friction begins at the institutional level, where this consensus often fragments. Translating broad technological ideals into functional governance reveals deep structural divides. The EU prioritizes a risk-based, rights-oriented framework; the U.S. maintains a more decentralized regulatory structure; and China embeds security and control directly into the early stages of deployment. The major hurdle is how these variations are prioritized and operationalized within different systems.

Political realities further complicate this institutional divergence. As AI becomes closely intertwined with national security and industrial policy, regulatory choices shift from technical debates to geopolitical maneuvers. These shifts are rarely about code alone. When strategic vulnerabilities like data flows and semiconductors are at stake, the desire for international alignment vanishes. Instead, states prioritize the pursuit of national interests.

In your work, you describe the “AI-driven kill chain” as an “Oppenheimer moment” for military ethics. Given the sheer speed of these tactical systems, is it still realistic to maintain the “human-centric” responsibility loop that you advocate? Or do you expect this technology to strip us of our role as effective gatekeepers?

Describing the rise of AI as an “Oppenheimer moment” is useful for capturing the ethical shock of a major technological breakthrough, but the comparison is incomplete. Unlike nuclear weapons, which concentrate destructive power in a single, discrete system, AI distributes decision-making across complex, socio-technical networks. The fundamental challenge, therefore, is the diffusion of responsibility across these networks.

It is undeniable that AI is compressing decision time and accelerating the “kill chain,” the process from detection to action. In contexts like missile defense or drone swarms, reaction times can approach or even exceed human cognitive limits. However, this speed does not eliminate human agency so much as relocate it. Human control is shifting from real-time intervention at the moment of action to the earlier stages of the system’s development.

This redistribution of responsibility occurs across three distinct phases. The “Design Stage” is when humans define the core objectives, training data, operational constraints, and rules of engagement. Responsibility is encoded into the system’s architecture before it ever reaches the field. Next, the “Deployment Stage” is when human judgment remains essential in deciding where, and under what specific conditions, a system is permitted to function. In the final “Operational Stage,” human involvement follows a spectrum of supervisory oversight. A “human-in-the-loop” model requires a person to actively authorize every action, whereas a “human-on-the-loop” model allows the system to function autonomously while a human supervisor monitors and retains the power to veto or override decisions. In high-speed tactical scenarios, such as missile defense or drone swarms, the system may operate “out of the loop,” meaning it executes actions fully autonomously because the required reaction time exceeds human cognitive limits. In these cases, the “gatekeeping” is simply relocated to the earlier stages of design and deployment.

The challenge for the future is to ensure that as decision-making accelerates, our frameworks for accountability evolve to meet the speed of the technology.

China has long emphasized cyber sovereignty, prioritizing state control over data and digital infrastructure. How does this principle work alongside China’s global AI governance strategy, especially in contrast to more open or market-driven models in the U.S. and Europe?

The relationship between cyber sovereignty and global AI governance is often framed as a conflict, but they actually function on two distinct and complementary levels. Cyber sovereignty serves to define the internal boundaries of authority, establishing who is responsible for data and infrastructure within a specific jurisdiction. Global governance, by contrast, acts as the mechanism for coordinating across those sovereign boundaries. The central challenge is how distinct systems can achieve meaningful interoperability.

Cyber sovereignty and global governance are not mutually exclusive. Sovereignty provides the internal structure of authority, while global governance seeks the common ground necessary for safety standards and risk mitigation across different regimes. In my work, I characterize the Chinese approach as a hybrid model rather than a monolith of state power. This system combines centralized strategic coordination with collaborative implementation involving private platforms and actors, allowing the state to maintain a clear sovereign anchor while retaining the flexibility required to navigate a volatile technological landscape. Having this hybrid perspective helps identify exactly where genuine cooperation on global AI standards remains possible.

There is often a tendency in Western discourse to frame China’s digital governance model primarily through the lens of control and surveillance. But your work suggests a more complex interplay of state capacity, economic strategy, and technological development. In that context, how should Western policymakers better understand China’s digital governance model?

The Western tendency to frame China’s digital governance solely through the lens of control captures a real facet of the system, but it remains analytically insufficient. We should instead view the model as coordinated governance under multi-objective optimization. In this framework, the state simultaneously balances security and technological capability.

A vital dimension of this model is governance as capacity. China’s approach is defined by its ability to operationalize complex systems on an immense scale. This execution capacity is visible in the physical rollout of infrastructure. China has deployed over 4.8 million 5G base stations, accounting for roughly 60% of the global total.

Furthermore, China treats regulation and political economy as intertwined domains. Digital oversight is rarely a standalone constraint. It is viewed as a tool of industrial policy. The scale of the digital economy, which contributed roughly 40% of China's GDP in recent years, demands that regulation aligns platform ecosystems with broader goals in manufacturing and logistics. For instance, the state has built a layered system design. Recent generative AI regulations mandate security assessments and content standards before a technology is fully deployed. This pre-emptive architecture allows the state to manage systemic risk without stalling the integration of AI into the economy.

Each region structures the trade-offs of an AI-driven economy according to its own institutional premises. For Western policymakers, the first step toward a grounded international dialogue is recognizing that China’s model is a systematic effort to optimize for growth and stability simultaneously.

Terry Wu ’28, Student Journalist

ITU Pictures
