Jeffrey Ding on the Diffusion of AI Technology

Jeffrey Ding is an Assistant Professor of Political Science at George Washington University. Previously, he was a postdoctoral fellow at Stanford's Center for International Security and Cooperation, sponsored by Stanford's Institute for Human-Centered Artificial Intelligence. His research agenda centers on technological change and international politics. His book project investigates how past technological revolutions influenced the rise and fall of great powers, with implications for U.S.-China competition in emerging technologies like AI. Other research papers tackle how states should identify strategic technologies, assessments of national scientific and technological capabilities, and interstate cooperation on nuclear safety and security technologies. Jeff's work has been published in Foreign Affairs, Security Studies, The Washington Post, and other outlets. Jeff received his PhD in 2021 from the University of Oxford, where he studied as a Rhodes Scholar. He has also worked as a researcher for Georgetown's Center for Security and Emerging Technology and the Centre for the Governance of AI at the University of Oxford.

Kevin Wang ’27 interviewed Professor Jeffrey Ding on April 30, 2025.

In your 2024 book Technology and the Rise of Great Powers: How Diffusion Shapes Economic Competition, you argue that we should focus more on how states adapt to technological innovation, as opposed to what the innovations are and who developed them. What institutions and policies are the most effective in adapting to technological advancement?

The book argues that in past industrial revolutions, the countries that achieved technological leadership were not necessarily the ones that monopolized cutting-edge innovations but the ones that diffused and adopted these general-purpose technologies (GPTs) across their entire economies. I focus on education and training institutions, and the policies that support them, which broaden the base of engineering skills and talent associated with a GPT. For artificial intelligence (AI), for example, this means investing in and supporting alternative pathways for people to gain AI engineering skills, such as community colleges or other skill certification programs.

In your book, you introduce the concept of GPT infrastructure. How different are Chinese and U.S. GPT infrastructures, and how does this difference affect the ongoing AI arms race between the two countries?

The United States is better positioned than China to develop the skill infrastructure for AI diffusion. It has a broader pool of institutions that can train ordinary AI engineers to implement large language models (LLMs) across a wide variety of application sectors, and the U.S. AI ecosystem has stronger linkages between industry and academia to disseminate and share ideas across those different parts. So the United States is very well positioned in this AI competition, especially when the competition is anchored around GPT diffusion.

Huawei is testing a new AI processor that the company hopes can replace some high-end chips from Nvidia. For a long time, the United States has been trying to stop China from developing advanced AI and chip technologies. What opportunities and challenges might Huawei face in this quest?

U.S. export controls on Nvidia chips create an opportunity for Huawei because they eliminate its biggest competitor in China. One challenge is that many Chinese companies would still prefer to use Nvidia chips, not just to train AI models but to implement and run them, which is the inference stage. A bigger challenge is software-hardware integration, including Nvidia's Compute Unified Device Architecture (CUDA) framework: Huawei will have to overcome developers' preference for the established software-hardware integrations that work well with Nvidia chips.

In the past, Huawei has claimed several times that its new chips would rival those of Nvidia, but its chips fell short of expectations almost every time. How likely is that to happen again?

We have seen this pattern where Huawei announces that it has exceeded Nvidia on some benchmarks, but when people dig into the details, there are concerns about the efficiency and energy costs of using Huawei chips as opposed to Nvidia chips. This pattern will likely play out again. The question is whether Chinese companies have any alternative to Huawei; if they do not, Huawei and other Chinese companies might still benefit. Eventually, Huawei could build up its customer base and erode Nvidia's economic moat, and over time it could reinvest those gains into developing better technology. The concern is that Nvidia's monopoly and economic moat will not last and will eventually fade away.

How capable is China in integrating its AI technology into its military and economy?

It is still very early days for the actual integration of AI into productive economic processes and effective military applications on a wide scale. This is normal for a GPT: in the past, it has taken multiple decades from the initial innovation of a GPT to its impact on economy-wide productivity. For China, we see a lot of momentum and hype around DeepSeek and other Chinese AI models, but it is still too early to tell whether, and to what extent, China will be effective at adopting AI at scale, not just in the economy but also in the military. One important factor is that many Chinese companies are still not using cloud computing services, an enabling technology for adopting these AI models into productive business processes. My book argues that China faces a diffusion deficit in this space.

In Israel, the military is using AI to make some autonomous decisions, which is unnerving a lot of people. Will China also go down that route in the future?

Israel is using AI to provide recommendations for targets. A lot of states will likely adopt that into their practices, but with humans still in the loop and making the final call. States are more likely to use AI as one input among many rather than completely ceding authority to AI systems in targeting decisions.

Your 2024 article “Keep Your Enemies Safer: Technical Cooperation and Transferring Nuclear Safety and Security Technologies” argues that “robust technical cooperation is crucial to building the trust for scientists to transfer tacit knowledge,” including knowledge pertaining to nuclear testing and security. You briefly discuss nuclear technology sharing during the Cold War: the Soviet Union and the United States shared some nuclear safety information, but the United States did not want to share some of the same technologies with China after it also became a nuclear-armed power. Fast forward to today: what possibilities are there for China and the United States to cooperate in emerging fields like AI safety and security, which are taking a back seat amid an escalating AI arms race?

It is important to take lessons from nuclear safety and security because in those areas, it was in the U.S. national security interest that its fiercest rivals had access to nuclear safety and security technologies. An accidental or unintentional nuclear detonation anywhere is a threat to peace everywhere. Some of the same logic applies to AI as well. One of the crucial lessons from the historical case studies is that there needs to be an established and robust basis of scientist-to-scientist cooperation. This type of technical cooperation is needed to cultivate trust between China and the United States and to implement any form of cooperation on AI safety and security techniques. The concrete recommendation from the paper is for both sides to prepare and cultivate those channels of technical cooperation. Then, if we see very transformative AI capabilities and a window of opportunity opens to work together on safety and security technologies and issues, we will already have a framework for scientist-to-scientist cooperation in place. It is mainly a question of trust and how we navigate it.

With the growing tariff war, could it be difficult to build that trust, especially if government policies on one side are rather unpredictable?

Yes, but I think that's why it is important to look back at the historical example of the Cold War. The Soviet Union and the United States were locked in an existential struggle, and there certainly were big trust issues at the geopolitical level and between the two countries' political leadership. But even in that context, there were opportunities for lower-level trust-building exercises.

It is easy to see how a nuclear detonation anywhere is bad for everyone everywhere. Would the harm be as obvious when it comes to AI safety and security? Depending on what a military is using AI for, an AI system that is not acting as it is supposed to might not produce an effect as observable as a nuclear detonation.

The best example here would be a military drone with autonomous weapons systems that either accidentally enters, or is hacked into entering, contested waters when the military operating it does not want it to do so. Those are the types of scenarios that China, the United States, and other countries would want to avoid, and they would want protocols in place to verify, communicate, and mitigate the risk of such a scenario happening.

In “Machine Failing: How Systems Acquisition and Software Development Flaws Contribute to Military Accidents,” you suggest that the U.S. military's acquisition process fails to adequately involve the end users, that is, the military. Industry is effectively in control of the procurement process, even though companies do not use their weapons systems to fight wars; the military does. We do not want to shoot down unarmed, innocent passenger aircraft. What reforms could address flaws in the military acquisition process and improve the international security environment at the same time?

In the paper, I also look at possible risk scenarios in which AI software could contribute to military accidents, such as accidentally shooting down a civilian airliner. The reforms I propose target the software development life cycle, the pathway by which software is developed. One concrete recommendation is to transition away from waterfall development models, where input from the end user, the military operator, occurs only at the end of software development, when it is hard to rework the system design. Instead of the waterfall model, we should move toward a more agile software development pattern that allows for early prototyping, more input from military operators early in the process, and more human-friendly interfaces. This enables the military to discover unanticipated or known vulnerabilities, which will hopefully create safer military AI applications.

Are you suggesting that involving the military earlier in the development process will allow it to become better involved in the acquisition process? As a result, equipment would be developed in a way more suited to the military's needs right from the start, as opposed to trying to figure out how to suit those needs after development is completed?

Yes. For example, incorporating the end users in any software project helps make the final interface more human-friendly. Software can work perfectly but be presented to the end user in a way that is unwieldy or unclear. The menus might not make sense, or you might have to click through too many sub-menus to get to the information you want. The touch screen might be too sensitive, leading to many accidental inputs. In all these cases, the software code might be running perfectly while the human-machine interface is not working smoothly. Early input and feedback channels from military operators can help address some of those issues. It is a matter of co-developing software technologies not just with software engineers and contracting companies, but also with the people who will end up using these technologies on the battlefield.

Kevin Wang '27, Student Journalist

Photo Courtesy of U.S. Army, Public domain, via Wikimedia Commons
