Two quick announcements before we get into it.
- ChatGPT’s Code Interpreter is absolutely incredible. You can upload datasets and in seconds without any coding experience explore what’s inside them. I’m running a chart contest this week, with the best Code Interpreter-created chart + 100 word description winning $250! Here’s something that took me a few minutes, a brief tutorial to get you started, and two lists of China-related databases: UCSD’s and NYU Shanghai’s. Here’s a form to submit.
- ChinaTalk is hiring a new part-time reporter! If you want to have a ton of fun covering modern China, and get paid doing it, consider applying here.
Money, Power, Tech Diffusion, and Glory
Jordan Schneider: Why is long-term productivity growth all that matters for great powers?
Jeffrey Ding: Technology is going to affect a range of different things we care about in great power competition. It’s going to affect the military realm. It’s going to affect how nations perceive other nations — their prestige, their soft power.
I’m most focused on economic power. It’s the most fungible aspect of power. You can transform and convert economic strength into military strength. A country that sustains its development becomes more reputable and gains soft power because of its development model.
You can see that with China and the Chinese model of development. Economic power is the most fungible and transferable currency for measuring power. Historically, we’ve seen the rise and fall of great powers occur through a pretty regular pattern: one country sustains productivity growth and economic growth at higher levels than its rivals, becomes the preeminent economic power, and then converts that into geopolitical and military strength.
Jordan Schneider: There’s a big difference between a two-year horizon and a twenty-year horizon. At that timescale, why is productivity growth really what counts?
Jeffrey Ding: In the long run, productivity growth is what sustains economic growth. Take China as an example. Its economic growth to date has been driven by a range of factors, most notably demographic advantages — a large, young workforce willing to work for low wages — and urbanization. The transition from a rural, agriculture-based economy to a more urbanized, industrial economy brings a lot of good opportunities for growth.
But those two advantages are fading as China tries to escape the middle-income trap and become an advanced, high-income economy. Economists and development scholars have found that the way to escape that middle-income trap — to sustain economic growth in the long run — is through productivity growth and adopting new technologies.
Jordan Schneider: You argue that great power status is fundamentally based on wealth. So what drives national economic growth over the long term?
Jeffrey Ding: “Great powers” is a fuzzy concept. A great power is a country that has both a large economy and an advanced economy. Those two pieces working together are important. It’s not enough to just have a very populous country with a large economy if that economy is not efficient, because efficiency is what sustains growth in the long run.
The rise and fall of great powers happens because new technologies create differences in economic growth. One country can sustain growth for a longer period at a higher rate on the back of new technologies, whereas another power suffers a decline.
Jordan Schneider: The timeframe of the average policymaker is about one to five years. On a five-year basis, there are a lot of different policy decisions that can have near-term impacts on economic growth. You have fiscal policy. You can juke around with monetary policy.
But if you zoom out over ten to forty years, the changes you can make on the edge with fiscal and monetary policy end up balancing out. What remains is how you absorb and adapt to new technologies.
Jeffrey Ding: We often focus on just the initial moment of innovation and assume that will make an impact within five years. If you poll the top machine learning experts today and ask them when we’ll get high-level machine intelligence, a lot of them say it will be by 2035. Some commentators say AI will change the balance of power within the next decade. That’s our horizon.
But a lot of these revolutionary advances in the past have taken decades — often four or five decades — to diffuse throughout society and actually make the impact that we think they will make.
Jordan Schneider: From a policy perspective, focusing on a twenty- or thirty-year horizon is fundamentally different from a five-year horizon. The types of things you want to bet on and the policies that are robust to different futures are different when you’re looking so far out and the future is uncertain. What’s so interesting to me about your research is just how often this has played out over time.
Atlantic Waves: Industrial Revolutions in America and Europe
Jordan Schneider: Let’s look into some case studies to illustrate just how important technology is to long-term national greatness, or to nations being able to claim their place at the top of the global food chain. What role did technology play in helping the UK supplant its rivals in the eighteenth and nineteenth centuries during the First and Second Industrial Revolutions?
Jeffrey Ding: The UK was able to sustain productivity levels higher than its rivals because it adopted iron advances, metalworking advances, and mechanization at scale more effectively than they did.
In the case of the Industrial Revolution, we often gravitate toward the fast-growing new industry of cotton textiles. My argument is that the real driver was the more gradual, protracted diffusion of iron machine-making and mechanization throughout the entire British economy. That is what allowed Britain to industrialize faster than rivals like the Netherlands and France.
Jordan Schneider: What are general-purpose technologies (GPTs), and why are they so important to the rise and fall of great powers?
Jeffrey Ding: General-purpose technologies have been deemed by economic historians as “engines of growth.”
General-purpose technologies are engines because they are fundamental advances that have the potential to transform broad swaths of the economy.
There’s a whole research agenda attached to them. There’s a lot of scope for continual improvement. Crucially, they make an impact only if there are a lot of complementary technologies across many different sectors that adapt.
Electricity is held up as the prototypical example of a general-purpose technology. Electricity is able to make an impact on productivity only if you change your factory layouts: instead of being driven by a central steam engine and a system of shafts and belts, the layout becomes decentralized, with each machine driven by its own electric motor.
Electricity was only made available to all these different applications with the rise of electric utilities, like central generating stations. That, in turn, depended on other complementary technologies, like steam turbines. These general-purpose technologies take a long time to diffuse throughout the economy. They rely on complementary advances across a wide range of economic sectors to make their ultimate impact.
Jordan Schneider: If you’re looking on a multidecadal horizon, GPTs are going to drive productivity growth. It’s not cotton textiles. It’s not the one-offs that give you significant productivity growth in one particular segment of the economy. It’s the GPTs that do it for your whole nation.
What that requires is not necessarily having Thomas Edison invent the light bulb, but it’s all the hard work necessary to hook up the entire country to electricity. Factory layouts, industrial organization, and the creation of firms are all hard, but they end up affecting the entire economy. Getting those things right — more than whatever is the hot emerging technology of the day — is what’s going to keep your country in the lead over the long term.
Jeffrey Ding: On your point about talent, we often hold up this image of the heroic inventor, like James Watt. Britain became the leading industrial power because they had the James Watts of the world to invent the steam engine.
But when economic historians have dug deeper into Britain’s source of advantage, they’ve underscored Britain’s average level of technical literacy. They had just this higher level of average technical literacy among the early versions of mechanical engineers, machinists, and those who were able to deploy the new iron-based machines in all these different industries.
Jordan Schneider: Geniuses are nice but not necessarily essential. It’s the next level down of having the implementers that live in all the different corners of the economy. They see whatever it is that’s new and exciting and then bring it into their expertise.
But let’s jump forward to the Second Industrial Revolution. How did it play out for Germany versus the US?
Jeffrey Ding: The Second Industrial Revolution was unfolding in Germany, the US, and the UK from around 1870 to 1914. The version of the story that I tell here is that the US became the preeminent economic power, not just in terms of size, but through the combination of size and overtaking Britain in economic efficiency, measured by either labor productivity or GDP per capita as a measure of the development level of the economy.
The key GPT trajectory was the spread of interchangeable manufacturing. It wasn’t necessarily who was exporting the most advanced chemicals at that time (which political scientists suggest is the reason Germany became a challenger to Britain). Rather, it was the spread of interchangeable manufacturing methods — what became known as the American system of manufactures — across the making of sewing machines, bicycles, and all sorts of devices in all sorts of sectors of the economy. This led to the US gaining a productivity advantage.
Other countries were ahead in terms of inventive genius, or they had the research centers of excellence — the DeepMinds of the world at that time. The US was able to diffuse this interchangeable manufacturing method because it had stronger connections between the frontier institutions, entrepreneurs, and engineers. They were able to build up a more practice-oriented mechanical engineering discipline. Part of that is due to investments in land grant universities that built up strong mechanical engineering developments all across the country, not just in the elite universities.
General-Purpose Tech and Human Capital
Teddy Collins: It seems there are at least two different categories of this diffusion potential or diffusion capability.
- One is the broad, basic literacy or “tacit knowledge” that exists throughout a population, as opposed to a handful of spiky inventors or entrepreneurs.
- The second is a willingness to experiment with and implement new approaches to infrastructure. You have to redesign workflows for physical settings. That’s a big capital expense.
Those two things seem a bit different, and it seems like they would come with different policy implications.
Jeffrey Ding: That’s a fair distinction. I tend to focus on the human capital argument: how do you build a wider base of average engineers associated with general-purpose technology?
There are a lot of other factors that drive the pace and intensity of GPT adoption, including some of the things you mentioned, such as organizational restructuring and the level of vested interest or legacy institutions that exist in some countries. This is why some people say there’s a late-comer advantage. Countries that don’t have those legacy institutions are just better equipped to implement some of these structural reforms.
It’s also hard to measure and operationalize these things in terms of how many vested interests are in a particular industry. That might change across different industries. I’m comparing advanced economies that all would probably have some level of vested interests and established structures.
My preference is to look at things that cut across industries. This might be the average level of engineering talent associated with the general-purpose technology, the degree to which universities are linked and communicating effectively with industry entrepreneurs, or the strength of dissemination mechanisms for ideas about technology.
Jordan Schneider: There’s something societally disorienting about these general-purpose technologies. Your whole system and ethos need to be ready for that.
Elting Morison argues that in 1857 everything was in place to transition from iron to steel. But there were all of these cultural, societal, and economic reasons not to take that leap. To explain the hesitance and the multidecadal lag that it took to adopt this new technology, Morison writes:
Would it not, by replacement of an old reagent, iron, with the new element of steel, replace also the customs, habits, procedures, and hierarchical arrangements upon which the security of life in the iron trade depended? The converter, in this context, looks less like a tool of commerce and more like some catapult leveled against a walled town.
People, institutions, and nations are that walled town. They’re looking at that catapult, and they will respond to it in very different ways.
Jeffrey Ding: Exactly. That is an example of transforming just one industry to adopt an innovation, steelmaking. Now multiply that for a general-purpose technology to any industry that a GPT would affect.
It gets even more complicated and magnified in terms of the structural changes needed to adopt GPTs. The softer stuff you mentioned — culture, status, people wanting to protect the skills they’ve already developed — all of these things come into play.
Hot Tech, Cold War: US-Soviet Competition
Jordan Schneider: The Cold War is another great case for your argument. The Soviet Union had these awesome research scientists. The USSR was able to do stuff in the lab and across many fields which equaled or even exceeded what America could pull off. But when the technological grounds began to shift beneath the Soviet Union — when it moved past steel-driven development and into electronics — it didn’t adapt and diffuse the technologies nearly as effectively as the US did.
Jeffrey Ding: When we assess countries’ scientific and technological capabilities, we overweight innovation capacity. These are our typical indicators:
- Who’s spending the most on R&D?
- Who has the top scientists publishing the most?
- Who’s getting the most cited patents out there in different fields?
These are the things that we gravitate toward — and we discount diffusion capacity once that groundbreaking paper has been published. Are those ideas being commercialized and spread across all these different industries after the advances come out of DeepMind? Are they then spreading from these frontier firms to the small and medium firms that drive most of the productivity growth in the entire economy?
In the Soviet Union’s case, they performed so well on all these traditional measures of innovation capacity in terms of the most PhDs in STEM fields or the most spending on R&D. But in a 1969 CIA report I cite in my paper, their assessment was that the Soviet Union lacked these fast-acting, biological processes of diffusion.
The planned economy of the Soviet Union limited its ability for these new advances to permeate and spread throughout the entire economy. The Soviet Union was doing well in mission-oriented breakthroughs like Sputnik, but it was not an economy equipped to computerize at scale. That led to stagnant productivity growth and ultimately the collapse of the Soviet Union.
Jordan Schneider: A fun theme in your research is Americans freaking out about losing in the 1950s. Everyone was like, “The Soviets have more PhDs than we do! This is going to be terrible.”
No Illusions? Why Tech Diffusion Wasn’t So Big in Japan
Jordan Schneider: Then in the 1980s, the concern was that Japan would overtake the US. David Halberstam — author of The Best and the Brightest and The Breaks of the Game — wrote in 1983 that Japan’s industrial ascent was America’s most difficult challenge for the rest of the century and a “more intense competition than the previous political-military competition with the Soviet Union.”
There was a deep consensus within the American body politic that America was losing the technological future and long-term productivity race to Japan. What didn’t Japan get right?
Jeffrey Ding: This was a very real threat in the eyes of the US. Henry Kissinger wrote an op-ed in The Washington Post saying that Japan’s economic strength and rise in high-tech sectors would eventually convert into military power and threaten the US. A poll in the late 1980s found that more Americans were worried about Japan than the Soviet threat to US national security.
The trend that I see so clearly with all these historical examples is the US overhyping other countries’ scientific and technological capabilities. One reason we do that is because we don’t pay as much attention to diffusion capacity.
There is a case in my book manuscript about why Japan was not able to overtake the US. It got to about 90% of US productivity levels in terms of total factor productivity, but then it stalled in the 1990s. That’s due to a number of reasons, including fiscal and monetary policy. That’s all relevant here.
What I highlight is that Japan gained a lot of market share in a lot of these new, fast-growing industries like consumer electronics and key semiconductor components. But it fell behind the US in terms of adopting computers at scale and overall computerization rates.
I highlight deficiencies in Japan’s ability to train a large number of software engineers. They built a lot of centers of excellence at certain universities, but they weren’t able to build a wider pool of institutions to train software engineers and fill in those talent gaps that held back the diffusion of computers throughout the entire economy.
Jordan Schneider: Some profound humility gets inculcated in you when you sit down and think about 2023 and what everyone agrees on. Things can end up radically different from whatever the consensus is.
Jeffrey Ding: Our takes are all shaped by who we’re talking to, the institutions we belong to, the ideas that are circling the rooms that we’re in, and the narratives of our times. But I think looking at historical examples forces you to get out of that a little bit, out of your little mini echo chamber, and understand that maybe we’re very wrong about the assumptions we have in our social groups and among all the people we’re reading and listening to. It’s an important dose of humility.
Jordan Schneider: When thinking about the example of the 1980s and 1990s, what were the ingredients to national policy that allowed the diffusion of the Information Age to happen with such dramatic success in the US?
Jeffrey Ding: One important factor behind all this is just access to a wider pool of software engineering talent. The US was tapping into so many immigrants who wanted to come to the US and study and work in these areas. Japan was relatively closed off in terms of bringing in foreign talent and even sending students out to other universities.
Secondly, a lot of times in the wake of these new GPTs, you almost have to have a new engineering discipline.
- Mechanical engineering in response to mechanization,
- Electrical engineering in response to electrification, and
- Computer science in response to the computer.
The US university system was more decentralized, and they had the flexibility to adapt and build this new curriculum for training software engineers at scale. Japan’s system was more rigid and centralized.
Suggestions for AI Superpowers
Teddy Collins: If you were made the czar of all AI-related policy in the United States, are there specific things you would push? What will it take for AI to show up in the productivity statistics?
Jeffrey Ding: First, there’s investing in human capital. For me, that might look like widening the base of average AI engineers — people who are not necessarily training cutting-edge models but can take an existing model and apply it to a particular scenario. Maybe they fine-tune models on a more specialized data set. Or maybe they take something that is already out there, open source, and apply it to their specific industry context. Training that talent might look like investing in community colleges and improving the capacity to train people in the general field of computer science.
Infrastructure is also in there — but for me, that’s just anything that would affect GPT diffusion, not necessarily driving cutting-edge innovation. How do you improve access to compute for a wide range of universities and even small and medium businesses?
That’s something that the national research cloud discussions have not considered much. They’ve focused more on getting high-end universities access to more computing resources and investments in institutions that encourage more technology transfer.
Some scholars advocate for voucher systems that incentivize small companies to learn and adopt new techniques from frontier firms. Subsidizing and encouraging that in some way is another step governments can take.
Jordan Schneider: One of the arguments you make is that when GPTs come online, you don’t see them in the productivity statistics until twenty years later because it takes a long time for people to wrap their heads around them.
But this time around, we already have a nationally integrated economy. We’ve figured out how to finance a lot of these institutions. We have a whole venture capital ecosystem, and everyone understands that there are enormous gains to be made from these technological innovations.
What I am worried about when it comes to diffusion is the potential policy roadblocks that could arise if change happens “too fast” and the body politic or some industries just reject it. You could end up with legislative roadblocks that make everyone worse off with lower productivity growth. The technological change which was going to come doesn’t happen. You have a poorer society because we’re not trying to make the best of these technologies that have so much positive potential.
Jeffrey Ding: This is relevant for AI because there are so many risks associated with AI systems, from misinformation to accidents. On a narrower technical dimension, there are things like misspecified reward functions that result in all these out-of-control behaviors.
There’s a case to be made for smart, pragmatic regulation that can enable more sustainable development and diffusion of these GPTs. Nuclear energy is another example: accidents and safety issues derailed its adoption at scale.
Teddy Collins: I can imagine someone saying software is fundamentally different because it spreads much more easily. The marginal cost is basically zero. We’ve already seen that because we have a small number of platforms responsible for a huge proportion of software use.
Perhaps you only need a small number of people at a small number of companies who understand the cutting-edge tech and can implement something that quickly diffuses to almost everyone. Thus, broad tacit knowledge is less important.
Jeffrey Ding: Maybe my predictions about GPTs taking multiple decades to make their impact are a bit outdated in a software-based world.
One way to measure diffusion is by dating a GPT’s emergence — say, when it reaches 1% adoption in an early adopting sector. Let’s say all the big internet companies were the first to adopt AI for adjusting their search recommendation algorithms. That’s the starting date. Then ask: when does it reach 50% adoption as the median across all possible adopting industries? That interval is the diffusion timeline.
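The emergence-to-maturity metric Ding describes can be sketched in a few lines. The sector names, years, and adoption rates below are entirely invented for illustration; only the thresholds (1% for emergence, 50% median for maturity) come from the conversation.

```python
from statistics import median

# Hypothetical adoption-rate time series (fraction of firms using the GPT),
# keyed by sector and year. All numbers are illustrative, not real data.
adoption = {
    "internet": {2012: 0.02, 2016: 0.30, 2020: 0.60, 2024: 0.80},
    "manufacturing": {2012: 0.00, 2016: 0.02, 2020: 0.10, 2024: 0.55},
    "healthcare": {2012: 0.00, 2016: 0.01, 2020: 0.05, 2024: 0.50},
}

def diffusion_lag(adoption, emergence=0.01, maturity=0.50):
    """Years from the GPT's emergence (first year any sector crosses the
    `emergence` threshold) to the first year the median sector crosses
    the `maturity` threshold."""
    years = sorted({y for series in adoption.values() for y in series})
    # Emergence: the earliest year in which any sector reaches 1% adoption.
    start = next(y for y in years
                 if any(s.get(y, 0) >= emergence for s in adoption.values()))
    # Maturity: the earliest year in which the median adoption rate across
    # all sectors reaches 50%.
    end = next(y for y in years
               if median(s.get(y, 0) for s in adoption.values()) >= maturity)
    return end - start
```

With these invented numbers the lag comes out to twelve years; the historical estimates discussed next put real GPTs at multiple decades.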
People have tried to measure that for electricity and information communications technologies. Nicholas Crafts has done some of this work. Other scholars have found there has been a slight decrease in diffusion time, but it’s still on the order of multiple decades.
We’ve had breakthroughs in AI now for almost a decade, with the advance of deep learning about a decade ago. The Census Bureau in 2018 asked companies across all these different sectors about the extent to which they had tested and trialed machine learning systems. The percent adoption rate was still below 3%.
There are different takes on this. Some people rightly adopt an inside view. They know a lot more about AI than me. They’re tracking everything that’s happening in AI daily. It’s exciting. They’re like, “This thing’s going to diffuse fast — let’s be ready for it. We should be thinking on a timescale within this decade.” There’s value in considering what the external view says from these past examples and historical lessons. I’m not saying one is dogmatically right, but we should have a mix. I think the balance is tilted too much toward the inside view right now, and we should have a better balance incorporating these historical insights.
The Long, Twilight Diffusion: US-China Tech Competition
Jordan Schneider: China will likely have some real challenges when it comes to getting AI diffusion right, won’t it — even if it develops base models roughly as powerful as GPT-4?
Jeffrey Ding: There’s a gap in how we measure diffusion capacity for different countries. I just went over that metric of the time from 1% adoption to 50% median adoption. That’s hard work. It’s much easier to just cite anecdotal stuff or pull a number about R&D and make broad claims.
What I did in the diffusion deficit paper is look at different global indexes on science and technology. There are hundreds of different science and technology indicators. I tried to sort the ones that trended more toward innovation capacity: top-three firms in R&D investment and top-three universities in scientific research — these are more closely tied to innovation capacity.
On the other hand, you have things like:
- How fast and to what extent have information communications technologies diffused throughout the economy?
- What is the adoption rate of cloud computing in the country?
- How strong are the linkages between universities and companies for spreading new ideas and collaborating?
Based on that exercise, I found China’s diffusion capacity lags far behind its innovation capacity. On all the indicators that are more tied to diffusion capacity, if you average them, China’s global ranking is roughly thirty places — maybe as many as thirty-five — lower than its innovation capacity ranking.
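The exercise Ding describes amounts to sorting indicators into two buckets and comparing the average ranks. A minimal sketch, with indicator names and rankings that are entirely made up for illustration (the real paper uses hundreds of indicators):

```python
from statistics import mean

# Illustrative (invented) global rankings for one country, sorted into the
# two buckets Ding describes. Lower rank = better.
innovation_ranks = {
    "top_firm_rd_investment": 2,
    "top_university_research": 3,
    "highly_cited_patents": 4,
}
diffusion_ranks = {
    "ict_adoption": 40,
    "cloud_computing_adoption": 45,
    "university_industry_linkages": 30,
}

def diffusion_deficit(innovation_ranks, diffusion_ranks):
    """Average diffusion-capacity rank minus average innovation-capacity
    rank. A large positive value means diffusion lags far behind innovation."""
    return mean(diffusion_ranks.values()) - mean(innovation_ranks.values())
```

With these toy numbers the deficit is about 35 places, echoing the roughly-thirty-place gap Ding reports for China, though the real calculation averages far more indicators.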
Jordan Schneider: When thinking about systems competition, I often get frustrated by American policymaking. It is so diffuse. You need to make all the different representatives happy and throw everyone a bone. You have all these irrational policies, and this state is doing something that’s going against the grain of that state.
But there is something powerful about the way America is set up: our messy political system ends up not concentrating its bets too tightly.
It is awesome that we have an OpenAI and a DeepMind and all the best universities. But something that makes America special and weird — something baked into a lot of the American system — is that it’s not a centrally planned thing.
Jeffrey Ding: Decentralization is strongly tied to high levels of diffusion capacity. That’s been borne out by a lot of different econometric research and empirical work. There are things the US can be doing. I’ve testified twice in front of the US-China Economic and Security Review Commission. My first recommendation both times was that the status quo is a defensible option. The US is in a good position right now because of its decentralized science and technology system. I strongly believe that.
It’s important the US not lock into a specific technological trajectory with AI. If we were having this podcast two years ago, we might be talking more about computer vision. Now the hottest subfield is natural language processing. If we were having this podcast back in 2000 — when Clinton announced the National Nanotechnology Initiative — we would be talking about that technology. No one talks about nanotechnology anymore, even though it might have some general-purpose technology characteristics. It’s just diffusing under the radar.
Jordan Schneider: I love this quote you have from 2003 with the Under Secretary of Technology at Commerce saying, “Nano’s potential rises to near biblical proportions.” I mean, maybe — but if America put all our eggs in that basket and wasn’t paying computer scientists to figure out AI models, then we might be at a very different place.
Jeffrey Ding: Pouring a lot of compute and resources into transformer models seems to be one bet. There are other fields of AI beyond deep learning, like reinforcement learning or causality-based thinking — models we should not ignore nor neglect in the long term.
We should not do what Japan did when they invested in their fifth-generation computing project, narrowly looking at computers as huge mainframes in a world where that was increasingly turning toward personal computing. We should avoid locking in one technological trajectory.
Jordan Schneider: How defensible is it to keep the brains or keep the IP within your national borders? Is that something that works for these general-purpose technologies?
Jeffrey Ding: No. How do you keep such a foundational technology locked up? It’s not like GPTs are pharmaceutical secrets. That’s not the model. We’re not talking about profits from one country monopolizing this super-secret innovation. It’s more about adopting these innovations at scale.
Paid subscribers get advanced access to the second part of our conversation. We discuss:
- The state of play in the race to attract talent;
- Why hyping China’s AI and tech prowess could lead to threat inflation;
- How translating Chinese sources helped Ding understand tech diffusion;
- Why war is a tempting choice for lagging great powers.
Tech Talent Tug of War
Jeffrey Ding: China’s efforts to attract talent back from overseas have helped them stay abreast of the AI research frontier and stay connected to different innovation networks. That helps facilitate the initial adoption.
But after those top Chinese scientists hear about the latest advances and learn the latest breakthroughs, how does that diffuse to the next level down and throughout the entire country? It’s hard to have indicators for that diffusion capacity in AI.
One approach I use in the book manuscript is to ask, “How many universities does a country have that meet some baseline level of quality for AI engineering education?” I look at the CSRankings database and try to figure out how many universities in China have at least one researcher who has published in the top three AI conferences.
That number is relatively low compared to the US. It’s about 100 for China, whereas the US has around 400. There’s a broader pool of US universities that meet that low bar for baseline quality in AI engineering education.
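Ding's baseline-quality count reduces to a simple filter over per-university publication records. The sketch below uses invented university names and publication counts; only the criterion (at least one researcher publishing in a top-3 AI conference) comes from the conversation.

```python
# Hypothetical records in the spirit of the CSRankings data Ding describes:
# (university, country, top-3 AI-conference publications by its best-published
# researcher). All names and counts are invented for illustration.
records = [
    ("University A", "US", 12),
    ("University B", "US", 1),
    ("University C", "US", 0),
    ("University D", "CN", 5),
    ("University E", "CN", 0),
]

def universities_meeting_bar(records, country, min_pubs=1):
    """Count a country's universities with at least one researcher who has
    published in a top-3 AI conference (the baseline-quality bar)."""
    return sum(1 for _name, c, pubs in records if c == country and pubs >= min_pubs)
```

Applied to the full dataset, this kind of count is what yields the roughly 100-versus-400 comparison between China and the US.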
Jordan Schneider: What’s the case for allowing the free flow of talent between the US and China?
Jeffrey Ding: A lot of talent is flowing to the US and staying in the US. The flow of talent back to China is obviously helping China’s diffusion capacity. But relatively speaking, it’s arguably helping the US more.
There are a lot of other reasons why we’d want to keep those flows open, just because there are a lot of advantages to having an open economy — not discriminating on the basis of geographic origin, or nationality — when it comes to shaping immigration policy.
These things matter more than that specific policy’s effect on relative diffusion capacity levels.
Jordan Schneider: You just reached your fifth anniversary of writing a truly fantastic weekly translation roundup on Substack. What has reading all of this contemporary Chinese writing on AI given you as a researcher?
Jeffrey Ding: History forces us out of our echo chambers and preexisting biases and dispositions. Just reading what Chinese people are thinking about AI forces me to get out of the DC and academic bubble when it comes to thinking about AI and US-China competition. It’s reading a completely different set of ideas from people living in a completely different context, whether that’s blogs or government white papers.
The diffusion deficit paper we’ve talked about in detail here was very much inspired by ChinAI translations about companies trying to implement computer vision to improve machine quality inspection on production lines for cutting tools and knives. That’s one of my favorite translations that I’ve done. These companies and people are not the ones that make the news, like SenseTime, one of the most valuable AI startups in the world. They show how China is trying to implement AI at a granular level. What I’ve talked about today has been inspired by and builds on those weekly translations.
Teddy Collins: There’s a cliché that China can copy, scale, and commercialize, but it cannot innovate. It’s interesting to see that when you dig down into the weeds, China’s diffusion capability is worse than its innovation capability, and the US’s lead on diffusion is greater than its lead on innovation. A lot of people would find that result surprising.
Jeffrey Ding: It’s often informed by a few attractive examples. China is good at diffusing certain things, like high-speed rail or e-commerce (like food delivery apps). But when you look at some of these other innovations connected to general-purpose technology — information communications technologies, computers, cloud computing — those diffusion rates are pretty slow in terms of actually affecting productivity in a lot of different businesses.
Inclement Clouds on the Tech Horizon
Jordan Schneider: There are pluses and minuses that come with thinking China is going to take over the world versus a more realistic understanding of the country’s strengths and weaknesses in productivity and technology. You’ve got all this McCarthyite Cold War rhetoric about overstating the gap.
There’s also the CHIPS and Science Act, the most aggressive thing that America has done in twenty years trying to pull some levers in manufacturing and science and technology investments. Maybe it’s not exactly what you would have done from a diffusion perspective, but that doesn’t happen unless you have politicians really worried about China’s rise as a technological power.
Jeffrey Ding: There are instrumental reasons to overstate and hype up China’s AI capabilities and its scientific and technological prowess. If I were someone who really believed in the CHIPS and Science Act as the most essential thing for US national security, the only way to get it across the finish line would be to adopt a “China is going to overtake us” framing. There are reasons to do that, and I see the rationale behind it.
There are also a lot of downsides, however, to overestimating a rival as a threat. It fuels threat inflation, and it can make leaders more willing to escalate conflicts.
We’ve seen that historically in the US-Soviet Union case — the illusory missile gap or the US and Soviet Union getting to the brink of world destruction many, many times. The truth matters too. Having a more accurate depiction of the scenario is a good thing.
Jordan Schneider: It’s interesting reflecting on this if you’re inside the head of Xi or some other Chinese policymaker. Everything I read about China and AI seems hyperbolic.
The self-flagellation of the Chinese internet over the past six months and watching ChatGPT explode in the West has been really interesting. All of a sudden, the discourse went from, “We’re going to be awesome and amazing” to, “We’re so pathetic as a country, and we need to get our act together for this long struggle.”
I can paint a positive upside for the dose of realism injected into Chinese discourse and what that’s doing for Chinese policymaking. But Beijing might ultimately believe China will be on the back foot technologically for years to come, which could drive a more competitive dynamic with the US. The odds of us coming out ahead on that are probably way worse than the expectations that are currently baked in.
Jeffrey Ding: We have shifted so far in the direction of US national security interests and the need to beat China in all these different forms of competition. The assumption is that the biggest risk is China overtaking us on something, whether militarily, economically, or in soft power.
I’m not sure where I stand on this, but why are we not considering that the biggest national security risk for the US is a weak China and a China that can’t sustain its growth? For the longest time that was US State Department policy. A strong China is good for peace.
All this self-flagellation that’s been coming out about China’s AI sector may be overhyped, but China could also genuinely suffer economic stagnation. What would the national security consequences be for the US? They might not be good.
It’s not even in the Overton window. We’re not even talking about it anymore in Washington.
Jordan Schneider: That’s a tricky Goldilocks thing. The way I see this getting turned around is China slows down while America and all its allies keep growing nicely through their adoption of general-purpose technologies and whatnot.
Whoever comes after Xi might realize that, in fact, you need to be globally integrated and have happy diplomatic relations to have a place on the world stage, be respected, and keep pace with other countries.
One of the ways to not play a multidecadal game is to start a war. If you roll the dice now in a really aggressive way, then massive wars are one way to short-circuit that long-term productivity contest.
You can get lucky or overperform in a narrower window because you have some edge or another. Most of the time, though, the countries that win wars are the larger and more technologically advanced ones. But not every time. War can be your “out” if, as a national leader, you talk yourself into believing your long-term trajectory is hopeless.
Jeffrey Ding: That squares with my understanding as well.
Jordan Schneider: I feel like I’m ending every ChinaTalk on World War Three, which is a bummer…