Essay: AI’s Gatekeepers Aren’t Prepared for What’s Coming
New technologies can change the global balance of power. Nuclear weapons divided the world into haves and have-nots. The Industrial Revolution allowed Europe to race ahead in economic and military power, spurring a wave of colonial expansion. A central question in the artificial intelligence revolution is who will benefit: Who will be able to access this powerful new technology, and who will be left behind?

This article appears in the Summer 2023 print issue of FP. Read more from the issue.
Until recently, AI was a diffuse technology that proliferated rapidly. Open-source AI models are readily available online. The recent shift to large models, such as OpenAI’s ChatGPT, is concentrating power in the hands of large tech companies that can afford the computing hardware needed to train these systems. The balance of global AI power will hinge on whether AI concentrates power in the hands of a few actors, as nuclear weapons did, or proliferates widely, as smartphones have.
Access to computing hardware creates haves and have-nots in this new era of AI. Frontier AI models such as ChatGPT and its successor, GPT-4, use massive amounts of computing hardware. They are trained using thousands of specialized chips running for weeks or months at a time. Production of these chips and the equipment used to manufacture them is limited to a few key countries: Taiwan, South Korea, the Netherlands, Japan, and the United States. That means these nations have veto power over who can access the most cutting-edge AI capabilities. The United States has already weaponized this dependency to cut off China’s access to the most advanced chips. 
States responded to the challenge of the nuclear age by controlling access to the materials needed to make nuclear weapons. By limiting countries’ access to weapons-grade uranium and plutonium, the international community has slowed nuclear proliferation. Control over the specialized hardware needed to train large AI models will similarly shape the global balance of power. 
The deep learning revolution began in 2012, and as it moves into its second decade, several important paradigm shifts are underway. New generative AI models such as ChatGPT and GPT-4 are more general purpose than prior narrow AI systems. While they do not (yet) have the generality of human intelligence, they can perform a diverse array of tasks. GPT-4 achieves human-level performance on the SAT, GRE, and the Uniform Bar Exam. AlphaGo, the AI agent that beat top human player Lee Sedol at the Chinese strategy game Go in 2016, could only play Go. It could not hold a conversation, write a poem, analyze an image, play chess, craft recipes, or write computer software. GPT-4 can do all of these things and more.
These new, general-purpose AI models have the potential for widespread societal benefit, but they can also cause real harm. Large language models are already capable of generating disinformation at scale, but future harms are potentially much worse. Language models can be used to generate software and assist in cyberattacks. They can synthesize chemical compounds and can also aid in building chemical or biological weapons. Their general-purpose abilities make these models inherently dual-use, with both civilian and military applications.
While current models have limitations, AI systems are rapidly improving with each generation. Researchers are increasingly empowering AI models with the ability to access and use external tools, such as logging on to the internet, interacting with other AI models, and using remote “cloud labs” to conduct scientific experiments—all force multipliers for AI capabilities. Some researchers worry about even greater risks, such as an AI model demonstrating power-seeking behavior, including acquiring resources, replicating itself, or hiding its intentions from humans. Current models have not demonstrated this behavior, but AI capability improvements are often surprising. No one can say for certain what AI capabilities will be possible in 12 months, much less a few years from now. 
What is clear is that current state-of-the-art AI models are not safe, and no one knows how to reliably make them safe. OpenAI has attempted to train ChatGPT and GPT-4 not to give users information that could be used to cause harm—with mixed success. In one experiment, GPT-4 refused to provide instructions for synthesizing mustard gas, but it was willing to do so for chlorine and phosgene gas, chemical weapons used in World War I. Even when AI models correctly refuse to perform a harmful task, users can often “jailbreak” the model through simple tricks, such as asking it to simulate what a bad actor would do. As AI capabilities increase and access to them proliferates, there is a serious risk of malicious actors using them for cyberattacks or chemical, biological, or other attacks.
Given these and other risks, a growing chorus of voices is calling for AI regulation. Leading AI researchers recently advocated a six-month pause on developing next-generation AI models because of the risk of societal harms. Others have argued that improvements in AI capabilities must stop entirely. The heads of all of the leading AI labs recently signed an open letter warning that future AI systems could pose an existential risk to humankind. The European Union is drafting AI regulations. In May, U.S. President Joe Biden met with the CEOs of top AI labs to discuss safety practices, and the U.S. Senate held a hearing on oversight of AI.
Though many AI regulations will be industry-specific, general-purpose AI models require special attention because of their dual-use capabilities. Nuclear technology is also inherently dual use, but society has found ways to balance the positive benefits of nuclear energy with the risks of nuclear weapons proliferation. Society must similarly find approaches that harness the benefits of AI while managing its risks.
A key way to reap the benefits of AI while reducing its risks is to control access to the computing hardware needed to train powerful AI models. Machine learning algorithms are trained on data using computing hardware in the form of chips. Of these technical inputs—algorithms, data, and computing hardware—hardware is the most controllable. Unlike data and algorithms, chips are a physical resource that can be controlled. The most cutting-edge AI models, such as ChatGPT, are trained using massive amounts of specialized chips. Without vast amounts of these specialized chips, one cannot train these powerful AI models. Hardware supply chains have multiple strategic chokepoints. The most advanced chips are produced in Taiwan and South Korea, and they can only be made using equipment from Japan, the Netherlands, and the United States. These five countries control global access to the most advanced chips.
Hardware is already a barrier to accessing frontier AI models for all but a few actors. Unlike in the space race or the Manhattan Project, the leading actors in AI research are not governments but private companies. Only a handful of companies—OpenAI, Google, Microsoft, Anthropic, and Meta—are competing to develop or field the most capable AI models. As they do so, these companies are spending billions of dollars to build ever bigger and more computationally intensive AI models. The amount of computing hardware used in training cutting-edge machine learning models has increased by a factor of 10 billion since 2010 and is doubling about every six months. (Growth in computing hardware to train the largest models is doubling about every 10 months.) This is much faster than the 24-month doubling in chip performance seen since the 1970s, sometimes characterized as Moore’s law. This growth is also much faster than hardware improvements alone, so AI labs are making up the difference by buying more chips. As a result, costs for training high-end AI models are skyrocketing. Independent estimates put the cost to train some of the largest models in the tens of millions of dollars. OpenAI CEO Sam Altman recently estimated that training GPT-4 cost more than $100 million. Tech companies are throwing billions of dollars at AI. After the success of ChatGPT, Microsoft announced a $10 billion investment in OpenAI. Anthropic reportedly plans to spend $1 billion to train its next-generation AI model.
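To put those growth rates in perspective, the sketch below is a rough back-of-the-envelope comparison of a six-month doubling time against the roughly 24-month doubling associated with Moore’s law. The ten-year horizon is an illustrative assumption, not a forecast.

```python
# A back-of-the-envelope sketch comparing the growth rates cited above:
# training compute doubling roughly every six months versus the roughly
# 24-month doubling in chip performance associated with Moore's law.
# The ten-year horizon is an illustrative assumption.

def growth_factor(years: float, doubling_time_years: float) -> float:
    """How many times larger a quantity becomes after `years` of steady doubling."""
    return 2 ** (years / doubling_time_years)

DECADE = 10
print(f"Training compute (6-month doubling):  ~{growth_factor(DECADE, 0.5):,.0f}x")
print(f"Chip performance (24-month doubling): ~{growth_factor(DECADE, 2.0):,.0f}x")
# Roughly 1,000,000x versus 32x over ten years, which is why labs close the
# gap by buying many more chips rather than waiting for faster ones.
```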
This race to spend on computing hardware is segregating the AI community, concentrating power in the hands of the few companies training the most advanced models. Academics are locked out of accessing cutting-edge AI models because they can’t afford to train them. Major tech companies, on the other hand, have deep pockets. They have the spending capacity to invest tens of billions of dollars per year on major tech projects if they see a payoff. If current trends continue—if companies keep investing and AI keeps improving—computing power could increase many more orders of magnitude in the next decade. The field of AI could be headed into a world where a small number of major tech companies are the gatekeepers to extremely powerful AI systems and everyone else depends on them for access. 
A visitor walks past a booth featuring semiconductor chip testing during an exhibition at the Taipei World Trade Center in Taiwan on Sept. 30, 2009. Patrick Lin/AFP via Getty Images
Given the stakes, it’s not surprising that the geopolitics of AI hardware is also heating up. In October 2022, the Biden administration issued export controls on the most advanced AI chips and semiconductor manufacturing equipment to China. While the most advanced chips are not made in the United States—they are manufactured in Taiwan and South Korea—those chips are made using U.S. tools, such as the specialized software used to produce chips, giving the United States unique leverage over who can buy them. The United States imposed extraterritorial restrictions barring manufacturers in Taiwan and South Korea from using U.S. equipment to make advanced chips destined for China, even if the chips themselves contain no U.S. technology. Additional U.S. export controls on semiconductor manufacturing equipment deny China the equipment needed to produce its own advanced chips.
This March, Japan and the Netherlands announced similar export controls on advanced chip manufacturing equipment to China. Collectively, the United States, the Netherlands, and Japan control 90 percent of the global market for semiconductor manufacturing equipment. For the most advanced equipment of all—extreme ultraviolet lithography machines used to make leading-edge chips—a single Dutch company, ASML, holds a monopoly. If the three countries cooperate, they can deny China the equipment needed to produce advanced chips. Coupled with U.S. export controls on the chips themselves, these measures aim to lock China out of buying or building the chips needed to train the largest AI models. 
As AI models become more capable and rely on ever-increasing amounts of computing power, AI hardware is poised to become a global strategic asset. Semiconductors today are a foundational technology embedded in all manner of digital products, from phones and cars to internet-connected devices. But the changes underway suggest a different trajectory. The field of AI is transitioning to an era in which amassing the most advanced semiconductors is more like possessing highly enriched uranium—a global strategic asset that is difficult to acquire but grants access to powerful new capabilities.
The United States and its allies have a major advantage in this new contest. Their control over the technology needed to manufacture advanced chips is like having the opportunity to control global uranium production in 1938. Yet there are other forces at work, in the form of technology, market incentives, and geopolitics, that could cause this control to evaporate. 
The greatest dangers come from proliferation. AI risks are more easily managed when only a few actors can access the most capable AI systems. As powerful AI models proliferate, they are more likely to end up in the hands of actors who are less safety-conscious or who want to cause harm. Controlling hardware is a critical step, but that alone is not enough to limit proliferation. Trained models can proliferate easily. Once they have been released, they can easily be modified or misused. Managing AI risks requires a comprehensive approach to protecting powerful models so they do not fall into the hands of malicious actors. 
A powerful asymmetry runs through the technical realities of how AI systems use chips. Cutting-edge AI models require immense amounts of computing hardware for training, but once a model has been trained, it requires orders of magnitude less computing power to run (a process known as “inference”). This means that access to large amounts of computing hardware is a barrier to training new models—but not to using trained ones, making AI much easier to proliferate than nuclear technology.
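A rough calculation illustrates the asymmetry. The sketch below uses rule-of-thumb estimates common in the machine learning literature (roughly six floating-point operations per parameter per training token, and roughly two per parameter per generated token at inference); the model size and training-data size are illustrative assumptions, not figures for any particular system.

```python
# A sketch of the training-versus-inference asymmetry, using common
# rule-of-thumb estimates: training costs ~6 * parameters * training tokens
# FLOPs, while generating one token at inference costs ~2 * parameters FLOPs.
# The model and data sizes below are illustrative assumptions.

PARAMS = 70e9          # assume a 70-billion-parameter model
TRAIN_TOKENS = 1.4e12  # assume 1.4 trillion training tokens

training_flops = 6 * PARAMS * TRAIN_TOKENS   # one-time cost: thousands of chips for weeks
inference_flops_per_token = 2 * PARAMS       # recurring cost: feasible on a single device

print(f"Training:  ~{training_flops:.1e} FLOPs total")
print(f"Inference: ~{inference_flops_per_token:.1e} FLOPs per generated token")
print(f"Ratio:     ~{training_flops / inference_flops_per_token:.1e}")
# The many-orders-of-magnitude gap is why hardware controls can gate who
# trains a frontier model but not who runs one once the weights are out.
```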
Trained models are software. They can be leaked, stolen, or released open-source online. One of Meta’s most advanced AI models recently leaked online. In February, Meta announced LLaMA, a new state-of-the-art large language model. To help bridge the hardware divide for academic researchers who don’t have the resources to train language models at the same scale as big tech companies, Meta shared the model “on a case-by-case basis” with members of the AI research community. Within a week, the model had leaked on 4chan, making it no longer possible for Meta to carefully manage access. A week and a half later, AI researchers had compressed the model to run on a laptop and even a smartphone. The comparatively small hardware requirements for running models mean that once a model has been released, hardware is no longer an effective constraint on proliferation. 
Once released, trained models can easily be modified or misused. A trained model can be “fine-tuned” for specific tasks, such as adapting a language model to write software. Fine-tuning can be done relatively cheaply. After LLaMA’s release, researchers at Stanford University fine-tuned a version they dubbed Alpaca for less than $600. AI researchers can also strip off embedded safety features, removing guardrails against abuse. When the start-up Stability AI released the open-source image generation model Stable Diffusion last August, users disabled the NSFW content filter and the invisible digital watermarking on its images within hours. Once a model is in the open, there is no way to prevent its misuse.
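For a sense of why fine-tuning is so cheap, the sketch below shows the general shape of parameter-efficient fine-tuning with low-rank adapters (LoRA), a widely used technique. It is not the Stanford Alpaca recipe, and the model identifier and settings are illustrative assumptions.

```python
# A minimal sketch of parameter-efficient fine-tuning with LoRA using the
# Hugging Face transformers and peft libraries. Rather than updating all of a
# model's weights, LoRA trains small low-rank adapter matrices, which is one
# reason adapting an open model can cost hundreds of dollars rather than
# millions. The model name below is an illustrative placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "example-org/open-7b-model"  # hypothetical model identifier
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                 # rank of the adapter matrices
    lora_alpha=16,       # scaling factor for adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
# Training would then proceed on a small instruction dataset with a standard
# training loop or the transformers Trainer class.
```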
Despite these risks, a strident community of open-source advocates actively pushes for greater proliferation of AI models. The AI community has a long history of open-source collaboration, with data sets, trained models, and AI tools shared freely in online repositories such as GitHub and Hugging Face. OpenAI dropped a bombshell on the AI community in 2019 when it temporarily restricted the release of its then-cutting-edge language model, GPT-2, becoming one of the first companies to shift to a more closed approach. Since then, other leading AI labs (with the notable exception of Meta) have followed OpenAI in not releasing their most powerful models. The result has been a backlash in the AI community, with companies such as Stability AI championing an open-source approach and releasing their models freely so that others can benefit. Open-source equivalents of GPT-3 and the image generation model DALL·E were released after 14 and 15 months, respectively. The most cutting-edge AI models, such as GPT-4, are restricted, but open-source models are not far behind.
The ease of transferring and modifying trained models has worrisome implications for controlling the proliferation of potentially dangerous AI models. National security analysts have worried about the dangers of “loose nukes” and the potentially devastating consequences of terrorists getting their hands on a nuclear weapon. Countering nuclear proliferation requires controlling both the underlying technology used to make nuclear weapons and the finished weapons themselves. But possessing one nuclear weapon doesn’t give an actor the ability to make more copies and share them with millions of people over the internet. Because they are software, AI models can be easily copied, modified, and transferred. The release of trained AI models could render irrelevant attempts to control hardware at the training stage. To guard against misuse of powerful models, comprehensive controls must encompass trained models as well.
A woman views a high-tech art installation in Istanbul on May 6, 2017. The “Archive Dreaming” installation by artist Refik Anadol uses AI to visualize nearly 2 million historical Ottoman documents and photographs from the Salt Research archive. Chris McGrath/Getty Images
Unlike in the nuclear age or the space race, the development of powerful, dual-use AI is being led by private companies. Governments need to get off the sidelines. They shouldn’t compete with the private sector to train large models, but they are needed to create regulatory structures that ensure powerful AI models are safe and secure. AI researchers worry about the “alignment problem”—that is, ensuring that an AI system’s goals are aligned with human values. But the incentives of corporate actors are not fully aligned with the public good either.
The current state of AI competition in the private sector has several unhealthy dynamics. Google, Microsoft, and OpenAI are engaging in a “race to the bottom” on safety, deploying AI models before they are fully safe. Other companies, such as Meta and Stability AI, have shared models widely, enabling rapid proliferation without adequate safeguards in place against misuse. But AI is not the first industry where profit motives have undermined the public good. Government regulation has enabled clean air and water, safe food and drugs, and safe highways and air travel. Government regulation of AI is needed to ensure the most powerful dual-use AI models are built and deployed safely and to reduce proliferation risks.
Global AI governance begins at the hardware level. Hardware is the most controllable input to building AI systems, and massive amounts of hardware are required to train the most capable AI models. The U.S. government has already placed controls on advanced chips and chipmaking equipment, but these controls will not be effective in limiting proliferation without export controls on powerful trained models as well. Limiting the proliferation of powerful AI models requires controls across the entire lifecycle of AI production: chips, training, and trained models.
The United States and its allies have begun taking steps to lock down access to advanced chips, but additional measures are needed. Without adequate enforcement, export controls on chips will be ineffective. Chips can be diverted or sold through intermediaries. The Chinese AI firm SenseTime, which has been blacklisted by the U.S. government for human rights violations, reportedly gained access to prohibited chips through third parties. Increased government resources and new tools for chip tracking are essential to ensure that banned actors cannot accumulate large amounts of controlled chips. 
Computing hardware must also be controlled at data centers where it is used for training models. Another Chinese firm blacklisted for human rights abuses, iFLYTEK, reportedly circumvented U.S. controls by renting chips in data centers, rather than buying them outright. Current U.S. export controls apply only to chip sales. They do not restrict cloud computing companies from providing chips as a service, a loophole that could allow prohibited actors to access computing resources through cloud providers. Governments should institute “know-your-customer” requirements, similar to those for the financial industry, for cloud computing companies to prevent illicit actors from training powerful AI models. 
Government oversight and regulation of large-scale training runs will also be needed. AI companies training powerful AI models should be required to report to the government information about their training runs, including model size and design, data sets used, and the amount of computing power used in training. Over time, as safety standards develop, a government licensing regime may be required for training runs that are likely to result in sufficiently capable dual-use AI systems, such as above a certain threshold of computing power. 
Once trained, models must be subject to rigorous testing to ensure they are safe before deployment. AI companies should be required to conduct a risk assessment and allow third-party experts to “red team” the model, or test it to identify vulnerabilities and potential harms, prior to deployment. Leading AI labs are already adopting some of these practices. OpenAI brought in more than 50 outside experts for months of red-teaming before deploying GPT-4. Potential harms assessed included generating disinformation, aiding in the creation of chemical or biological weapons, conducting cyberattacks, and the emergence of power-seeking behavior such as self-replication or acquiring resources. OpenAI then applied mitigation measures to improve model safety before deployment. Despite these precautions, public deployment revealed further vulnerabilities as users found ways to “jailbreak” the model and circumvent its safeguards, eliciting behaviors that ranged from telling offensive jokes to explaining how to synthesize chemical weapons. Testing can improve model safety, although at present there are no means to make models robustly safe against misuse.

Industry and government must work together to develop safety standards and best practices. Early engagement between the White House and frontier AI labs is promising. Following Biden’s meeting with top AI lab CEOs, the White House announced that several leading AI developers would participate in an independent, public evaluation of their systems. Independent, third-party audits of AI lab practices can also help provide public assurance of lab safety compliance.
Transparency is essential to help society understand, anticipate, and respond to the risks of powerful AI models. OpenAI published a “system card” alongside GPT-4’s public announcement that outlined the results of testing for various harms. Public awareness of the risks of AI models, consistent with responsible disclosure practices that allow companies to first mitigate vulnerabilities, can help improve safety and societal resilience.
Trained models must also be secure against theft or proliferation. Stringent cybersecurity measures will be required to prevent theft or leaks. The Biden administration recently announced that government cybersecurity experts are collaborating with top AI labs to help secure their models and networks. Export controls may be required on powerful, dual-use models. Export controls on chips will be meaningless if banned actors can simply acquire trained models. 
In some cases, restrictions may be required for how models are used to prevent abuse. “Structured access” is one potentially promising approach, where AI services are provided through the cloud and the model itself is not disseminated. Use can be monitored to ensure that models are not being used for illicit purposes, such as cyberattacks.
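As a sketch of what structured access can look like in practice, the toy service below keeps the model weights on the provider’s servers, exposes only an API, and logs each request so misuse can be audited. The endpoint, the policy check, and the stub model call are illustrative assumptions, not any provider’s actual implementation.

```python
# A toy sketch of "structured access": the model runs only on the provider's
# servers, users interact through an API, and every request is logged for
# abuse monitoring. The endpoint, policy check, and stub model call are
# illustrative assumptions.
import logging
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("structured-access")
app = FastAPI()

class Prompt(BaseModel):
    user_id: str
    text: str

def violates_use_policy(text: str) -> bool:
    # Placeholder for a real misuse classifier (e.g., flagging requests for
    # cyberattack tooling or weapons synthesis instructions).
    return False

def run_model(text: str) -> str:
    # Stub for server-side inference; the trained weights never leave the provider.
    return "model output"

@app.post("/generate")
def generate(prompt: Prompt):
    log.info("user=%s prompt_chars=%d", prompt.user_id, len(prompt.text))  # audit trail
    if violates_use_policy(prompt.text):
        raise HTTPException(status_code=403, detail="Request violates the use policy")
    return {"completion": run_model(prompt.text)}
```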
AI technology is racing forward, and governments must move faster to keep up. The exponential pace of AI progress, and its growing risks, is outstripping government action. Excessive regulation would stifle industry, but moving too slowly may lead to societal harms and even a backlash against AI deployment. Caution is warranted at the frontier of AI development. Governments must work closely with AI labs to guard against a race to the bottom on safety and the rapid proliferation of potentially harmful systems.
U.S. Secretary of State Antony Blinken speaks during the 10th review conference of the parties to the Nuclear Nonproliferation Treaty at the United Nations in New York City on Aug. 1, 2022. Angela Weiss/AFP via Getty Images
International cooperation will be needed to effectively govern powerful AI systems. Altman, OpenAI’s CEO, recently advocated for the creation of an “IAEA for AI,” a global AI regulatory regime akin to the International Atomic Energy Agency that governs nuclear technology. A common objection in Washington to regulating frontier AI development is that China won’t be inhibited by such regulations and will simply race ahead. In May, Defense Department Chief Information Officer John Sherman argued against the proposed six-month pause on developing AI systems more powerful than the current state-of-the-art model, GPT-4, stating, “If we stop, guess who is not going to stop? Potential adversaries overseas.” 
The reality, though, is that the United States and its allies control the underlying hardware needed to train powerful AI systems. Potential adversaries don’t need to agree for an AI nonproliferation regime to be effective. Chinese labs are not far behind top U.S. and British labs today, but U.S. export controls may widen the gap as Chinese researchers are forced to use older, slower chips. Working with allies, the United States has an opportunity to put in place a global AI governance regime that conditions access to computing resources on compliance with safety, security, and nonproliferation practices. 
In the long run, market incentives, geopolitics, and technology improvements could undermine attempts to control proliferation. U.S. export controls incentivize foreign companies to de-Americanize supply chains, reducing their reliance on U.S. technology so that they are no longer affected by U.S. restrictions. The U.S. government must be careful to use export controls sparingly and, whenever possible, in a multilateral framework to reduce incentives for a U.S.-independent chip supply chain. China is working hard to grow its indigenous chipmaking industry. Multilateral export controls on chipmaking equipment will slow China’s progress but not halt it forever. Restrictions on trained models will likely only slow proliferation, as leaks or theft causes models to spread over time. Improvements in algorithms may reduce the hardware needed to train powerful AI models, enabling proliferation. Yet slowing proliferation can still be valuable. 
Potential adversaries don’t need to agree for an AI nonproliferation regime to be effective.
Nuclear nonproliferation efforts have not stopped proliferation entirely, but they have slowed it dramatically and have been successful in limiting the number of nuclear-armed states. Nonproliferation has also succeeded in permitting the spread of nuclear technology for peaceful use. Thirty-two countries operate civilian nuclear reactors, far more than the nine nuclear-armed states. 
Controlling the spread of dangerous AI capabilities could buy time for improved safety standards, societal resilience, or improved international cooperation. Controls for AI hardware can start with the United States and its allies but should expand over time. Cooperation with competitor nations to ensure AI is developed safely should not be ruled out. The Chinese government has actually moved faster than the U.S. government in regulating AI, enacting regulations on deepfakes in January and publishing draft rules on generative AI in April. International cooperation on nuclear nonproliferation has evolved over time, with additional refinements added in response to global problems. The IAEA Additional Protocol was approved in 1997, more than 50 years after the first atomic test. Global AI governance will similarly evolve over time. Regulations must adapt with the technology, but this should not be an excuse for inaction. AI technology is moving quickly, and solutions are urgently needed. 
This article appears in the Summer 2023 issue of Foreign Policy. Subscribe now to support our journalism.
Paul Scharre is the vice president and director of studies at the Center for a New American Security and author of Four Battlegrounds: Power in the Age of Artificial Intelligence. Twitter: @paul_scharre