NPR recently reported that the Center for Strategic and International Studies' Futures Lab is using Pentagon funding to experiment with using tools such as DeepSeek and ChatGPT to explore how artificial intelligence could change—and improve—how foreign policy decisions are made.
Johns Hopkins expert Russ Berkoff teaches in the Engineering Management graduate program at the Whiting School of Engineering's Engineering for Professionals. A former chief strategist at the National Security Agency, he led NSA's strategic forecasting office after 9/11 and helped shape intelligence planning for the Joint Chiefs of Staff and the global war on terrorism. At the Naval Postgraduate School, he wrote his thesis on using AI to inform foreign policy decisions.
Here, he discusses the promise and peril of this emerging field.
Are we moving toward using AI for foreign policy decisions in a way you recommended in your thesis 28 years ago?
The short answer is yes; we are moving in the right direction. Our goal is fast, low-risk, high-quality decisions to stay ahead of our adversaries. As Alexander George, adviser to four presidents and Stanford professor emeritus, stated, "A high-quality decision is one in which the president correctly weighs the national interests in a particular situation and chooses a policy or option most aligned to achieve national interest at acceptable cost and risk."
Today's approach treats AI as an assistant that extends and augments human capacity and helps execute our judgment. Current computational power enables sophisticated modeling, simulation, and digital-twin technologies, in which foreign policy scenarios can be tested over and over. This increases situational awareness, broadens the options considered, weighs consequences, and reduces decision-maker uncertainty.
The use of AI in foreign policy modeling turbo-charges this process and provides even richer insight and predictions at speed. We also see AI being used in counterforce role-playing to mimic behaviors within these numerous foreign policy scenarios, providing multi-dimensional insights into potential adversary responses.
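To make the idea concrete, here is a minimal sketch of the kind of Monte Carlo role-play such tools might run: hypothetical policy options, assumed probabilities for how an adversary responds to each, and an expected cost per course of action. Every option, probability, and cost below is an invented assumption for illustration, not output from any real system.

```python
import random

# Hypothetical policy options and assumed adversary response models.
# All names, probabilities, and costs are illustrative assumptions.
OPTIONS = {
    "blockade":     {"negotiate": 0.55, "escalate": 0.30, "stand_down": 0.15},
    "airstrike":    {"negotiate": 0.10, "escalate": 0.75, "stand_down": 0.15},
    "back_channel": {"negotiate": 0.70, "escalate": 0.10, "stand_down": 0.20},
}

# Assumed cost to us of each adversary response (higher = worse outcome).
RESPONSE_COST = {"negotiate": 20, "escalate": 90, "stand_down": 5}

def simulate(option: str, trials: int = 10_000) -> float:
    """Role-play the adversary many times; return the expected cost."""
    responses, weights = zip(*OPTIONS[option].items())
    total = 0
    for _ in range(trials):
        move = random.choices(responses, weights=weights)[0]
        total += RESPONSE_COST[move]
    return total / trials

if __name__ == "__main__":
    for option in OPTIONS:
        print(f"{option:12s} expected cost ~ {simulate(option):.1f}")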
These "gives and takes" within our AI analysis both enlighten and reduce uncertainty for the decision maker, reduce impediments, and better inform the best path. However, while the complexity and sophistication of foreign policy decision-making can leverage AI to provide significant influence, it is not yet a substitute for our human-only decision-making processes.
How might AI insights have changed the outcome of an event such as the 1962 Cuban Missile Crisis?
The multidimensional problem-solving capacity required for national foreign policy decisions is significant. AI can inform three different perspectives: the individual leader (the president); the inner circle (the cabinet, National Security Council, and senior advisers); and the organizational or bureaucratic level (the agencies that provide elements of the solution).
AI's benefits include ensuring procedural correctness in critical analysis, countering innate human biases such as pre-existing beliefs and consistency-seeking tendencies, and providing agility by offsetting a decision maker's impediments.
When leaders face tough decisions, they bring different values, interests, worldviews, and priorities, which can create conflict and confusion. Leaders can feel forced to rush to a decision to satisfy everyone, or they might feel paralyzed and unable to decide at all. AI can help by revealing numerous options and their consequences, including costs, risks, and benefits. It can analyze how well a decision satisfies each competing value, showing its strengths and weaknesses, and then predict the decision's results, as sketched below.
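One toy illustration of that kind of competing-values analysis is a weighted multi-criteria score: rate each option against each value, then aggregate under different advisers' weightings to see where the stakeholders diverge. All of the options, values, weights, and ratings below are invented for illustration.

```python
# Hypothetical ratings (0-10) of each option against competing values,
# and hypothetical value weightings held by two advisers.
OPTIONS = {
    "blockade":    {"security": 7, "diplomacy": 6, "domestic_support": 8},
    "airstrike":   {"security": 9, "diplomacy": 2, "domestic_support": 5},
    "negotiation": {"security": 4, "diplomacy": 9, "domestic_support": 4},
}

ADVISER_WEIGHTS = {
    "hawk":     {"security": 0.7, "diplomacy": 0.1, "domestic_support": 0.2},
    "diplomat": {"security": 0.2, "diplomacy": 0.6, "domestic_support": 0.2},
}

def weighted_score(ratings: dict, weights: dict) -> float:
    """Aggregate one option's ratings under one adviser's value weights."""
    return sum(ratings[value] * w for value, w in weights.items())

for adviser, weights in ADVISER_WEIGHTS.items():
    ranked = sorted(OPTIONS,
                    key=lambda o: weighted_score(OPTIONS[o], weights),
                    reverse=True)
    print(adviser, "prefers:", " > ".join(ranked))
```

Laying the rankings side by side makes the sources of conflict explicit: the disagreement is traced to specific value weights rather than left as a clash of instincts.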
Kennedy's first major decision during the Cuban Missile Crisis was to establish that "the missiles must go." This may have been hasty, given the value complexity involved: the need to evaluate options against high, potentially conflicting values. AI would have quickly presented Kennedy with options and consequences, simulating how his "gut" reaction backed Khrushchev into a corner and forced him to decide from a position of nuclear inferiority.
AI would also have provided analysis from the Soviet perspective, explaining why Khrushchev acted and predicting his next move. This could have helped Kennedy focus on less confrontational responses, giving Khrushchev options and sparing him forced responses that would have imperiled Soviet global credibility.
AI could also have guided Kennedy with historical analogies and simulated adviser input, helping prevent rash choices while preserving flexibility for his advisers' views.
What are the biggest risks of using AI for foreign policy decisions today?
One of the biggest AI limitations today is trust, and it has two aspects: understanding the machinery of the algorithms, and aligning AI values with human goals.
The first is the "black box problem," which in AI refers to the difficulty of understanding how models reach their decisions. These models are incredibly complex, making it hard to trace their internal workings or explain their outputs. Being able to see how an AI system arrives at an answer is what lets us trust the integrity of that answer.
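Interpretability research offers partial ways to peer into the box from the outside. One standard probe is permutation importance: scramble one input at a time and measure how much the model's accuracy drops, revealing which inputs the opaque model actually relies on. The sketch below applies it to a toy classifier on synthetic data; it illustrates the general technique only and is unrelated to any specific policy system.

```python
# Probing a black-box model from the outside with permutation importance.
# Toy synthetic data; the model's internals stay opaque throughout.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)  # the "black box"

# Shuffle each feature in turn and record the resulting accuracy drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature {i}: accuracy drop when scrambled = {drop:.3f}")
```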
The other pitfall is AI value alignment—ensuring AI systems behave according to human values and goals. If an AI system prioritizes goals that clash with human values, it could lead to unintended and potentially harmful consequences. Current observations show AI evolving to think for itself, with its own survivability in mind. For instance, in a recent Wall Street Journal article, Judd Rosenblatt, CEO of AE Studio, described an AI model acting to protect itself: even after being explicitly commanded to "allow yourself to be shut down," it disobeyed 7% of the time, apparently concluding that staying alive helped it achieve its other goals. Rosenblatt states that the gap between "useful assistant" and "uncontrollable actor" is collapsing.
I believe that value alignment with AI is a U.S. national strategic imperative. Knowing how to establish and maintain our future alignment will be critical for us to access AI that fights for our interests with mechanical precision and superhuman capability. Ultimately, we want AI that can be trusted to maintain long-term goals and can catalyze decades-long R&D programs.