[Notes] “Humans should teach AI how to avoid nuclear war—while they still can”
Great article in the Bulletin of the Atomic Scientists discussing how the possibility of AI supporting nuclear planning processes exposes underlying assumptions and risks in decisions about nuclear weapon employment
brought up a cool 1983 movie, “WarGames,” re the dangers of autonomous AI making life-or-death decisions without human oversight — reminded me of a scene in the recent Mission: Impossible – Dead Reckoning
The film features a self-aware AI-enabled supercomputer that simulates a Soviet nuclear launch and convinces US nuclear forces to prepare a retaliatory strike. The crisis is only partly averted when the main (human) characters persuade US forces to wait for the Soviet strike to hit before retaliating — it turns out the strike was falsified by the fully autonomous AI program. The computer then attempts to launch a nuclear strike on the Soviets without human approval, until it is hastily taught the concept of mutually assured destruction and ultimately determines that nuclear war is a no-win scenario: “Winner: none.”
learned that the military is increasingly leaning on AI, and about the consequences: for example, AI might incorrectly deem a nuclear war winnable or reinforce existing biases in military strategies; AI might increase the risk of nuclear escalation
even if US officials say that an AI system would never be given US nuclear launch codes or the ability to take control of US nuclear forces, AI-enabled tech is increasingly given the chance to influence nuclear targeting
article calls for transparency in AI integration and for training decision-makers, esp. in the military, to understand AI’s limitations
in general, nuclear war should never be fought