OpenAI Chief Executive Officer Sam Altman has left the door open to collaborating with the Pentagon on future weapons platforms, citing the unpredictability of global developments. Speaking at the Vanderbilt Summit on Modern Conflict and Emerging Threats on Thursday, Altman said, “I will never say never, because the world could get really weird.”
He clarified, however, that such work is not anticipated “in the foreseeable future” unless faced with extreme circumstances. Altman also voiced ethical concerns, noting, “I don’t think most of the world wants AI making weapons decisions.”
AI Industry Shifts Toward Defense Sector
Altman’s comments come amid a noticeable change in the AI industry’s stance on defense collaboration. In contrast to previous resistance—such as Google employees protesting Pentagon contracts in 2018—more AI companies today are willing to engage with national security agencies.
One example is OpenAI’s partnership with defense technology firm Anduril Industries Inc., announced in December. The collaboration focuses on anti-drone technology and reflects OpenAI’s evolving national security policy.
Governments Urged to Improve AI Adoption
Altman also highlighted a lag in public sector AI integration. “I don’t think AI adoption in the government has been as robust as possible,” he said, encouraging greater strategic use of emerging technologies. He predicted the rise of “exceptionally smart” AI systems by the end of 2025, adding urgency to public sector preparedness.
New Model, New Era
The discussion, moderated by OpenAI board member and former NSA director Paul Nakasone, occurred just days before the expected release of OpenAI’s advanced “o3” reasoning model. The summit gathered attendees from intelligence agencies, military circles, and academia.
Altman’s openness to future Pentagon work reflects the delicate balance between innovation and ethical responsibility that AI companies are increasingly being asked to navigate.
