OpenAI Pentagon AI Deal Controversy
Caitlin Kalinowski, OpenAI's head of robotics, resigned on March 10, 2026, citing concerns over the rushed Pentagon AI deal and a lack of safety guardrails; CEO Sam Altman had himself called the initial deal "opportunistic and sloppy." As of March 10, 2026, OpenAI faces internal debate and public backlash over the deployment of its AI on classified networks, despite outlining "red lines" against mass domestic surveillance and autonomous lethal weapons. Over 30 researchers from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief on March 9, 2026, supporting Anthropic in its legal dispute with the Pentagon over similar ethical concerns. Altman stated on March 3, 2026, that OpenAI is amending its contract with the Department of War to explicitly prohibit the intentional use of its AI for domestic surveillance or for directing autonomous weapons, emphasizing that OpenAI retains full discretion over its safety stack.
Timeline
2026
15 updates
OpenAI's head of robotics, Caitlin Kalinowski, resigned citing concerns over the rushed Pentagon AI deal and a lack of safety guardrails. The company is facing backlash and internal debate over deploying AI on classified networks, with OpenAI outlining "red lines" against mass domestic surveillance and autonomous lethal weapons. CEO Sam Altman acknowledged the initial deal was "opportunistic and sloppy."
Over 30 researchers from OpenAI and Google, including Google DeepMind's chief scientist Jeff Dean, have filed an amicus brief supporting Anthropic in its legal dispute with the Pentagon. The researchers argue that the Pentagon's actions, which labeled Anthropic a "supply-chain risk" after it refused demands to remove safeguards against using AI for mass domestic surveillance and fully autonomous weapons, could harm the broader U.S. AI ecosystem.
via openai.com
OpenAI CEO Sam Altman stated on March 4, 2026, that the company does not control how the Pentagon uses its AI products in military operations. The clarification came amid scrutiny of the military's AI usage and ethical concerns raised by AI workers. Altman reportedly cited examples such as the Iran strike and the Venezuela invasion to illustrate that OpenAI employees do not make operational decisions.
OpenAI is amending its contract with the Department of War to explicitly prohibit the intentional use of its AI for domestic surveillance or directing autonomous weapons. CEO Sam Altman acknowledged the initial deal was 'opportunistic and sloppy' and stated that OpenAI retains full discretion over its safety stack. The revised agreement includes specific 'red lines' and requires human approval for high-stakes automated decisions.
OpenAI is amending its contract with the Pentagon to explicitly prohibit the intentional use of its AI for domestic surveillance of US persons, following public backlash. CEO Sam Altman confirmed the changes on March 3, 2026, stating the company had been working with the Pentagon to add clearer language and that he would 'rather go to jail' than follow an unconstitutional order.
via Axios·mashable.com
The conflict highlights a broader debate within the tech industry regarding the ethical use of AI in national security, with some advocating for unrestricted military use and others warning of the erosion of democratic safeguards.
via pbs.org
President Trump's order for federal agencies to cease using Anthropic's technology includes a six-month transition period for agencies already relying on its systems. He also warned of "major civil and criminal consequences" if Anthropic does not comply during this phase-out period.
via foxbusiness.com
The Pentagon's decision to designate Anthropic a supply chain risk could impact its ability to do business with other U.S. companies, as defense contractors are now required to certify they are not using Anthropic's models.
OpenAI's agreement with the DoD includes provisions for technical safeguards and restricts deployment to cloud networks, with Field Deployment Engineers (FDEs) assigned to ensure safe use. OpenAI also urged the DoD to extend similar terms to all AI firms to mitigate potential legal and regulatory disputes.
via phemex.com
Anthropic announced it would challenge the "supply chain risk" designation in court, viewing it as legally unsound and a dangerous precedent for companies negotiating with the government. The company maintained its stance against using its technology for mass surveillance or developing fully autonomous weapons.
via jpost.com
Following OpenAI's agreement, the Pentagon officially designated Anthropic as a "supply-chain risk to national security." Defense Secretary Pete Hegseth stated that no contractor, supplier, or partner doing business with the U.S. military could conduct commercial activity with Anthropic, significantly impacting the AI startup's business.
OpenAI announced an agreement with the Department of Defense (DoD) to deploy its AI models on classified networks. CEO Sam Altman highlighted that the DoD demonstrated respect for safety and agreed to OpenAI's core principles, including prohibitions on domestic mass surveillance and requirements for human responsibility in the use of force.
via jpost.com
OpenAI CEO Sam Altman stated that the government and Pentagon require AI models and partners, acknowledging Anthropic's position while pursuing his own deal. He expressed that while he doesn't believe the Pentagon should threaten companies with the Defense Production Act, it's essential for AI companies to collaborate with the government and military, provided legal protections are met.
President Donald Trump ordered all federal agencies to immediately halt the use of Anthropic's AI technology, citing concerns over its deployment in military operations. This directive came amid a standoff where Anthropic refused to grant the Pentagon unrestricted rights to use its Claude models, insisting on limitations against mass surveillance and autonomous weapons.
via pbs.org
The Department of Defense (DoD) began increasing pressure on Anthropic, with Defense Secretary Pete Hegseth reportedly asking defense contracting giants about their use of Claude due to a "potential supply chain risk declaration." This action followed months of contentious negotiations over the terms of Claude's military use.
2025
1 update
The U.S. General Services Administration (GSA) announced a OneGov agreement with Anthropic to provide Claude for Enterprise and Claude for Government to all three branches of the U.S. government for a nominal fee of $1. The initiative aimed to position the United States as a leader in government AI adoption and to support the White House's America's AI Action Plan.
via gsa.gov