There is a version of this story where Silicon Valley holds firm. The companies that built the most powerful technology in history weigh the contracts, access, and influence offered to them and decide some deals are too costly. That version did not happen. Instead, we were shown how power works: not only through ultimatums but also through ambitious men who eagerly accept them.
On its surface, the Pentagon-Anthropic dispute was a contract negotiation. The Defense Department wanted broad authority to use Anthropic's AI for any lawful military purpose. Anthropic, whose founding philosophy centers on AI safety and responsibility, wanted contractual safeguards: limits on autonomous weapons and assurances that its technology would not aid domestic surveillance. These were not extraordinary requests but baseline conditions for a serious company. Defense Secretary Pete Hegseth rejected them. When Anthropic refused to remove these protections, the Pentagon labeled it a supply-chain risk, a designation usually reserved for foreign-linked firms, and began excluding its technology from use by other contractors.
The designation was legally questionable. Government procurement scholars noted that the executive branch can terminate its own contracts but may lack the authority to bar a company from working with other contractors. Anthropic announced it would challenge the designation in court. The dispute was not resolved; it was sidestepped.
While Anthropic’s negotiators remained in talks, Sam Altman entered through another door.
OpenAI’s agreement with the Defense Department arrived quickly, framed as a contrast and almost certainly intended as one. Where Anthropic had insisted on safeguards, OpenAI offered terms the Pentagon found agreeable: the military could use its systems for any lawful purpose. OpenAI noted that it would implement technical guardrails and later amended its contract to include language prohibiting the intentional use of the technology for the surveillance of American citizens. Experts in the field were measured in their assessment of such commitments. Technical safeguards can fail. They can be bypassed. Even well-functioning systems can contribute indirectly to intelligence operations or weapons development. Once technology is embedded in government infrastructure, the company that built it retains little practical leverage beyond the ability to walk away, a choice that carries its own cost.
The gap between Anthropic and OpenAI in this episode is not simply a difference in risk tolerance or legal strategy. It reflects something more fundamental about what each company believes it is building and for whom. Anthropic was founded in part by researchers who left OpenAI over concerns about the pace and direction of AI development. Its safety focus is not incidental. It is the enterprise’s organizing principle. OpenAI, whatever its origins, has increasingly become something else: a company in motion toward scale, toward influence, toward the kind of proximity to institutional power that its chief executive has made no attempt to conceal.
Sam Altman’s relationship with the Trump administration did not begin with the Pentagon contract. It began earlier, as many consequential relationships in American political life do: with money. Altman personally donated $1 million to President Trump’s inaugural fund. He attended White House meetings in the administration’s early weeks. He positioned himself, and by extension his company, as a partner in the administration’s ambitions for artificial intelligence dominance. The Stargate initiative, a joint venture involving OpenAI, SoftBank, and Oracle that pledges hundreds of billions of dollars in AI infrastructure investment, was announced at the White House, with the president standing alongside its principals. The alignment was not accidental. It was curated.
None of this is illegal. Writing checks to inaugural funds is legal. Attending White House meetings is legal. Signing government contracts is legal. The question is not about broken laws, but about what the law allows and what results from it.
What it produces is this: the most capable artificial intelligence systems in the world, developed by a company whose chief executive has made himself a trusted figure in the current administration, now operating under agreements that give the U.S. military broad deployment authority, monitored primarily by safeguards that the same company designed, reviewed, and can modify. The oversight architecture depends almost entirely on OpenAI’s willingness to police itself. The government, the primary counterparty to these agreements, is led by an administration to which OpenAI’s chief executive has publicly demonstrated financial and operational loyalty.
This is the structure of the arrangement. It is not a conspiracy, but rather a durable alignment of interests, established by contracts and closeness to power.
The dangers of this arrangement are real. AI governance experts have warned for years about integrating powerful technology into military and intelligence infrastructure before the rules are set. The Pentagon-Anthropic dispute exposed what was already underway: AI is being woven into national security, battlefield logistics, and intelligence analysis. Legal and ethical structures lag behind. Congress has not passed a comprehensive AI law. No independent body has set enforceable standards. The rules are being written through private negotiation, contract language, and executive decisions shaped by commercial pressure, political pressure, and institutional relationships.
Anthropic tried—imperfectly—to embed some rules in its contracts before deployment. It lost. The Pentagon found a more flexible partner. The takeaway for boardrooms and government is not that safety will be prioritized, but that raising concerns incurs a cost.
Sam Altman is not the antagonist of a story this large. But he is the protagonist of a particular choice. He is a man of exceptional intelligence and ambition who has chosen to make his company indispensable to an administration whose relationship with civil liberties, democratic norms, and institutional restraint is, to be charitable, contested. He has done so while accepting money from that administration’s political apparatus, attending its ceremonies, and aligning his company’s posture with its priorities. Whether this is pragmatism or something else is a question only he can answer. What is observable is the outcome.
The outcome is that OpenAI now occupies a position that should give pause to anyone who uses its products or depends on the AI systems it builds. It is not a neutral technology company operating at arm’s length from political power. It is a company whose chief executive has deliberately chosen which power to court and on what terms. Those terms, as they now stand, give a presidential administration with expansive views of executive authority broad access to AI systems for military operations, intelligence gathering, and purposes that remain incompletely defined.
For many, this feels abstract. ChatGPT is a tool for work and writing. The parent company’s politics seem distant from the autocomplete on a laptop. But that distance is an illusion. Every subscription, API call, and enterprise contract with OpenAI increases its leverage and strengthens its ability to claim it is too important to regulate. Revenue is not neutral. It builds influence, and that influence is being deployed.
There is something Americans can do that does not require legislation, a lawsuit, or waiting for Congress to construct the national framework that experts say is needed. They can stop paying for it. They can close their ChatGPT accounts. They can direct their organizations to evaluate alternatives. They can make the market speak in the only language that corporate strategy cannot ignore.
Boycotts are easy to dismiss. They are partial, slow, and rarely produce the clean outcomes their organizers announce. But they are not nothing. They are one of the few mechanisms available to ordinary people who want to register that they see what is happening and have decided it is not acceptable. The roughly 338 million Americans who are not Sam Altman, not Pete Hegseth, not a Defense Department contracting officer did not negotiate this arrangement. They were not consulted. The decisions were made in rooms they were not in, by people whose accountability to them is at best indirect.
What they have is their participation. What they have is the choice of which systems they sustain with their attention, data, and money. A company that has chosen to align itself with political power at the expense of principle should learn what it costs to make that choice. The people are not powerless here. They are, in fact, the market. And markets, when they move, are heard.
The country will continue to build its artificial intelligence infrastructure, regardless of whether anyone cancels a subscription. The Pentagon will continue to integrate AI into its operations. The questions about autonomous weapons, domestic surveillance, and the governance of systems that can affect life and death will remain unsettled long after this particular dispute leaves the news cycle. But the companies that build these systems should not be able to conclude that there is no cost to choosing power over principle. They should not be able to look at the users who fund their operations and calculate that those users will not notice, or that if they do, they will not care.
Show them otherwise. Delete your OpenAI account.