On a San Francisco pavement, strangers wrote love letters to a machine-learning company. What they were really mourning was the last plausible line between software and slaughter.

The messages appeared overnight, sometime before dawn on Friday, 28 February, scrawled in coloured chalk along the pavement outside 500 Howard Street, San Francisco — the headquarters of Anthropic, the artificial intelligence company that had just refused to let the United States military use its technology without restriction. ‘Thank you for defending our freedoms,’ read one, in a child’s looping hand. ‘Have courage.’ ‘God loves Anthropic.’ Someone had drawn several small American flags. Someone else had quoted Nelson Mandela. By early afternoon, the words had already begun to fade, scuffed into pastel ghosts by foot traffic and the Pacific damp — a memorial to a principle that might not survive the week.
Inside the building, Anthropic’s engineers were adjusting to a new reality. Hours earlier, President Donald Trump had posted on Truth Social that every federal agency must ‘IMMEDIATELY CEASE all use of Anthropic’s technology,’ adding, with the syntactical restraint for which he is known: ‘We don’t need it, we don’t want it, and will not do business with them again!’ Defence Secretary Pete Hegseth, using the Pentagon’s Trumpian rebrand as the ‘Department of War,’ followed by designating Anthropic a ‘Supply-Chain Risk to National Security’ — a classification ordinarily reserved for entities linked to foreign adversaries such as Huawei. The most sophisticated AI system ever granted access to America’s classified defence networks was, by executive fiat, now categorised alongside Chinese state proxies.
The confrontation had been building for months, but its terms were brutally simple. Anthropic, which signed a contract worth up to $200 million with the Pentagon in July 2025, had drawn two red lines: its Claude models would not be used for mass domestic surveillance of American citizens, and they would not power fully autonomous weapons systems in which artificial intelligence, rather than a human being, makes the final decision to kill. The Department of Defence demanded unrestricted access for ‘all lawful purposes.’ When Anthropic’s chief executive, Dario Amodei, refused to capitulate, a senior Pentagon official, Emil Michael, called him a ‘liar’ with a ‘God complex’ who was ‘ok putting our nation’s safety at risk.’ On Tuesday, 24 February, Hegseth gave Amodei until 5.01 p.m. Eastern time on Friday to comply — or face either the Defence Production Act, a Cold War–era emergency statute that grants the president sweeping control over private industry, or the supply-chain designation that would forbid any military contractor from touching Anthropic’s products.
Amodei did not comply. In a public letter published the night before the deadline, he wrote: ‘We cannot in good conscience accede to their request.’ After Trump and Hegseth issued their orders, Anthropic put out a statement that read like a dare: ‘No amount of intimidation or punishment from the Department of War will change our position.’ The company announced it would challenge the designation in court.
The refusal itself was unusual enough. Technology firms do not, as a rule, tell the Pentagon to go away. But the stranger thing was the paradox it exposed. As Amodei himself pointed out, the administration was simultaneously arguing that Claude is so essential to national security that emergency law should be invoked to seize control of it, and so ideologically contaminated that its mere presence constitutes a security risk. ‘I don’t understand it,’ a former senior defence official told The Atlantic. ‘It’s an existential risk if you use it or if you don’t.’
Trump’s Truth Social post called Anthropic ‘woke’ and ‘leftwing.’ In the vocabulary of this administration, woke has become the supreme profanity — a word capacious enough to contain any refusal of absolute obedience. In this case, it means: a company declined to build software that selects human targets without human oversight, and declined to facilitate the bulk collection of Americans’ geolocation data, browsing histories, and financial records. That this constitutes radical politics tells you everything about the political landscape of 2026.
Amodei is no pacifist. He is, by the account of virtually every profile written of him, the most hawkish chief executive in frontier AI, a man who warns with real urgency about the need for democracies to outpace authoritarian rivals — above all China — in the AI arms race. In his letter, he wrote that he believes ‘deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.’ He even left the door open to autonomous weapons in principle, arguing only that today’s models ‘are simply not reliable enough’ to be trusted with lethal decisions. His objection was technical and constitutional, not ideological. The administration treated it as treason.
Within hours of the ban, the fault line in Silicon Valley cracked open. OpenAI’s chief executive, Sam Altman, announced late on Friday that his company had reached a deal with the Pentagon to deploy its models on classified networks. Altman insisted the agreement included safeguards mirroring Anthropic’s red lines — prohibitions on domestic mass surveillance and autonomous lethal weapons. ‘The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,’ he wrote on X. Whether those contractual assurances will prove more durable than Anthropic’s is an open question; the Pentagon reportedly agreed to OpenAI’s conditions within hours of blacklisting the company that had demanded identical ones.
If you are queer and watching this unfold, the ironies cut close to the bone. Altman is openly gay — married to Oliver Mulherin, profiled reverently in The Advocate, named among Fortune’s most influential LGBTQ+ leaders. In some readings, he is the most powerful queer person in technology. And yet: this is also the man who stood beside Trump in the White House in January 2025, who praised the president’s vision on social media, and who has now positioned his company as the compliant alternative to the one firm that told the military machine not yet, not like this. The calculus is transparent enough: OpenAI’s valuation runs on government goodwill and investor confidence, and neither rewards defiance. But the image is hard to unsee — a gay chief executive smoothing the path to a classified weapons contract on the same evening that passersby chalked love notes to the company that refused one.
More than 300 Google employees and over 60 at OpenAI — many of them anonymous, for obvious reasons — signed an open letter supporting Anthropic’s stance. ‘They’re trying to divide each company with fear that the other will give in,’ the letter read. Jeff Dean, Google’s chief scientist, wrote on X that generative AI should not be used for mass domestic surveillance. Staff at Microsoft and Amazon made similar demands. Dismiss them at your peril: these are the people who build the systems, who understand their failure modes, who know what it means when a language model hallucinates a target coordinate or misclassifies a civilian as a combatant. Their collective dissent is the closest thing the AI industry has produced to a conscience.
The stakes for queer micro-societies — the activists, the independent journalists, the community organisers and digital-rights advocates who do their work outside institutional protection — are not abstract. Mass surveillance has never been a hypothetical threat to LGBTQ+ people. The Lavender Scare purges of the 1950s were surveillance operations. The facial-recognition dragnets used to identify and arrest queer people in Egypt, the social-media monitoring deployed in Chechnya — these are surveillance operations. The apparatus has always found its way to us first. The question of whether an AI model should be permitted to hoover up Americans’ personal data at scale — their movements, their associations, their browsing habits — is a question about whether the infrastructure of persecution will be automated and made frictionless. It is a question that lands differently when your existence has historically been classified as a security risk.
Dean Ball, an analyst who helped draft the Trump administration’s own AI policy, called the government’s threats against Anthropic ‘the most aggressive AI regulatory move I have ever seen, by any government anywhere in the world.’ The supply-chain designation could, in theory, force Amazon, Microsoft and Google — companies that contract with the Pentagon and are also Anthropic’s commercial partners — to sever ties with the firm. Amazon, notably, is building the very data centres that will train future generations of Claude. The designation’s legal reach is contested; Anthropic has argued it applies only to Pentagon-specific contracts, not to how its technology is used in civilian contexts. The courts will decide. In the meantime, the message has been sent: defy this administration and your business relationships, your valuation, and your survival as a company are at risk.
Anthropic, valued at roughly $380 billion with $14 billion in annual revenue, can absorb the loss of a $200 million military contract. The harder blow is the chilling effect — the lesson that every other technology company is now studying. Palmer Luckey, co-founder of the defence-technology firm Anduril, and investor Katherine Boyle have publicly backed the Pentagon’s demand for unrestricted access. The next confrontation may not be between the government and a single company. It may be a war within Silicon Valley itself, between those who believe that building the tools obliges you to set limits on their use, and those who believe that once the cheque clears, the rest is someone else’s problem.
Trump gave the Pentagon six months to phase out Claude — an implicit admission that the technology has become essential, and that replacing it will be neither quick nor clean. Palantir, which uses Claude to power its most sensitive military work, will need a new AI partner. Classified operations that relied on Claude — reportedly including the raid that captured Venezuelan President Nicolás Maduro — will need to be rebuilt on different foundations. The government’s own analysts have described the transition as a ‘huge pain.’ And yet, the administration pressed the button anyway, because the alternative — acknowledging that a private company might have the moral authority to set conditions on the use of force — was ideologically intolerable.
While Google, Meta, Amazon and OpenAI spent 2025 dismantling diversity programmes, renaming inclusion pages and scrubbing equity language from their filings, Anthropic — which quietly trimmed some Biden-era commitments of its own — drew its line not on a corporate webpage but on the question of whether its technology would be used to surveil the very communities those programmes were supposed to protect.
On the pavement outside 500 Howard Street, the chalk is gone now. The rain, or the municipal cleaners, or just the passage of strangers’ feet across concrete. What remains is the confrontation itself: a company that said there are things its technology should not do, and a state that said there is nothing it should not be allowed to do with any tool it purchases. For those of us in queer communities who have always understood that the line between safety and surveillance is thinner than any government admits, the choice of whom to trust with our data, our patterns, our digital lives is not a consumer preference. It is a matter of survival.
We will use Anthropic. A corporation is not a moral actor — we are under no illusions about that. But for this moment, in this particular configuration of power, they were the ones who said no.