"All Lawful Purposes"
Mythos, the Department of War, and the three words that I suspect will be the source of a lot of trouble going forward.
The NSA is using a model. Anthropic, the company that makes the model, is in two federal lawsuits with the Department of War. NSA sits inside the Department of War.
If that doesn’t sound right, congratulations, you’re paying attention. (Yes, it’s still weird using “War” instead of “Defense,” but... here we are.)
Calling Something Dangerous While Using It
The War Department designated Anthropic a “supply chain risk” in March, the first time the federal government has done that to an American company. (The designation was built for foreign supply-chain compromise, Huawei and Kaspersky, not for U.S. vendors saying no to a clause.) On May 1 the Pentagon announced AI procurement deals with seven vendors for use in classified networks (OpenAI, Google, Microsoft, AWS, Nvidia, SpaceX, Reflection) and pointedly left Anthropic off the list.
The same week, the Pentagon’s CTO, Emil Michael, went on CNBC and said two things in one breath: that Anthropic is still a supply chain risk the agency doesn’t want in its systems, and that Mythos, Anthropic’s shiny new cyber-defense model, is “a separate national security moment” because “that model has capabilities that are particular to finding cyber vulnerabilities and patching them.” That model, by the way, is the one that autonomously found a 17-year-old FreeBSD bug plus a few thousand zero-days. The NSA has been running it in production for the last couple of weeks (also inside the War Department) while the War Department’s own lawyers argue in federal court that nobody should be using anything from Anthropic at all.
The government is publicly declaring Anthropic a national-security threat. The government is privately running Anthropic’s most capable model in NSA cybersecurity operations. The same agency. The same week.
A federal judge thinks the contradiction looks like retaliation. Judge Rita F. Lin in the Northern District of California granted Anthropic a preliminary injunction in March, finding that the supply-chain-risk designation looked like punishment for Anthropic’s public AI-safety advocacy and was likely to fail on First Amendment grounds. A separate designation under a different statute survived a stay request at the DC Circuit, so Anthropic is currently blocked from War Department contracts but can keep working with other federal agencies while the litigation plays out. Oral arguments are May 19.
It’s a... I believe the legal term is “complete clusterfuck”.
WTF Does “Lawful” Even Mean?
The fight that started all of this is mostly about a single clause. Three words: “all lawful purposes.” Standard language in every Pentagon vendor contract. Vendors agree to permit any use of their product that doesn’t violate U.S. law. It gets signed thousands of times a year by every defense contractor in the country. Nobody fights about it, because it makes sense. Kinda. Unless you think about it for a moment.
Anthropic fought about it.
On its face the clause sounds like a constraint. The government gets to use the product, but only for things that are legal. That looks like a limit, but in this case, it’s really not. It’s a blank check that calls itself a limit.
“Lawful” is whatever the government decides is lawful. Under at least some readings of U.S. law, mass domestic surveillance of citizens is lawful. Under at least some readings, fully autonomous weapons systems are lawful. Under at least some readings, basically anything the executive branch wants to do is lawful, until a court says otherwise, and the court is interpreting the same legal framework the executive operates under. “All lawful purposes” doesn’t mean “purposes that are legal in some objective sense.” It means “purposes the government has decided are legal.” The constraint is being defined by the same party invoking it.
Secretary of War Pete Hegseth said the quiet part loud in his AI strategy memorandum, quoted in Anthropic’s court filings: “The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.” Read that with the emphasis Hegseth chose. Full, unrestricted access, every LAWFUL purpose, and the government deciding what’s lawful.
Anthropic could see that. They have publicly committed not to enable mass domestic surveillance or fully autonomous weapons, and “all lawful” is the clause that would have authorized either of those, on terms set by the buyer.
The War Department’s response was to designate Anthropic a supply chain risk, the way they would treat a foreign-state-influenced company with hidden allegiances. That designation cuts off Anthropic from federal procurement broadly, not just at the War Department. It’s the maximum penalty the executive branch has for a vendor that says no.
Mythos Doesn’t Care About Politics
While all of this has been going on, Mythos has been finding bugs. The model has continued doing what it does, through the supply-chain-risk designation, through the seven-vendor announcement, through the federal lawsuits, through the press cycle. NSA has been running it in production. Project Glasswing’s forty-something companies have been deploying it. The model hasn’t been waiting for the procurement question to resolve.
Which means the procurement fight is not really about access. The government already has access. The government has been using the model the whole time the lawyers have been arguing it shouldn’t have access. The fight is about whether Anthropic will formally authorize the access the government is already exercising, on the terms the buyer wants.
Anthropic said no. The government called Anthropic a national-security threat for saying no. The government also kept using the model, because the model is useful, and “all lawful” was always less about constraining what the government does with the model and more about getting the company to put its name on whatever the government decides to do with it.
That’s the structure. A procurement designation that says the company is dangerous. An on-record acknowledgment from the same agency that the company’s product is essential. A Secretary of War who says the quiet part loud about “every LAWFUL purpose.” A federal judge who looked at the package and called it First Amendment retaliation. A vendor that refused to sign. A deployment that kept happening anyway.
Three words started the fight. Forty-ish deployments are happening anyway, with more likely coming very soon.
Not to mention, some people are already saying the new ChatGPT model is at least close to being on par with Mythos in exploit hunting. The tech doesn’t care about the politics... but we should, because the politics is what will end up deciding who has access to the most effective, useful, and... dangerous... tools. And I, for one, am wary of trusting those choices to politicians who need help resetting their email passwords.

Yeah, I think you are correct. My presumption is that all this will move far too fast for society to develop anything like a rational response. We will end up trying to build something new on the other side, amongst the wreckage. That is, of course, assuming there is another side of this and it doesn't just accelerate endlessly.
Indeed. My guess: those with the least ethical restraint win by unleashing the most bots on upcoming elections. And if you think things are bad now...