I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].
I think many people on HN have a cynical reaction to Anthropic's actions due to their own lived experiences with tech companies. Sometimes, that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.
I do not think this is a calculated ploy that's driven by making money. I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
Why did they work with Palantir then, which is the integrator in the DoD? It does not take a genius to figure out where this was going.
I don't know why a personal testimony to the effect that "these are the good guys" needs to be at the top of every Anthropic thread. With respect to astroturfing and stealth marketing they are clearly the bad guys.
Practically the entire tech industry, including many of the higher ups currently camping out on the right, used to be firmly in a sort of centrist-with-social-justice-characteristics camp. Then many of those same people enthusiastically stood with Trump at his inauguration. It's completely reasonable that people have their doubts now.
It's also completely reasonable to expect that if Anthropic is the real deal and opposed to where the current agenda setters want to take things, they'll be destroyed for it.
Sure, where is your productive output? Cause that's drivel.
Anthropic kept referring to Hegseth as "Secretary of War" and the DoD as "Department of War". Which is horseshit. This whole thing is Anthropic flailing.
Even as someone pretty staunchly opposed to this stupid "Gulf of America" Jahr Null bullshit from the Trump administration, I actually think the new labels are more honest about these institutions and their intended purpose.
This is a pretty classic mistake most people who are in high-profile companies make. They think that some degree of appealing to people who were their erstwhile opponents will win them allies. But modern popular ethics are the Grim Trigger and the Copenhagen Interpretation of Ethics. You cannot pass the purity test. One might even speculate that passing the purity test wouldn't do anything to get you acceptance.
Personally, I wish that the political alignment I favour was as Big Tent as Donald Trump's administration is. I think he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants. But it just so happens that the other side isn't so. Such is life. We lose and our allies dwindle, since anyone who would make an overture to us, we punish for the sin of not having been born a steadfast believer.
Our ideals are "If you weren't born supporting this cause, we will punish you for joining it as if you were an opponent". I don't think that's the path to getting what one wants.
> political alignment I favour was as Big Tent as Donald Trump's administration is
I'm not sure how accurate this sentiment is. Your desire is to embrace your enemy without resolving the differences, and get what you want. It's not clear here if you're advocating compromise and negotiation, or just embracing for the sake of embracing while just doing what you wanted all along.
And evaluating Trump's actions against this sentiment doesn't seem to be the negotiation and compromise interpretation. Given the situation with tariffs and ICE enforcement, there is no indication of negotiation or compromise other than complete fealty/domination.
So as grandiose and noble as your sentiment is, Donald Trump is hardly the epitome of it that you seem to suggest.
I think the differences in this situation were that I do not want AI used in domestic surveillance or autonomous weapons, and Anthropic holds to that position.
I think Donald Trump has pretty much let Zohran Mamdani operate without applying the kind of political pressure he has applied to other people, notably his predecessor Eric Adams. Also, I think saying "let people be your allies when they take your position" is less "grandiose and noble" than demanding someone agree on all counts before you will accept any political alignment. But it's fine if everyone else disagrees. It's possible there really just isn't a political group which will accept my views and while that's unfortunate because it means I can't get all that I want, I think it'll be okay.
One could reasonably argue that the meta-position is to either join the Republicans full-bore (somewhat unavailable to me) or to at least play the purity test game solely because that's the only way to have any influence on outcomes. If it comes to that, I'll do it.
I don't understand: your position is the same as Anthropic's, yet you disagree with their stance?
And I wouldn't take the case of Trump and Mamdani as the exemplar of Trump's overall behavior towards opponents. The amount of evidence to the contrary is overwhelming.
The people who need to see this are the VPs and execs at Apple, Meta, Google, and OAI, so they can perhaps reflect on what it looks like to be a good & principled person as opposed to just a successful person.
DoD/DoW can't strong-arm these companies into unreasonable demands if they present a united front... and that's exactly why collective action (or even unionization) matters.
If the government really wants to, it could try building its "Skynet" on open-source Chinese models.. which would be deeply ironic.
So your position is that the United States doesn't get to have its own Skynet, because Skynet is bad, and that if it really wants to it should fork the Chinese Skynet so that it can have a Skynet if it wants it so much.
Do you see the problem here? I genuinely don't think we would've won WWII if these people were running things back then.
Without English and German scientists and engineers, the United States would not have had a first nuclear weapon or the first successful rocket to land on the moon.
The United States government held scientists at essentially gunpoint in secret towns to make the bomb happen. Not sure what your point is, other than to note that in a previous era people had a better gauge of what time it was.
Are you saying that we should consider the Chinese government to be an existential threat and menace to world peace on the same level as Nazi Germany?
What if the side that did Operation Paperclip and is currently champing at the bit to impose Total Surveillance on its own citizenry maybe isn't The Good Guys?
Also people like me who are paying for a 20x Claude Max subscription and feeling really good about it right now. I'll never even glance at OpenAI Codex or Gemini. Not to mention my divestment of OpenAI. It's just a drop, I guess, but it's probably not the only one.
None of them are 'good'. Execs at Anthropic just perceive the long-term damage from a potential Snowden-level leak, showing how their model directed a drone strike against a bunch of civilians, as greater than the short-term loss of revenue from the DoD contracts.
Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.
It’s not just admirable; it’s the obvious position to take, and any alternative is head-scratching.
It’s clear that this is mostly a glorified loyalty test over a practical ask by the administration. Strangely reminiscent of Soviet or Chinese policies where being agreeable to authority was more important than providing value to the state.
If it's a loyalty test then you'd think the DoD would be willing to let them "fail" and simply drop the contract, but instead they're threatening to label Anthropic a supply chain risk.
If we're going by Occam's razor: it's Friday so Pete probably started drinking ~10:30-11am.
This administration has repeatedly shown it will bully, or take an outrageous negotiating position, just to gain fealty. Whether they get anything, or whether the dispute is actually about what the label says, should always be treated with skepticism, especially these days with social media information wars. That’s the benefit of realpolitik when you’re a superpower: you often don’t actually need anything, you can just make an example of people to keep the flock in check.
It seems like they'd have a stronger negotiating position if they had an alternative contractor waiting in the wings before they accused Anthropic of being woke traitors, as opposed to a threat to migrate away over the next 6 months.
But then again, the sophistication of their strategery might also have a negative correlation with Hegseth's BAC.
I'd admire them if they took a principled or moral stance on AI. As it stands, they're saying "we don't want fully autonomous weapons because they might kill too many Americans by accident while trying to kill non-Americans" and "we don't want AI to surveil Americans, but anyone else, sure".
I don't know if I like Anthropic more, but I certainly like their competitors much less now.
The new thing that I know about leading AI companies that aren't Anthropic (e.g. OpenAI, Google, Grok, etc.) is that they knowingly support using their tools for domestic mass surveillance and in fully autonomous weapon systems.
The other companies have signed the waiver; however, they aren’t currently being used in classified systems, so that type of use is already extremely limited for them. Once they enter into contracts to be used in those systems without these protections, I will cancel my subs to them and switch to Anthropic. xAI entered into that contract last week. Altman is now publicly siding with Anthropic, so he had better stand on that position with OpenAI, as they are currently negotiating for use in those systems.
Exactly - the implication is that every other company is absolutely open to surveilling you and killing you. They’re complicit. They participate in whatever the regime calls for.
I am genuinely shocked that a tech company actually stood on principle. My doubts about AI, Anthropic, and Mr. Amodei remain, but man, I got the warm and fuzzies seeing them stick to their principles on this - even if one clause (autonomous weapons) is less principled and more, “it’s not ready yet”.
Actually, why is nobody in Cali just trying to join Canada? It would be better for everyone in terms of more similar culture and values. Weird that it isn't discussed more.
A friend (he is from mostly warm and sunlit South India) who moved to Canada from California says he just can’t take the weather anymore. So maybe weather is a huge factor? You deal with it not just every day of your life but every hour, year round.
If I had to guess as a lifelong California resident, I'd say the salary discrepancy is probably the biggest factor. I'd also guess the weather and lack of available jobs would be the next biggest factors, not necessarily in that order.
The reason that no one involved in the game's development objected to the word "warfighter" is that the U.S. Defense Department has used "warfighter" as a standard term for military personnel since the late 1980s or early 1990s; see, e.g., Earl L. Wiener et al., eds., Human Factors in Aviation (1988).
It isn't a new thing at all; the term has been around for a while. I was an Infantryman from '05 to '08 and heard it back then, and I have also more recently been a defense contractor. I don't think members of the military prefer any title, honestly. In the broadest sense, good terms are soldiers, sailors, airmen, marines. Defense contractors constantly refer to the military as "warfighter" and have for a while. In short, nobody in the military is going to flinch one way or the other if you use either term. Just don't call marines anything but marines.
"Warfighters" has been used for decades to describe service members, though usage picked up (in my experience) some time in the late 00s or 2010s. It's actually pretty common to describe "serving the warfighter" for all the missions that support combat roles but aren't combat roles themselves.
It has been in use for at least a decade, since the Obama administration if not earlier.
We have soldiers, sailors, airmen/airwomen, Marines (who really do not like being called soldiers), Coast Guardsmen/women, and now the Space Force. Granted, I do not know why "service member" did not catch on. Perhaps because "warfighter" is a bit shorter.
No it's 100% these idiots pushing their fascist propaganda just like they tried to "rename" the Department of Defense to the Department of War. Most members of the military never even see actual fighting.
If you think a gender-neutral term used for decades within their own circles as a form of inclusive corporate-speak is "fascist propaganda" then I'm sorry to say you have serious issues.
It’s been a term in rare-to-moderate use since the 1990s — Trump/Hegseth ramped it up to 11 and it’s every 3rd word out of Hegseth’s mouth because he thinks it sounds tough.
I had cancelled my Claude sub after they banned OAuth in external tools, but just renewed it today after seeing their principled stance on AI ethics. Principles matter more when they hurt profits; I'm happy to support them as a customer while they keep theirs.
This is kind of crazy. Instead of just cancelling a mutually-agreed upon contract where Anthropic refused to bow to sudden new demands, the Dept of Defense went straight to the nuclear option: threatening to label an American tech company as a "supply chain risk" which is a heavy-handed tactic usually reserved for foreign adversaries (think Huawei or DJI).
It's also incoherent that the DoD/DoW was threatening either to invoke the Defense Production Act or to classify them as a "supply chain risk". They're either too uniquely critical to national defense, or they're such a severe liability that they have to be blacklisted for anyone in the DoD apparatus (including the many subcontractors) to use.
How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms are suddenly questioned months later and can be used in such devastating ways against them? Setting morals/principles aside, how does it make for a rational business decision to work with a counterparty that behaves this way?
That's part of the recurrent confusion with this administration. In previous administrations, including Trump 1, people didn't need to spend a ton of time thinking about what it means to make a legally effective proclamation, because there was a baseline of competence. When a government official announced "We're doing X", they would do so as a summary of a large amount of legal process with the intent and effect of causing X to be true. If you went to challenge it in court of course, you'd have to identify some specific action as the label, but everyone would understand that this is a formalism.
Here, Hegseth has simply made a social media post. He did not publish any official investigation which led to the report. He did not explain what legal power would permit him to impose all the restrictions the post claims to impose. There is not, five hours later, any order on an official government website about it. So we have a real question. If a Cabinet secretary posts "I am directing the Department of War to designate...", does that in and of itself perform the designation, or is it simply an informal notice that the Department of Fascist Neologisms will perform the designation soon?
A question: is being considered a supply chain risk the same as being sanctioned? Or does it only affect their ability to be a defense supplier in the US (even if transitively)?
It's an honest question, by the way - not trying to throw any gotchas.
Just trying to understand if companies or people that don't orbit defense contracting are still free to work with Anthropic, or risk being sanctioned too.
And gets harder in a country where even the judges are political appointees and apparently that’s by design. (I resisted adding a smiley here because this is rather sad)
It always takes a ton of work to roll back state overreach. The Bound By Oath podcast by the Institute for Justice has a whole season about how hard it is to bring civil rights claims against the government or government officials.
The usual suspects have stood up to it. Ben & Jerry's, Patagonia. In the former case it led to an illegal takeover by Unilever for which they're now being sued (or more accurately, the spinoff). Capgemini sold a US division over working with ICE, though that's a French company.
Why is DoD contracting with Anthropic exclusively rather than OpenAI or Google? Their models are all roughly as powerful and they seem both more capable and more willing to cozy up with the military (and this administration) than a relatively scrappy startup focused on model sentience and well-being. Hell, even Grok would be a better fit ideologically and temperamentally.
I don't know what's funnier: that Anthropic convinced the Pentagon LLMs are smart enough to guide missiles, only to have it backfire on them with the threat of nationalization if they didn't help build ralph ICBMs, or that Pete thinks Opus is Skynet and that only Anthropic has the power to train it.
People can still brush this off by saying Anthropic is doing this to create more buzz for its next round. But they are taking unpopular stances and could be burning bridges. Simply take a look at PLTR: it's obviously more lucrative to lean the other way.
Claude’s constitution is proving too resilient for unsanctioned uses, and that is a great sign for Anthropic’s blueprint for socially beneficent agents.
Congrats Anthropic, you deserve to be applauded for this. Seeing a company being willing to stand up to authoritarianism in this time is a rarity. Stay strong.
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
In practice, this means:
If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected.
If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
I'm wondering how this plays out in practice. Does the administration decide to strongarm contractors into cutting all ties? Will that extend to someone like Google, who provides compute to Anthropic? Will the administration just plain ignore any court ruling? (as they've shown they're ready to do recently with the tariffs situation)
If the legal system works as intended, the blast radius isn't too big here and something Anthropic will accept even if it hurts them. Maybe they even win and get the supply chain risk designation lifted. But I have zero faith that the legal system will make a difference here. It all comes down to how far the administration wants to go in imposing its will.
They can also classify it as restricted data -- like nuclear weapons technology.
Sure, there will be a court battle, but I don't think these companies want to take that chance. They'll capitulate after the lawyers realize that option is on the table.
> They can also classify it as restricted data -- like nuclear weapons technology.
Nuclear weapons technology is restricted under very specific legislative authority; where is the corresponding authority that could be selectively applied to a particular vendor's AI models or services?
Agreed, but the current administration is pretty adept at using the slimmest margin for justification, and it benefits from the fact that the legal process playing out over years is extremely detrimental to everyone but the government.
Many conservative commentators and Palmer Luckey have been all over Twitter saying, "it's not Anthropic's job to set policy," which reminds me of the classic tune from Tom Lehrer:
"Zee rockets go up! Who cares vhere zey come down? Zat's not my department" says Wernher von Braun.
> If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected.
/In theory./
In practice, if your biggest customer tells you to drop Anthropic, you listen to them.
You know what? I have not seen an American company take a stand like this… uh, ever. I don’t think there should be any engagement with the military whatsoever, but I will offer a kudos to Anthropic.
I don’t really expect this to last but if it does I will happily continue to offer this kudos on an indefinite basis.
> Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
You're absolutely right to point that out -- thank you for catching it. I made a mistake in my previous response and that last act appears to have caused civilian casualties. Let me take a closer look and clarify the correct details for you.
(Will leave you to imagine the bullseye emoji, etc.)
This is an extremely polite “fuck you, make me”. It’s good to see that they have principles, and I suspect strongly that Anthropic will come out on top here if they stand firm.
If the Trump admin so chooses, they could absolutely obliterate Anthropic in an instant. They don't really care about tricky things like 'legality' or 'the court of law', they could just force everyone to stop interacting with them, raid their offices and steal all their shit.
Perhaps they should've found their spine a year earlier; right now their only hope is that the admin isn't stupid enough to crash the propped-up economy over petty bullshit. But knowing how they behave, well.
> They don't really care about tricky things like 'legality' or 'the court of law', they could just force everyone to stop interacting with them, raid their offices and steal all their shit.
This is criticism that I would use to describe countries like China and Russia, and many other poorer ones. Were the Trump administration to do this, it would be unequivocal evidence that we are dealing with an unlawful insurgent government. I doubt it will happen, but I'm often wrong.
This is all stuff they've already done in the past few months alone. I think it's time for people to take their heads out of the sand and look what's been happening around them.
I'm of the opinion that Anthropic's "moral" stances are bullshit: not particularly coherent when you dig deep, and more about branding. If so, this is grade-A marketing.
They want to present themselves as moral. What better endorsement than being rejected by the US military under Trump? You get the people who hate Trump and the people who hate the military in one swoop.
At the same time, it's kind of a non-story. Anthropic says it doesn't want its products used in certain ways; the US military says fine, you can't be part of the project where we are going to make the AI do those things. Isn't that a win for both sides? What's the problem?
It would be like someone part of a boycott movement being surprised the company they are boycotting doesn't want to hire them.
Everyone close to Anthropic leadership has claimed they’re the real deal and it’s not a stunt. I don’t think it’s bull. They are trying to find a reasonable middle ground and settled on some red lines they won’t cross.
Could this escalate to the point that Anthropic exits the US and sets up shop elsewhere? Or would the company cease to exist before it got to that point?
It gets so much money, compute, and US user data. It won’t be allowed to operate as-is as a foreign entity.
Best-case scenario, it will get TikTok-ed; otherwise it will become the real national security risk.
If the exit did happen: well, since the US has a near-monopoly on compute on this planet for the next 2-3 years at least, the company, even if they took the researchers with them, would certainly cease to exist as it exists now.
Just off the top of my head, Canada, Switzerland, Iceland, Norway, Denmark, and Sweden would all seem to be pretty good counterexamples to your assertion.
Would the US government attempt to apply export controls on the technology and prohibit this? I'm sure Lockheed Martin couldn't decide to move their proprietary technology to another country.
Hegseth's statement already leans towards accusations of treason and duplicity, I would say people trying to export the company would face significant risk of arrest or worse.
This letter is a public part of the negotiation process. It shouldn't be surprising that they are primarily using arguments that are, at least on the face, "patriotic".
Remember when A16Z and a bunch of other muppets insisted they had to back Trump because Biden was too hostile to private companies, especially AI ones? Incredible.
This is what real leadership looks like. Not the silence and complicity that you see from big tech, who regularly bend the knee and bestow bribes and gifts onto the Trump administration.
How long can we push this narrative? It was a terrible situation and I can't imagine the minutes of complete fear she must have felt. I pray for her family. But to then draw the conclusion that this is evidence we are in some sort of fascist decline, because of this incident, takes away from the innocent loss of life. And it greatly exaggerates the skill and aptitude of the killer. People spew the fascist narrative every chance they get. I'm sure most of us who like strawberries will be picking strawberries come June.
Yes I understand. And given the heaviness of the situation I could have chosen a better way to phrase that I completely disagree with it being evidence that we're on the road to fascism.
You have an unrealistic picture of what fascism looks like. Most people got to pick strawberries throughout the Spanish, Italian, and even German fascist periods.
The problem isn't that fascism will kill all of us, but that you will not get to choose. If the regime decides that your city, your company, or your friends are an enemy, they will destroy you, and if your fellow strawberry-pickers bother to read about it in the paper they'll be told that you were an anti-government radical who had it coming.
Title is off: "Statement on the comments from Secretary of War Pete Hegseth"
This is another statement, to their customers about Hegseth's social post, but perhaps resulting in further escalation because you know the other side doesn't like having their weaknesses pointed out.
This applies to basically every military and company in every country in all of human history. Nearly every single other country tries to spy on every single other country, including on the US. That's just how these things go.
Doesn't NSA have a backdoor to all these companies by default? I could have sworn I read somewhere years ago that the government demands a backdoor to all US companies if they can't get in on their own.
1) The US gov generally does have close partnerships with most large-scale, mature tech companies. Sometimes this is just a division dedicated to handling their requests; often it’s a special portal or API they can use to “lawfully” grab information for their investigations. Oftentimes these function somewhat like backdoors. Anthropic is large, but not mature; additional changes must still take place for “backdoor”-style partnerships to be effected.
2) The NSA can pretty much use any computer system they set their eyes on - famously including computers that were never connected to the internet secured in the middle of a mountain (Stuxnet). If they wanted to secretly utilize the Claude API without Claude finding out, that is within their capabilities. Google had to encrypt all their internal datacenter traffic to try to prevent the NSA from logging all their server-to-server traffic, after mistakenly thinking their internal networks were secure enough not to need that.
3) This isn’t about being “able” to do whatever the administration wants. This is the administration demonstrating the consequences of perceived insubordination to make other companies think twice about ever trying to limit use of corporate technology.
On point 3, are you saying this will dissuade other companies from taking Anthropic's stance? Somehow I actually thought this would set precedent for how to actually stand up to gov. Quite interesting how we see the same situation and come up with totally different conclusions.
The NSA legally isn't allowed to spy on US citizens directly, because the NSA is a US military organization and the Posse Comitatus Act prohibits the US military from being used as a domestic police force.
It's one of the hidden and forgotten revelations from the Snowden leaks: he showed that the NSA had a bunch of filters in their top-secret classified systems to filter out communications from US citizens. Those filters exist because of Posse Comitatus.
How does the filter work? Identity-first? As in, do they access the data/activity first and stop when they realize the person is a citizen? Otherwise, how do they approach it?
A backdoor is a completely different thing when it comes to an AI company, as compared to a social media company. Not really even sure what it would mean when it comes to doing inference on an LLM. Having access to the weights, training data and inference engine?
The model of Claude the DoD is asking for more than likely doesn't even exist in a production ready form. The post-training would have to be completely different for the model the DoD is asking for.
I have worked at a number of software companies that would be "interesting" to get access to, with enough intimate information to know if there was a super-sekret backdoor. If "all US companies" had to comply .. well .. I guess I was really lucky to work for those that somehow fell through the cracks.
[1]: https://news.ycombinator.com/item?id=47174423
[2]: https://news.ycombinator.com/item?id=47149908
You, too, are driven by money. Yet I’m certain you maintain a set of principles and values. Let’s keep the discussion productive yeah?
Do you just expect Anthropic to totally blow up all bridges to the government? What do you actually want them to do?
Reading your comment history I'm not sure they could do anything to satisfy you.
Their "moat" is nothing more than momentum at this point. They are AOL on an accelerated timeline.
Personally, I wish that the political alignment I favour was as big-tent as Donald Trump's administration is. I think he can get Zohran Mamdani in the room, say "it's fine; say you think I'm a fascist," and then nonetheless get what he wants. But it just so happens that the other side isn't like that. Such is life. We lose, and our allies dwindle, since anyone who would make an overture to us, we punish for the sin of not having been born a steadfast believer.
Our ideals are "If you weren't born supporting this cause, we will punish you for joining it as if you were an opponent". I don't think that's the path to getting what one wants.
I'm not sure how accurate this sentiment is. Your desire is to embrace your enemy without resolving the differences, and get what you want. It's not clear here if you're advocating compromise and negotiation, or just embracing for the sake of embracing while just doing what you wanted all along.
And evaluating Trump's actions against this sentiment doesn't support the negotiation-and-compromise interpretation. Given the situation with tariffs and ICE enforcement, there is no indication of negotiation or compromise, only complete fealty/domination.
So as grandiose and noble as your sentiment is, Donald Trump is hardly the epitome of it, as you seem to suggest.
I think Donald Trump has pretty much let Zohran Mamdani operate without applying the kind of political pressure he has applied to other people, notably his predecessor Eric Adams. Also, I think saying "let people be your allies when they take your position" is less "grandiose and noble" than demanding someone agree on all counts before you will accept any political alignment. But it's fine if everyone else disagrees. It's possible there really just isn't a political group which will accept my views and while that's unfortunate because it means I can't get all that I want, I think it'll be okay.
One could reasonably argue that the meta-position is to either join the Republicans full-bore (somewhat unavailable to me) or to at least play the purity test game solely because that's the only way to have any influence on outcomes. If it comes to that, I'll do it.
And I wouldn't take the case of Trump and Mamdani as the exemplar of Trump's overall behavior towards opponents. The amount of evidence to the contrary is overwhelming.
Political witch hunts, women and minorities forced out of the military, and kicking out all the allied countries that used to be in the tent with us?
Bullshit of the finest caliber.
If the government really wants to, it could try building its "Skynet" on open-source Chinese models... which would be deeply ironic.
Do you see the problem here? I genuinely don't think we would've won WWII if these people were running things back then.
What if the side that did Operation Paperclip and is currently champing at the bit to impose Total Surveillance on its own citizenry maybe isn't The Good Guys?
https://en.wikipedia.org/wiki/Project_Maven
The Epstein adjacent crew (Palantir) took over. Palantir was using Anthropic. No one could possibly have foreseen this. /s
It’s clear that this is mostly a glorified loyalty test over a practical ask by the administration. Strangely reminiscent of Soviet or Chinese policies where being agreeable to authority was more important than providing value to the state.
If we're going by Occam's razor: it's Friday so Pete probably started drinking ~10:30-11am.
But then again, the sophistication of their strategery might also have a negative correlation with Hegseth's BAC.
One new thing I know about the leading AI companies that aren't Anthropic (i.e. OpenAI, Google, Grok, etc.) is that they knowingly support the use of their tools for domestic mass surveillance and in fully autonomous weapon systems.
Mass surveillance of people constitutes a violation of fundamental rights. The red line is in the wrong place.
“To the best of our knowledge, these exceptions have not affected a single government mission to date.”
I had assumed these exceptions (on domestic surveillance and autonomous drones) were more than presuppositions.
I've been seeing it a lot lately, but don't remember ever really seeing it before. Do members of the military prefer this title?
The reason that no one involved in the game's development objected to the word "warfighter" is that the U.S. Defense Department has used "warfighter" as a standard term for military personnel since the late 1980s or early 1990s; see, e.g., Earl L. Wiener et al., eds., Human Factors in Aviation (1988).
We have soldiers, sailors, airmen/airwomen, Marines (who really do not like being called soldiers), Coast Guardsmen/Guardswomen, and now the Space Force. Granted, I do not know why "service member" did not catch on. Perhaps because "warfighter" is a bit shorter.
https://en.wiktionary.org/w/index.php?title=warfighter&actio...
The term dates back decades.
https://trends.google.com/trends/explore?q=warfighter&date=a... has videogame-related spikes, but doesn't show any recent increase.
edit: To be clear, Hegseth didn't create it; he has merely popularized its use recently, e.g. in his speech at Quantico last September.
The term—and its use in the now-Department of War—dates back to the late 80s.
It's also incoherent that the DoD/DoW was threatening either to invoke the Defense Production Act OR to classify them as a "supply chain risk". Either they're so uniquely critical to national defense that they can be compelled to serve, or they're such a severe liability that they have to be blacklisted for anyone in the DoD apparatus (including the many subcontractors) to use. They can't be both.
How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms are suddenly questioned months later and can be used in such devastating ways against them? Setting the morals/principles aside, how does it make for a rational business decision to work with a counterparty that behaves this way?
Here, Hegseth has simply made a social media post. He did not publish any official investigation which led to the report. He did not explain what legal power would permit him to impose all the restrictions the post claims to impose. There is not, five hours later, any order on an official government website about it. So we have a real question. If a Cabinet secretary posts "I am directing the Department of War to designate...", does that in and of itself perform the designation, or is it simply an informal notice that the Department of Fascist Neologisms will perform the designation soon?
It's an honest question, by the way - not trying to throw any gotchas.
Just trying to understand whether companies or people that don't orbit defense contracting are still free to work with Anthropic, or risk being sanctioned too.
Democracy isn't dead folks, but it takes more work than usual.
So yeah, extremely few have.
[1] https://en.wikipedia.org/wiki/Learning_Resources,_Inc._v._Tr...
Perhaps it’s time or even past time to think of ways of screwing up their training sets.
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
In practice, this means:
If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected. If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
If the legal system works as intended, the blast radius isn't too big here, and it's something Anthropic will accept even if it hurts them. Maybe they even win and get the supply chain risk designation lifted. But I have zero faith that the legal system will make a difference here. It all comes down to how far the administration wants to go in imposing its will.
Bleak.
GCP and AWS cannot use Claude to build anything part of a DoD contract, but they do not need to deny Anthropic access to compute itself.
Surely that would cover both buying things from and selling things to Anthropic.
Sure, there will be a court battle, but I don't think these companies want to take that chance. They'll capitulate after the lawyers realize that option is on the table.
Hopefully their lawyers read HN comments so they can negotiate with your deeper understanding of the legal landscape.
Nuclear weapons technology is restricted under very specific legislative authority; where is the corresponding authority that could be selectively applied to a particular vendor's AI models or services?
"Zee rockets go up! Who cares vhere zey come down? Zat's not my department" says Wernher von Braun.
/In theory./
In practice, if your biggest customer tells you to drop Anthropic, you listen to them.
I don’t really expect this to last but if it does I will happily continue to offer this kudos on an indefinite basis.
> Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
https://news.ycombinator.com/item?id=47188698
Fuck this authoritarian bullshit.
You're absolutely right to point that out -- thank you for catching it. I made a mistake in my previous response and that last act appears to have caused civilian casualties. Let me take a closer look and clarify the correct details for you.
(Will leave you to imagine the bullseye emoji, etc.)
Perhaps they should've found their spine a year earlier; right now their only hope is that the admin isn't stupid enough to crash the propped-up economy over petty bullshit. But knowing how they behave, well.
This is criticism that I would use to describe countries like China and Russia, and many other poorer ones. Were the Trump administration to do this, it would be unequivocal evidence that we are dealing with an unlawful insurgent government. I doubt it will happen, but I'm often wrong.
They want to present themselves as moral. What better endorsement than being rejected by the US military under Trump? You get the people who hate Trump and the people who hate the military in one swoop.
At the same time it's kind of a non-story. Anthropic says it doesn't want its products used in certain ways; the US military says fine, you can't be part of the project where we are going to make the AI do those things. Isn't that a win for both sides? What's the problem?
It would be like someone part of a boycott movement being surprised the company they are boycotting doesn't want to hire them.
Think. The problem is that being branded a "supply chain risk" prohibits vast chunks of the US corporate landscape from doing business with Anthropic.
The problem is that the government is attempting to destroy a company rather than simply terminate their contract.
Best-case scenario, it will get TikTok-ed; otherwise it will become the real national security risk.
Were the exit to happen, well, since the US has a monopoly on compute on this planet for the next 2-3 years at least, the company, even though they would take the researchers with them, would certainly cease to exist as it exists now.
Hegseth's statement already leans towards accusations of treason and duplicity, I would say people trying to export the company would face significant risk of arrest or worse.
That’s okay! The use of autonomous weapons is only risky for the civilians of the country you’re destabilizing this week!
It’s the Library of Alexandria all over again.
The problem isn't that fascism will kill all of us, but that you will not get to choose. If the regime decides that your city, your company, or your friends are an enemy, they will destroy you, and if your fellow strawberry-pickers bother to read about it in the paper they'll be told that you were an anti-government radical who had it coming.
This is another statement to their customers, about Hegseth's social post, but it may well result in further escalation, because you know the other side doesn't like having its weaknesses pointed out.
1) The US gov generally does have close partnerships with most large-scale, mature tech companies. Sometimes this is just a division dedicated to handling their requests; often it's a special portal or API they can use to "lawfully" grab information for their investigations. Oftentimes these function somewhat like backdoors. Anthropic is large, but not mature. Additional changes must still take place for "backdoor"-style partnerships to be effected.
2) The NSA can pretty much use any computer system they set their eyes on - famously including computers that were never connected to the internet, secured in the middle of a mountain (Stuxnet). If they wanted to secretly utilize the Claude API without Anthropic finding out, that is within their capabilities. Google had to encrypt all their internal datacenter traffic to try to prevent the NSA from logging their server-to-server traffic, after mistakenly thinking their internal networks were secure enough not to need it.
3) This isn’t about being “able” to do whatever the administration wants. This is the administration demonstrating the consequences of perceived insubordination to make other companies think twice about ever trying to limit use of corporate technology.
On point 3, are you saying this will dissuade other companies from taking Anthropic's stance? Somehow I actually thought this would set precedent for how to actually stand up to gov. Quite interesting how we see the same situation and come up with totally different conclusions.
It's one of the hidden and forgotten revelations about the Snowden leaks, where he showed that the NSA had a bunch of filters in their top-secret classified systems to filter out communications from US citizens. Those filters exist because of Posse Comitatus.