There's always this comment saying that it's useless to even try to govern or resist the advancement, development, or use of weapons capable of indiscriminate killing.
If the world actually worked like they believe it does, if restraint were just not possible, the world would have been destroyed at least 3 documented times over.
There is only self-regulation, ultimately, at the top. I think it's still progress to see these groups specifically call out their moral hesitations, even if it doesn't go anywhere - it gives people ground to realize that others share their concerns. All movements, all progress, start from people putting their stance out there and getting a conversation going around the topic; that builds mindshare and eventually a demand for change.
I understand the vision, but how does this work on a global scale? E.g., American employees refuse to build this, but China's don't.
Edit: I originally ended with "What would have happened if Germany had a nuclear bomb and America didn't?", but I think it distracted from the point I was trying to make so moving this to an edit. I'm not trying to ask "is the US the bad guy". I'm trying to ask how to balance personal anti war sentiments with the realities of the world (specifically in this case keeping up in an arms race).
>American employees refuse to build this, but China's don't.
How about you articulate the threat from an AI powered China to people outside of AI powered China and discuss potential methods to counter that, instead of insisting capabilities be developed just in case.
>is the US the bad guy
Yes
>I'm trying to ask how to balance personal anti war sentiments with the realities of the world
Insist on open information, never surrender consent willingly and demand justification for everything. As always.
This is not answering the question... and HN ain't US-only.
You can say the same for any other country... What if Japanese employees refuse, but American ones want it anyway? What if Chinese employees refuse, but Russian employees want it anyway?
The implications are still the same -- social, cultural, jurisdictional, national-interest, and company-interest lines don't share the same boundaries and don't align on their priorities.
> American employees refuse to build this, but China's don't.
It's not American employees vs. China employees. No need to villainize China at every opportunity. Most Chinese employees are more similar to American employees than you think.
It's {top candidates who have their pick of employers} who have the luxury to refuse to build this.
A mid-tier dev who can't land a job at any of the top AI companies, can code with Cursor, and is trying to pay their rent or medical bills will absolutely build AI for the military in return for having that rent paid.
This is regardless of whether it is in the US or China.
Is there any reason to think that autonomous weapons are a critical strategic capability? It's hard to see what an unpiloted drone can do that a remotely piloted drone can't, other than perhaps human rights violations.
Although in the context of the parent comment, the majority of Googlers probably aren't working on anything directly related to controversial topics; instead they are probably working on mundane, non-external-facing projects like "how do I migrate my libraries from this deprecated dependency to this other shiny new thing".
Why is there any controversy about defending one's nation being "good" or "bad"?
I cannot believe what I am reading here, and how the single comment supporting defending one's country is so heavily downvoted. Has Qatar poisoned Western online communities such that all defence of the United States is considered taboo? I don't even live in the US and I am frightened by what I see here.
The controversy isn't about defending one's country, it's about you and the parent comment author assuming what this is all about without reading the article.
The core of the issue is the autonomous use of AI in mass surveillance of Americans and the autonomous use of AI in automated weapons that make kill decisions. Anthropic is perfectly fine with working with the War Department and "defending one's nation".
But they are not okay with their AI being used to make a mockery of the Fourth Amendment and to make automated kill/no-kill decisions about actual human lives.
Oh, I believe it's important to defend the country, but not because it's a popular opinion. I dislike any argument that treats truth as a matter of consensus.
The resistance goes out the window the first time an American is gunned down by an autonomous system. They should do whatever possible to prevent that outcome.
Am I the only one who remembers the prime directive of Google? Much easier to understand than "organizing the world's information" etc. It was simpler.
This gets a giant eye roll from me. Are you really so naive that you thought you could work on AI for a giant tech company, creating software capable of finding deep patterns in massive amounts of data... and it wasn't going to be used by the defense/intelligence industry? If you are so against the US government, and you are working for ANY big tech company, you are aiding the intelligence and defense industry. Government uses AWS and Azure. Intelligence agencies use the data and tools of Meta / Google / Apple / etc.
Google employees must think this is pre-2024. The employer has the power and doesn't mind laying off people who don't toe the company line, and all of the CEOs bend over and bribe the President - i.e., "settling" frivolous lawsuits brought by Trump himself over "censorship" when he was out of office.
I think a lot of software companies are going to learn just how much employee power remains tomorrow, in the very likely event that the Pentagon issues an order purporting to ban all defense contractors from using Claude.
Have we already forgotten about this? [0] Where was the open letter then?
Both companies (Google, OpenAI [0]) have defense contracts. At this point, the best course of action is to leave Google and OpenAI if you disagree with that (they won't).

[0] https://www.theguardian.com/technology/2025/jun/17/openai-mi...
Piggybacking on deeply integrated information and connectivity within society that was marketed and adopted under the guise of trust and an ethos of not being evil is pathetic.
Build, train, develop and maintain an AI for military if needed. When a government is scared of individuals they've clearly lost their edge.
It is when "defense" means invasion and subjugation of other countries. All countries pose their military operations as "defense." Inquiring minds should ask whether a country flanked by two oceans, with two pacified neighbors, has any real threats, or merely opportunities for cheap labor, market access, and mineral rights abroad.
This has been going on for a very long time (read what Smedley Butler said in "War is a Racket"), but after the Iraq War, the credibility of the US should be somewhere in hell.
I remember they successfully got Google out of a military contract in the first admin (and were briefly vilified by the right for it). That's not going to work now. Workers have a lot less power, and the CEO is buddies with Trump.
As the article says, the workers didn't petition the CEO, they petitioned the head of Google AI who's already expressed solidarity with Anthropic. If they can convince Jeff Dean, I don't think Sundar necessarily gets a say; it's a lot easier to stick your head in the sand and ignore things than to fire one of your most widely respected engineers because he won't help the Pentagon build Terminator robots.
> it's a lot easier to stick your head in the sand and ignore things than to fire one of your most widely respected engineers because he won't help the Pentagon build Terminator robots.
Wouldn't it be more like he would leave on his own and the company would keep moving along? Why would they fire him?
I mean, right. Why would they fire him? The Pentagon isn't demanding some concrete technical action that Jeff Dean has to personally perform or could personally obstruct, so it wouldn't make any sense. That's why I don't think Google executives can realistically stop him from announcing a similar policy if he wants to.
My one concern in this whole thing is that if these slightly-less-benevolent, but still somewhat moral, companies don't engage, we'll be left with companies like OAI and xAI engaging, and you just know that's not going to make things better for anyone.
> If the world actually worked like they believe it does, if restraint were just not possible, the world would have been destroyed at least 3 documented times over.

Don't listen to them.
Also, Anthropic didn't actually refuse to work on all military stuff. They have some conditions, which isn't the same thing.
your opinion is defense contracts are bad
my opinion is defense contracts are good
who is correct? probably me since 99.9% of Googlers won’t leave over this
Probably.
https://xkcd.com/1170/
This is just pigslop masquerading as a moral stand.
What happened to the OG Google that cared about users, prioritized honest search, fast performance, and didn't murder pages with ads?
Don't be evil.
> When a government is scared of individuals they've clearly lost their edge.

Oh, wait...