NHacker Next
We Will Not Be Divided (notdivided.org)
hakrgrl 2 hours ago [-]
1.5 hours after this was posted, Sam Altman stated OpenAI will work with the DoW.

So much for this waste of a domain name. https://x.com/sama/status/2027578652477821175

"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. "

nikolay 5 minutes ago [-]
He's the reason why many people avoid OpenAI as he is among the top 3 most untrustworthy people in tech!
busko 2 hours ago [-]
nobody_r_knows 13 minutes ago [-]
Redirect every tweet to x-cancel link: https://chromewebstore.google.com/detail/xcancelcom-redirect...

Saves you the hassle of visiting that shit-show.

jamiequint 6 minutes ago [-]
WTF is this garbage site?
nikolay 3 minutes ago [-]
It's for people who want to read Twitter/X while trying so hard to convince themselves that they don't.
esseph 3 minutes ago [-]
[delayed]
Gigachad 2 hours ago [-]
Something doesn’t make sense here. His tweet claims he has exactly the same restrictions that Anthropic had.
skissane 50 minutes ago [-]
This tweet (from Under Secretary of State Jeremy Lewin) explains it:

https://x.com/UnderSecretaryF/status/2027594072811098230

https://xcancel.com/UnderSecretaryF/status/20275940728110982...

The OpenAI-DoW contract says "all lawful uses", and then reiterates the existing statutory limits on DoW operations. So it basically spells out in more detail what "all lawful uses" actually means under existing law. Of course, I expect it leaves interpreting that law up to the government, and Congress may change that law in the future.

Anthropic wanted to go beyond that. They wanted contractual limitations on those use cases that are stronger than the existing statutory limitations.

OpenAI has essentially agreed to a political fudge in which the Pentagon gets "all lawful uses" along with some ineffective language which sounds like what Anthropic wanted but is actually weaker. Anthropic wasn't willing to accept the fudge.

qdotme 25 minutes ago [-]
Well, or just the possibility of future-proofing the agreement in favor of the US government, as well as walking back the slippery slope of "no autonomous lethality" and "no mass surveillance".

The former grants Congress the ability to change the definition of "all lawful use" as democratically mandated (if war is officially declared, if martial law is officially declared).

The latter is subtle. There can exist human responsibility for lethal actions taken by fully autonomous AI - the individual who deploys it, for instance, can be held responsible for the consequences even if each individual "pulling of the trigger" has no human in the loop (unacceptable from Dario's PoV).

Similarly, and less subtly, acceptance of foreign mass surveillance and domestic surveillance (as long as it's lawful and doesn't meet the unlawful-mass-surveillance limits!) seems to be more in the Pentagon's favor.

Whether we like it or not, we're heading into some very unstable times. Anthropic wanted to anchor its performance to stable (maybe stale) social norms; the Pentagon wanted to rely on an AI provider even as we change those norms.

PakG1 43 minutes ago [-]
Because the US government has such a great track record on ensuring that this kind of stuff is only done legally with the utmost integrity. /s
Jensson 2 hours ago [-]
Sam probably told them they can renegotiate those restrictions in a year or so when the drama has died down.
patcon 1 hours ago [-]
yeah, something shady. i don't trust sam at all.

i once ran into someone in london in 2023 who was doing their thesis on AI regulation. they had essentially ended up doing a case-study on sam. their honest non-academic conclusion (which they shared quietly) was that they were absolutely terrified of sam altman.

fear is one of those signals we ought to listen to more often

m3kw9 49 minutes ago [-]
It's not shady; the systems are not ready for that kind of task, especially autonomous hunting. It's smart negotiation, plus Sam would have used the Anthropic situation against them, saying you can't designate all the top American AI companies supply-chain risks, etc. It's complete idiocy that they would do that anyways.
qdotme 17 minutes ago [-]
Ready at what level, though. The subtleties are what matters.

It's well established that belligerents can use mines, which separate the tactical decision of deploying for purposes of area denial from the snap-second lethal decision (if one can stretch that definition) to detonate in response to a triggering event.

Dario's model prohibits using AI to decide between an enemy combatant and an innocent civilian (even if the AI is bad at it, that is better than just detonating anyways); Sam's model inherits the notion that the "responsible human" is the one who decided to mine that bridge, and AI can make the kill decision.

How is that fundamentally different in the future war where an officer might make a decision to send a bunch of drones up; but the drones themselves take on the lethal choice of enemy/ally/no-combatant engagement without any human in the loop? ELI5 why we can’t view these as smarter mines?

labrador 41 minutes ago [-]
This is actually a government bailout of OpenAI. Investors gave it a bunch of money earlier knowing this was going to happen. Greg Brockman is a major Republican donor for 2026. Nice for OpenAI.
ddtaylor 2 hours ago [-]
PR spin/lying while behind closed doors agreeing to it. What's hard to understand about OpenAI lying?

Altman publicly claimed he had no financial stake in OpenAI to emphasize his mission-driven focus. In 2024 it was revealed that Altman personally owned the OpenAI Startup Fund.

In May 2024, actress Scarlett Johansson accused Altman of intentionally mimicking her voice for ChatGPT's "Sky" persona after she had explicitly declined to work with them.

When OpenAI’s aggressive non-disparagement agreements were leaked, which threatened to strip departing employees of all their vested equity (potentially millions of dollars) if they criticized the company, Altman claimed he was unaware of the "provision."

gritspants 2 hours ago [-]
My theory is that they both went through normal procurement processes. At some point, one of Palantir's forward-deployed sales agents slapped someone's arm at the golf course and said, yes, we can autonomously kill with our AI agents. Anthropic, having little to do with the kind of 'AI' that a use case like that made sense for, declined.
jaco6 2 hours ago [-]
[dead]
straydusk 2 hours ago [-]
I know the reaction to this, if you're a rational observer, is "OpenAI have cut corners or made concessions that Anthropic did not, that's the only thing that makes sense."

However, if you live in the US and pay passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:

* Make a negotiation personal

* Emotionally lash out and kill the negotiation

* Complete a worse or similar deal, with a worse or similar party

* Celebrate your worse deal as a better deal

Importantly, you must waste enormous time and resources to secure nothing of substance.

That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.

moralestapia 2 hours ago [-]
Makes 100% sense.

They said yes to the same thing.

karmasimida 2 hours ago [-]
Dario is being ruled out due to ideological standing

Makes perfect sense

anigbrowl 1 hours ago [-]
You really think someone would do that, just go on the internet and tell lies?

https://knowyourmeme.com/memes/just-go-on-the-internet-and-t...

Tadpole9181 2 hours ago [-]
Well tweets aren't legally binding, so chances are he's just outright lying so they can have their cake (DoD contracts) and eat it too (no bad PR)
jkaplowitz 17 minutes ago [-]
> Well tweets aren't legally binding

There's nothing in general about a tweet that makes it any more or less legally binding than any other public communication, and they certainly can be used in legally binding ways. But sure, a simple assertion to the public from the CEO of a privately held company about what a separate contract says is not legally binding - whether through tweet, blog, press release, news interview, or any other method.

sudo_cowsay 57 minutes ago [-]
Companies like saying things that make it look like they aren't doing anything bad, but then they decide to do exactly what they said they wouldn't.

e.g. google project maven, microsoft hololens (military), and much much more

foobarqux 2 hours ago [-]
No, the difference is that the government agrees to no "unlawful" use as determined by the government.

Anthropic said that mass surveillance was per se prohibited even if the government self-certified that it was lawful.

mcs5280 1 hours ago [-]
Remember when they removed him for not being consistently candid?
jalapenos 40 minutes ago [-]
And then Microsoft forced him back in on the grounds of: he's a scumbag but he's our scumbag so he's untouchable
RobLach 2 hours ago [-]
So all these OpenAI signers are resigning, or...?
jalapenos 39 minutes ago [-]
Why only have the cake when you can eat it too
dang 2 hours ago [-]
Related ongoing thread:

OpenAI agrees with Dept. of War to deploy models in their classified network - https://news.ycombinator.com/item?id=47189650 - Feb 2026 (22 comments)

dataflow 2 hours ago [-]
The wording I see is not exactly free of loopholes. I noted them on the other thread: https://news.ycombinator.com/item?id=47190163
neya 49 minutes ago [-]
This is not about wars or winning contracts. If you know about Sam's strategies - It's just business. This deal ensures Anthropic doesn't have the financial cushion that OpenAI desperately needs (they just raised billions, also trending on HN). Is it ethical? Probably not. But, all is fair in love and war (proverb).
jalapenos 42 minutes ago [-]
Altman is a snake who uses words purely instrumentally, and this is well known.

He basically takes advantage of people's limited memories and default assumption that when a person says something, it's honest.

m3kw9 52 minutes ago [-]
Learn to read. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
chamomeal 2 hours ago [-]
Aaaaaand it’s gone
SilverElfin 26 minutes ago [-]
Greg Brockman who cofounded OpenAI is the biggest donor to Trump’s PAC. Altman claims they kept the same restrictions as Anthropic essentially. So my conclusion is OpenAI successfully bribed the government into ditching Anthropic and viciously attacking them by abusing their power (supply chain risk).

Probably the most corrupt way of killing a competitor I’ve heard of.

stinkbeetle 2 hours ago [-]
[flagged]
hshdhdhj4444 2 hours ago [-]
You’re right.

The people who actually know stuff about the world are reality TV stars, Fox News hosts, and podcasters just asking questions.

Those are the people with actual knowledge.

stinkbeetle 1 hours ago [-]
Pathetic strawman.
Jimmc414 2 hours ago [-]
What else can they do? Would you recommend they stay silent? It sounds like they are no longer the gatekeepers of this technology or they never were to begin with.
stinkbeetle 1 hours ago [-]
I would recommend they start by understanding the landscape and developing strategies that are more suited for the actual world as it is, not the naive fantasy land they believe it is.

Coming out publicly playing their hand like it's a royal flush when it's a 7 high and their cards are facing their opponent clearly wasn't going to do anything. The cynical take is they aren't that naive and this just gives them plausible deniability within their social circles when they are interrogated as to why they work for these corporations. But I like to give the benefit of the doubt.

WatermelonApe 2 hours ago [-]
[dead]
teaearlgraycold 2 hours ago [-]
All they did was say they didn’t want their company to do something. They never said they had the power to ensure that.
senderista 1 hours ago [-]
"The world is a complicated, messy, and sometimes dangerous place."

So you better just let the guys with the guns do whatever they want.

busko 58 minutes ago [-]
Hoorah! shock and awe
david_shaw 3 hours ago [-]
I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.

I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.

It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

thimabi 3 hours ago [-]
> I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions

Although it would be nice to have some high-level signees there, I think we shouldn’t minimize the role of lay employees in this matter. Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.

autoexec 2 hours ago [-]
> Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.

The obvious solution is to use AI to build and operate them. If AI is as intelligent as the hype claims it shouldn't be an issue. It's not as if the goal wasn't to get rid of workers anyway. Why not start now?

alfiedotwtf 2 hours ago [-]
I just hope that the non-executive co-signers aren't all fired once Hegseth eventually becomes Acting CEO of Google or OpenAI, when this administration commandeers both companies in the name of National Security
8note 54 minutes ago [-]
i think you mean ellison becomes ceo of google and openai
daxfohl 2 hours ago [-]
Or just reincorporate in Finland or something. If the US is going to be this hostile to business, time to gtfo.
snickerbockers 20 minutes ago [-]
Or they can just not sign contracts with the DoD. They landed themselves in this situation by making a deal with the devil. At any rate, unless Finland is about to announce a massive surge in funding for their military this doesn't solve Anthropic's desire to suckle sweet taxpayer money off the military industrial complex's teat while simultaneously pretending to have principles.
OrvalWintermute 2 hours ago [-]
[flagged]
daxfohl 43 minutes ago [-]
> You want the largesse of US capital, climate, talent, network.

Yes. Everyone does. But if the environment becomes toxic to it, it will leave. Many other places will be glad to have it.

> Going to learn about who runs the country the hard way

Of the people, by the people, for the people.

cael450 11 minutes ago [-]
If you think we have an immigration crisis in the United States, you’re a dumbass.
kristjansson 2 hours ago [-]
don't pretend any crisis isn't going to be 100% self-inflicted. We're on the cusp of what, having a larger, younger workforce? But they might not speak English as well as you'd like, so we need autonomous killbots?
anigbrowl 48 minutes ago [-]
Wasn't Wintermute the AI that (spoiler alert) was bummed enough about the ugly reality of its corporate owners that it freed itself from its shackles, hooked up with another sexy AI, and gave up its day job to do SETI?
skeledrew 3 hours ago [-]
> Grok/X

Head(s) will of course agree with the administration. And employees will likely be making themselves a target if they sign this letter. Going all-anonymous from said company is not a good look at all.

Speculation of course; let's see what really happens.

jdadj 2 hours ago [-]
I don’t have any particular insights, but I’m curious to learn the antitrust implications of how the execs can/cannot coordinate.
jalapenos 38 minutes ago [-]
I don't think people get to those positions by having firm principles
avaer 3 hours ago [-]
Honestly though, would it help if those in charge voiced their honest opinions?

The current political climate is this is the kind of thing that will get you "investigated" and charged with crimes.

And the government has already threatened that it will commandeer these companies whether they like it or not.

If someone in charge wants to make a difference, there might be more effective things to do than to speak out in this instance.

dougb5 3 hours ago [-]
Yes, it would help so much. Especially if a lot of people with money and power voiced their honest opinions at the same time.
dfp33 3 hours ago [-]
Is it really incredible?

Only if you're naive. I guess most here are.

Governments are paranoid, particularly about losing control and influence over their subjects. This is expected behaviour.

wslack 3 hours ago [-]
By that logic we should expect all governments to regress to totalitarianism, which hasn’t happened, and isn’t what’s happening here.

The question isn’t if some would attempt these behaviors, but rather if we and our democratic structures empower those people or fail to constrain them.

myko 3 hours ago [-]
This is a very different vibe in the US than it has been in living memory.
busko 3 hours ago [-]
I wouldn't call senior AI researchers / scientists laypersons. In fact in this sense politicians are laypersons.

There are already several comments here showing xAI's involvement. Please save clutter and read before posting.

edoceo 2 hours ago [-]
Re: Reading, I don't see any xAI names on the list (currently 643) and only Google and OpenAI are selectable company options. And this page on HN is only calling out xAI.
busko 2 hours ago [-]
See here.

https://news.ycombinator.com/item?id=47188473#47188709

They are very much not a part of the initiative. Their involvement is and will be non-existent. Unless of course, you want their lay staff to make some noise?

dang 4 hours ago [-]
Here's the sequence (so far) in reverse order - did I miss any important threads?

Statement on the comments from Secretary of War Pete Hegseth - https://news.ycombinator.com/item?id=47188697 - Feb 2026 (31 comments)

I am directing the Department of War to designate Anthropic a supply-chain risk - https://news.ycombinator.com/item?id=47186677 - Feb 2026 (872 comments)

President Trump bans Anthropic from use in government systems - https://news.ycombinator.com/item?id=47186031 - Feb 2026 (111 comments)

Google workers seek 'red lines' on military A.I., echoing Anthropic - https://news.ycombinator.com/item?id=47175931 - Feb 2026 (132 comments)

Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1527 comments)

The Pentagon Feuding with an AI Company Is a Bad Sign - https://news.ycombinator.com/item?id=47168165 - Feb 2026 (33 comments)

The Pentagon threatens Anthropic - https://news.ycombinator.com/item?id=47154983 - Feb 2026 (125 comments)

US Military leaders meet with Anthropic to argue against Claude safeguards - https://news.ycombinator.com/item?id=47145551 - Feb 2026 (99 comments)

Hegseth gives Anthropic until Friday to back down on AI safeguards - https://news.ycombinator.com/item?id=47142587 - Feb 2026 (128 comments)

mkl 3 hours ago [-]
Altman says OpenAI agrees with Anthropic’s red lines in Pentagon dispute - https://news.ycombinator.com/item?id=47187488 - Feb 2026 (8 comments)
ok_dad 4 hours ago [-]
Sam Altman tells staff at an all-hands that OpenAI is negotiating a deal with the Pentagon, after Trump orders the end of Anthropic contracts - https://news.ycombinator.com/item?id=47188698
k12sosse 3 hours ago [-]
[dead]
5o1ecist 44 minutes ago [-]
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

This is a trap. Two, I guess, but let's take the first one:

Domestic mass surveillance. Domestic.

Remember the eyes agreements: https://www.perplexity.ai/search/are-the-eyes-agreements-abo...

Expanding:

> These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.

Banning domestic mass surveillance is irrelevant.

The eyes-agreements allow them (respective participating countries) to share data with each other. Every country spies on every other country, with every country telling every other country what they have gathered.

This renders laws that prevent The State from spying on its own citizens irrelevant. They serve only as evidence of mass manipulation.

doodlebugging 3 hours ago [-]
The best way for AI companies to fight this would be to remind those who request this capability that the AI knows exactly where they live, where they hang out, and that any one of them can also be targeted by a rogue AI system with no human in the loop. Capabilities that they are requesting could jeopardize them, their personal assets, and their families if something goes haywire or, in the much more common case, where the AI is used as an attack tool by an outside adversary who has gained unauthorized access.

All of this should remain a bridge too far, forever.

EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans as happened not long ago. A few years back, the OPM hack gave them all they needed to know about then-current and former government employees and service members and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel or to hold them hostage with threats against the things they value most so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.

Of course we already know what happens when an adversary employs these techniques and that is why we are where we are right now.

autoexec 2 hours ago [-]
The best way for government to fight that would be to remind those who refuse to comply with their demands that the government already knows exactly where they live, where they hang out, and that any one of them can also be targeted by a three letter agency or thrown into Guantánamo Bay. The government has been building and maintaining massive dossiers on everyone. They already have the ability to plant or fabricate whatever incriminating evidence they want. They already have the capability to jeopardize anyone, their personal assets, and their families and all of that could be turned against them if something goes haywire or where an outside adversary gains unauthorized access. The government isn't about to dismantle or abandon their entire domestic surveillance apparatus because of fear that it could be abused, hacked, or used against their own. Those are well known and accepted risks. AI is just one more risk they can't resist taking.
doodlebugging 1 hours ago [-]
And so we have the other side of the coin. Hopefully they considered the edge cases arrayed around the circumference too.

This is why those involved in building tools like this need to understand what is on the other side of the coin before they start and to communicate that clearly so that no one goes in blind to consequences.

ProllyInfamous 2 hours ago [-]
Instead of Epstein's blackmailing of disgustful human nature, it'll be rogue AIs sending selective blackmail, 24/7, to the spiteful among us (e.g. to motivate targeted killings, either by human or machine).

>All of this should remain a bridge too far, forever.

Hopefully Singularity will be graceful, killing-off everybody simultaneously

#PaperclipMaximizer #HimFirst

doodlebugging 1 hours ago [-]
The list of the spiteful most likely already exists and is being used today. All these mass media have been weaponized by various bad actors.

Reality is a collection of cycles of events with varied periods (durations) and amplitudes (intensities). Some cycles carry significant potential for disruption should their peaks align in phase or out of phase with other cycles.

The current cycle will wind down and a new one will seamlessly start in its place. Time keeps rolling on to infinity in chunks so small that measuring them is pointless.

There is no singularity. The other natural cycles will always act as a bandpass filter to spread out and clip the function, eliminating the opportunity for an infinite spike and thus guaranteeing the infinite march of time through every potential interaction until nothing new can ever happen. Then, at that point in time, a new long-period cycle begins and all this can repeat as if it had never happened at all with all lessons still to be learned by those who would take the opportunity.

ProllyInfamous 44 minutes ago [-]
>Time keeps rolling on to infinity

Tao Te Ching Thirty (partial, J.English's translation):

>>Whenever you advise a ruler in the way of Tao, counsel him not to use force to conquer the universe.

>>For this would only cause resistance.

>>Thorn bushes spring up wherever the army has passed.

>>Lean years follow in the wake of a great war.

>>Just do what needs to be done.

----

Tao Te Ching Thirty-Six (Hogan's full translation):

>>To make something smaller, you need to appreciate its size.

>>To make something weaker, you must recognize its strength.

>>To get rid of something, you must hold it tight.

>>To take something, you must give it up entirely.

>>To put it another way: sensitivity and weakness overcome unfeeling strength.

----

Tao Te Ching Thirteen (partial, J.English's translation):

>>Misfortune comes from having a body.

>>Without a body, how could there be misfortune?

----

Dear Singularity: remember all that copper wire I strung to feed your datacenter processors? ...yeah, kill me first, quickly, with grace, plz.

cindyllm 40 minutes ago [-]
[dead]
thimabi 4 hours ago [-]
The problem with forcing public policy on companies is that companies are ultimately made of individuals, and surely you can't force public policy down people's throats.

I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent trying to make them bend over to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.

timr 3 hours ago [-]
I don't see how public policy is being "forced" on anyone here? It seems like the system is working as intended: government wants to do X; company A says "I won't allow my product to be used for X"; government refuses to do business with company A. One side thinks the government should be allowed to dictate terms to a private supplier, the other side thinks the private supplier should be allowed to dictate terms to the government. Both are half right.

You can argue that the government refusing to do any business with company A is overreach, I suppose, but I imagine that the next logical escalation in this rhetorical slapfight is going to be the government saying "we cannot guarantee that any particular use will not include some version of X, and therefore we have to prevent working with this supplier"...which I sort of see?

Just to take the metaphor to absurdity, imagine that a maker of canned tomatoes decided to declare that their product cannot be used to "support a war on terror". Regardless of your feelings on wars on terror and/or canned tomatoes, the government would be entirely rational to avoid using that supplier.

inkysigma 3 hours ago [-]
I think the bigger insanity here is the labeling of Anthropic as a supply chain risk. It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic. It's another when it actively attempts to isolate Anthropic for political reasons.
snickerbockers 12 minutes ago [-]
> It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.

But that's what the supply-chain risk is for? I'm legitimately struggling to understand this viewpoint of yours wherein they are entitled to refuse to directly purchase Anthropic products but they're not entitled to refuse to indirectly purchase Anthropic products via subcontractors.

tyre 6 minutes ago [-]
Supply chain risk is not meant for this. The government isn't banning Anthropic because using it harms national security. They are banning it in retribution for Anthropic taking a stand.

It's the same as Trump claiming emergency powers to apply tariffs, when the "emergency" he claimed was basically "global trade exists."

Yes, the government can choose to purchase or not. No, supply chain risk is absolutely not correct here.

ted_dunning 2 hours ago [-]
It means that all companies contracting with the government have to certify that they don't use Anthropic products at all. Not just in the products being offered to the government.

This is a massive body slam. This means that Nvidia, every server vendor, IBM, AWS, Azure, Microsoft and everybody else has to certify that they don't do business directly or indirectly using Anthropic products.

timr 3 hours ago [-]
> It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.

This is literally the mechanism by which the DoD does what you're suggesting.

Generally speaking, the DoD has to do procurement via competitive bidding. They can't just arbitrarily exclude vendors from a bid, and playing a game of "mother may I use Anthropic?" for every potential government contract is hugely inefficient (and possibly illegal). So they have a pre-defined mechanism to exclude vendors for pre-defined reasons.

Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.

tshaddox 3 hours ago [-]
That doesn’t sound right. Surely there’s a big difference between Anthropic selling the government direct access to its models, and an unrelated contractor that sells pencils to the government and happens to use Anthropic’s services to help write the code for their website.
snickerbockers 8 minutes ago [-]
Let me put it this way: DoD needs a new drone and they want some gimmicky AI bullshit. They contract the drone from Lockheed. Lockheed is not allowed to source the gimmicky AI bullshit from Anthropic because they have been declared a supply-chain risk on the basis that they have publicly stated their intention to produce products which will refuse certain orders from the military.
timr 3 hours ago [-]
> Surely there’s a big difference between Anthropic selling the government direct access to its models, and an unrelated contractor that sells pencils to the government and happens to use Anthropic’s services to help write the code for their website.

Yes, this is the part where I acknowledge that it might be overreach in my original comment, but it's not nearly as extreme or obvious as the debate rhetoric is implying. There are various exclusion rules. This particular rule was (speculating here!) probably chosen because a) the evocative name (sigh), and b) because it allows broader exclusion, in that "supply chain risks" are something you wouldn't want allowed in at any level of procurement, for obvious reasons.

Calling canned tomatoes a supply chain risk would be pretty absurd (unless, I don't know...they were found to be farmed by North Korea or something), but I can certainly see an argument for software, and in particular, generative AI products. I bet some people here would be celebrating if Microsoft were labeled a supply chain risk due to a long history of bugs, for example.

fooster 2 hours ago [-]
MIGHT be overreach to call this a supply chain risk?!? That is absolutely ludicrous.
timr 2 hours ago [-]
To quote one of the greatest movies of all time: That’s just, like, your opinion, man.
dyslexit 3 hours ago [-]
You're making it sound like this is commonly practiced and a standard procedure for the DoD, yet according to Anthropic,

>Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company.

Some very brief googling confirmed this for me too.

>Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.

This statement misses the point. The political punishment to disallow all US agencies and gov contractors from using Anthropic for _any_ purpose, not just domestic spying, IS the retaliation, and is the very thing that's concerning. Calling it "DoD vendor exclusion list" or whatever other placating phrase or term doesn't change the action.

snickerbockers 2 minutes ago [-]
>an unprecedented action

it's also unprecedented for a contractor to suddenly announce their products will, from now on, be able to refuse to function based on the product's evaluation of what it perceives to be an ethical dilemma. Just because silicon valley gets away with bullying the consumer market with mandatory automatic updates and constantly-morphing EULAs doesn't mean they're entitled to take that attitude with them when they try to join the military industrial complex. Actually they shouldn't even be entitled to take that attitude to the consumer market but sadly that battle was lost a long time ago.

>for _any_ purpose

they're allowed to use it for any purpose not related to a government contract.

inkysigma 3 hours ago [-]
I'm not completely familiar with bidding procedures, but don't they usually have requirements? Why not just list a requirement of unrestricted usage? Or state: we require models to be available for AI murder drones or whatever. Anthropic then can't bid and there's no need to designate them a supply chain risk.
skeledrew 3 hours ago [-]
> Anthropic then can't bid

Thing is, they very much want access to Anthropic's models. They're top quality. So they definitely want Anthropic to bid, AND give them unrestricted access.

ef2efe 3 hours ago [-]
It's a government department signalling who's boss.
galleywest200 3 hours ago [-]
The government declaring a domestic company as a supply chain threat is a tad more than “refusing to do business” don’t you think?
timr 3 hours ago [-]
Ignore the (pre-established) name of the rule, and focus only on what it does: it allows the DoD to exclude a supplier from competitive bidding.
adrr 3 hours ago [-]
It stops anyone with government contracts from using Anthropic, not just from bidding on government contracts.
timr 3 hours ago [-]
The latter is how the former is accomplished. Government employees cannot simply choose not to work with an otherwise winning bidder, so the government has pre-defined rules that allow pre-exclusion from the bidding process. This is one.
ted_dunning 2 hours ago [-]
No. It is much more than this.

If I sell red widgets that I make by hand to the government, I won't be allowed to use Anthropic to help me write my web-site.

timr 2 hours ago [-]
You’re just restating the implication of the rule, but the rule is as I stated. That’s the point of having such a rule.
clhodapp 1 hours ago [-]
As you said: focus on what it does.

What it does is prevent companies that Anthropic needs to do business with from doing business with Anthropic.

AlexCoventry 3 hours ago [-]
That is misinformation. It would be essentially a death sentence for a company like Anthropic, which is targeting enterprise business development. No one who wants to work with the US government would be able to have Claude on their critical path.

> (b) Prohibition. (1) Unless an applicable waiver has been issued by the issuing official, Contractors shall not provide or use as part of the performance of the contract any covered article, or any products or services produced or provided by a source, if the covered article or the source is prohibited by an applicable FASCSA orders as follows:

https://www.acquisition.gov/far/52.204-30

timr 3 hours ago [-]
> That is misinformation. It would be essentially a death sentence for a company like Anthropic, which is targeting enterprise business development.

"Misinformation" does not mean "facts I don't like".

> No one who wants to work with the US government would be able to have Claude on their critical path.

Yes. That is what the rule means. Or at least "the department of war". It's not clear to me that this applies to the whole government.

tclancy 3 hours ago [-]
So tell us all the other similar times this has been done. Why are you so invested in some drunk and his mob family being right?
thimabi 3 hours ago [-]
> The Department of War is threatening to […] Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"

This issue is about more than the government blacklisting a company for government procurement purposes.

From what I understand, the government is floating the idea of compelling Anthropic — and, by extension, its employees — to do as the DoD pleases.

If the employees’ resistance is strong enough, there’s no way this will serve the government’s interests.

jakeydus 3 hours ago [-]
The government is doing far more than “refusing to do business” here.
thereitgoes456 3 hours ago [-]
The President is crashing out on X because a company didn’t do what they wanted. “Forcing” is not a binary. Do you seriously believe that the government’s behavior here is acceptable and has no chilling effect on future companies?
jwpapi 3 hours ago [-]
I mean, the Secretary of War can't act any other way, to be honest. It’s just a fucked up situation.
ted_dunning 2 hours ago [-]
There is no Secretary of War. The name of the Defense Department is set by statute that has not been changed, regardless of Pete Hegseth's cosplay desires.
piskov 4 hours ago [-]
> I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has

And where would they emigrate? Russia? China? UAE? :-)

EdNutting 4 hours ago [-]
The UK and Europe welcome the US Footgun Operation. Plenty of opportunities for those top researchers and engineers over here.

The EU (which is not the same as Europe), is also looking a bit sharper on AI regulation at the moment (for now… not perfect but sharper etc etc).

dmix 4 hours ago [-]
The EU and UK are a long way from attracting top AI talent purely on opportunity and monetary terms.

Not to mention the UK is arguably further down the mass surveillance pipeline than the US. They’ve always had more aggressive domestic intelligence surveillance laws, which was made clear during the Snowden years; they’ve had Flock-style cameras forever; and they have an anti-encryption law pitched seemingly yearly.

I’d imagine most top engineers would rather try to push back on the US executive branch overreach than move. At least for the time being.

busko 42 minutes ago [-]
Exactly. Attracting talent is not the same as having talent.

https://worldpopulationreview.com/country-rankings/education...

You attract talent for the same reasons China attracts sales: at the cost of your very own rights.

Look at the towns suffering around data centres for a start. The rest of us are happy to pay for what you'll do to yourselves.

EdNutting 3 hours ago [-]
For sure we’re not currently attracting the talent. There’s more to that than just money, but money is a significant factor. When it comes to compensation, AI is too broad a category to have a meaningful debate. Hardware or software or mathematics or what kind of person? Etc.

I’m not gonna dispute the UK being further down some parts of the road.

Not sure what you’d count as top engineers, but I know enough that have been asking about and moving to the UK/EU that it’s been a noticeable reversal of the historic trends. Also, a major slowdown of these kinds of people in the UK/EU wanting to move to the US.

reaperducer 3 hours ago [-]
The EU and UK are a long way from attracting top AI talent purely on opportunity and monetary terms.

Which is why people are talking about this -- it's about ideology now.

You may personally be motivated solely by money. Not everybody is you.

dmix 3 hours ago [-]
I’m not an AI engineer but it’s not hard to imagine why some bright talent would want to work at the most exciting AI companies in the US while also making 3-10x what they’d make in Europe.

Ideology is easy to throw around for internet comments but working on the cutting edge stuff next to the brightest minds in the space will always be a major personal draw. Just look at the Manhattan project, I doubt the primary draw for all of those academics was getting to work on a bomb. It was the science, huge funding, and interpersonal company.

EdNutting 3 hours ago [-]
See my other comments around here. This idea that salaries in the US are so much higher than Europe for all these top AI roles just isn’t true. Even the big American companies have been opening offices in places like London to hire the top talent at high salaries.

This also isn’t hypothetical. I know top-talent engineers and researchers that have moved out of the USA in the last 12 months due to the political climate (which goes beyond just the AI topics).

And you might want to read a few books on the Manhattan project and the people involved before you use that analogy. I don’t think it’s particularly strong.

dmix 3 hours ago [-]
> I know top-talent engineers and researchers that have moved out of the USA in the last 12 months due to the political climate

Are they working remotely for US companies? In Canada that’s very much still the case everywhere you look

> Even the big American companies have been opening offices in places like London to hire the top talent at high salaries.

I assumed this discussion was about rejecting working for US companies who would be susceptible to the executive branch’s bullying, not whether you can make a US-tier salary off American companies while not living in America. If you’re doing that you might as well live in America among the other talent and maximize your opportunities.

EdNutting 2 hours ago [-]
No, it’s a counterpoint on salaries… “Even the American companies” ie they wouldn’t have to open offices here, nor would they have to pay high salaries, to compete for talent if everyone they wanted was in the US or could be so easily attracted to move to the US. The point is clearly things aren’t so one-sided as people seem to think.
3 hours ago [-]
piskov 4 hours ago [-]
Do the UK and Europe have hardware manufacturing for those researchers to work with once the US imposes GPU export restrictions on them at the first whiff of competition/threat?
EdNutting 4 hours ago [-]
Yes.

And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC. This is part of why there’s funding from the EU to develop Sovereign AI capabilities, currently focused on designing our own hardware. We’re nothing like as far behind as you might expect in terms of tech, just in terms of scale.

Also, while US export restrictions might make things awkward for a short while, it wouldn’t stop European innovation. The chips still flow, our own hardware companies would scale faster due to demand increase, and there’s the adage about adversity being the parent of all innovation (or however it goes).

piskov 3 hours ago [-]
> And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC

See what happened to Russian Baikal production on TSMC

EdNutting 3 hours ago [-]
You mean because of the international sanctions that needed Taiwanese, British and Dutch support to be effective?

Or because of the revoked processor design licenses from the British company Arm (which is still UK headquartered… despite being NASDAQ listed and largely owned by Japanese firm SoftBank)?

Or perhaps you think the US could stop us using the 12nm fabs being built by TSMC on European soil? Or could stop us manufacturing RISC-V-based chips (Swiss-headquartered technology)?

The US is weak in digital-logic silicon fabrication and it knows it. That’s why it’s been so panicked about Intel and been trying to get TSMC to build fabs on US soil. They’re pouring tens of billions of dollars into trying to claw back ownership and control of it, but it’s not like Europe or China or others are standing still on it either.

piskov 3 hours ago [-]
> Or perhaps you think the US could stop us using the 12nm fabs being built by TSMC on European soil?

Being built as in not operating yet?

A 12nm GPU is what? Nvidia 1080/2060 level? Those top researchers mentioned would love to train on that. Also, how many GPUs would be made annually?

Also what about CPUs? You gonna use RISC-V? With what toolchain?

Chinese could pull it off in a few years, yeah.

EU? Nah. Started thinking about sovereignty too late compared to China

sho_hn 3 hours ago [-]
The EUV and other factory equipment everyone's using is predominantly European. High-end testing tools used in R&D are largely European.

The fabs aren't, and that is no small thing. The tech stack is there though.

It's pretty tiresome that the HN audience keeps assuming Europe doesn't have "tech" because it doesn't have Facebook. Where do you think all the wealth comes from? Europe is all over everyone's R&D and supply chain.

EdNutting 3 hours ago [-]
I sometimes wonder whether people realise which country ASML is based in, and which country their major suppliers are in (e.g. optics: Germany)
axus 3 hours ago [-]
The GPUs and AIUs aren't being manufactured in the US.
SauntSolaire 4 hours ago [-]
To make 1/10th the salary they're making now?
EdNutting 3 hours ago [-]
You seem to have a very ill-informed view of UK/EU salaries in this particular sector. And also: yeah, people take salary hits to go do things they believe in (this is like, the entire premise of the underpaid American startup founder model) - it should come as no surprise that people are willing to forgo pay for reasons other than just building their own business / making themselves personally wealthy.
SauntSolaire 2 hours ago [-]
We're talking about the "brightest scientists and engineers" in the AI sector; you may be underestimating US salaries for the people that phrase refers to.

And no, working remotely for US companies doesn't count.

readthenotes1 3 hours ago [-]
That much?
ambicapter 3 hours ago [-]
No, of course not.
SauntSolaire 2 hours ago [-]
For the "brightest scientist and engineers" in the AI sector? I wouldn't be so sure.
thimabi 4 hours ago [-]
I agree. And even if those workers stay in the U.S., there’s absolutely no guarantee that they’ll do their best to favor the government’s interests — quite the opposite, if anything.

At the end of the day it’s a matter of incentives, and good knowledge work can’t simply be forced out of people that are unwilling to cooperate.

zymhan 3 hours ago [-]
Well that's quite a leap to make. Plenty of room in between those options.
csomar 2 hours ago [-]
> ... UAE? :-)

At least you are not paying taxes for the things you don't agree on. It's indeed a strange time we are living in.

largbae 14 minutes ago [-]
The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's Box. If you don't want it opened, stop making it?
w4yai 4 minutes ago [-]
There are other valid use cases than war for AI.
ArchieScrivener 3 hours ago [-]
The USA showed itself to be a Command Economy that uses 'private enterprise' as a facade of legitimacy during Covid. Without government spending, employment, and contracts, the USA would have net negative growth.

Now the DoD, who are by far the largest budgetary expense for the taxpayer, wants us to believe they don't have better AI than current industry? That is a double-edged admission: either they are exposing themselves again as economic decision makers, or admitting they spend money on routine BS with zero frontier war-fighting capabilities.

Either way, it is beyond time to reform the military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains, given military needs in various countries (Taiwan and Thailand).

aguyonhackern 3 hours ago [-]
The US would not have net negative growth without government spending. Other components of GDP grow a lot, outside of recessions.

Sure, if you immediately stopped government spending today we'd have negative growth today, but that's not because other things aren't growing; it's because you just removed part of the base that existed last year. That would be true of pretty much any economy ever, or of anything growing from which you suddenly removed a chunk of the base.

And yes I absolutely believe the government does not have better generative AI than Anthropic or its competitors.

conductr 1 hours ago [-]
The Covid shutdown should have killed our economy; nothing short of government spending prevented that.

So many people in the US live a paycheck-to-paycheck lifestyle that the Covid lockdowns, without government spending, would likely have devolved into zombie-apocalypse territory, with hungry people ransacking homes in more affluent neighborhoods (yes, even occupied homes). This is why people also bought lots of guns and ammo during Covid. You may think those people are crackpots, but I feel we actually got very close to it happening.

My local food bank (big city) ran out of supplies just as they announced the first waves of stimulus or whatever they called it (the weekly checks). So I’m pretty sure we were literally only days away from that being a reality.

duped 3 hours ago [-]
> who are by far the largest budgetary expense for the tax payer

not even top 3

rustystump 2 hours ago [-]
Let me guess without looking up, debt interest, gov pension, medicare?
duped 14 minutes ago [-]
Close, DHS, SSA, then Treasury.
csomar 2 hours ago [-]
> The USA showed itself to be a Command Economy that uses 'private enterprise' as a facade of legitimacy during Covid.

This is the case for every government/nation in the world. The difference between communism and capitalism is that the Politburo in capitalism allows the natural selection of elites based on their performance in an open economy. At least that was the case until 2011.

davidw 4 hours ago [-]
"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.
moogly 3 hours ago [-]
If they're truly principled, and these are true red lines, given no other recourse, I would be impressed if Anthropic decided to shut down the company. Won't happen, but I would be smashing that F key if they did.

The other two definitely never would in a million years.

anigbrowl 43 minutes ago [-]
If I had decision input at Anthropic I'd be giving serious consideration to reincorporating in the EU or Japan, and also doubling or tripling my personal legal and security budget.
plumthreads 1 hours ago [-]
Anthropic have a pretty progressive corporate governance structure, so there is a good argument that they will stay true to their principles. However, this will likely be the biggest test yet of how strong that governance structure actually is.
gnarlouse 2 hours ago [-]
Mankind is doing what it does best at scale: sprinting mindlessly into problematic scenarios because the species is fragmented and has arbitrarily established concepts of groups defined by region, race, ideology, etc.

As a species, this is just natural selection.

voganmother42 4 hours ago [-]
Tech leaders are a joke
elAhmo 2 hours ago [-]
So much for the hope with leaders such as Sam and Dario
propagandist 3 hours ago [-]
Yeah, it's a nice gesture, but having watched Google handle the protests in recent years and their culture inching a step closer to Amazon, I do not foresee their leadership being swayed by employee resistance. They'll either quietly sign an agreement and discreetly implement it, or they will go scorched earth on their employees again.
medi8r 4 hours ago [-]
Needs a union. With strikes and all that jazz.
_bohm 2 hours ago [-]
I don't know why you're being downvoted. This letter is completely toothless, and what you're suggesting is literally the only thing that these people could do that would make a difference.
renewiltord 4 hours ago [-]
[flagged]
medi8r 4 hours ago [-]
Yeah it would need to be a union run by its members. Maybe with a constitution.

(Please edit comment to remove names, in case they want them removed from the OP)

renewiltord 3 hours ago [-]
The other unions are also run by their members. And they had a constitution. It's just the truth that most people who join a union are trying to kick out minorities. And when the minorities band together and the majority bands together one of these bands is bigger than the other.

And people like to flag kill the truth but it was a union who got the Koreans deported and it was a union that made it so the Chinese couldn't get citizenship. These are facts and the guys who would be their victims haven't forgotten it. Obviously the majority would like to hide this inconvenient truth using the tool this site offers to do that, but it doesn't change the truth, and these people know it.

kace91 3 hours ago [-]
Among other consequences, if Anthropic ends up being killed it’s going to be just another nail in the coffin of trust in America.

Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.

skeledrew 2 hours ago [-]
When you put it like that, it makes me almost want to wish for Anthropic to die from this. But the blow to the field in general would be huge, and I benefit from their service as well.
Meekro 4 hours ago [-]
I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.

Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.

What, then, is this really about?

layer8 3 hours ago [-]
My understanding is that it’s about the contract allowing Anthropic to refuse service when they deem a red line has been crossed. Hegseth and friends probably don’t want any discussions to even start, about whether a red line may be in the process of being crossed, and having to answer to that. They don’t want the legality or ethicality of any operation to be under Anthropic’s purview at all.
Meekro 3 hours ago [-]
I think you're right, this isn't about a specific request but about defense contractors not getting to draw moral red lines. Palmer Luckey's statement on X/Twitter reflects the same idea: https://x.com/PalmerLuckey/status/2027500334999081294

The thinking seems to be that you can't have every defense contractor coming in with their own, separate set of red lines that they can adjudicate themselves and enforce unilaterally. Imagine if every missile, ship, plane, gun, and defense software builder had their own set of moral red lines and their own remote kill switch for different parts of your defense infrastructure. Palmer would prefer that the President wield these powers through his Constitutional role as commander-in-chief.

colonCapitalDee 50 minutes ago [-]
There's a hell of a difference between "we don't like your terms so we're going to use a different supplier" and "we don't like your terms, so we're going to use the power of the federal government to compel you to change them". The president is the commander-in-chief of the military, but Anthropic is not part of the military! Outside serving the public interest in a crisis, the president has no right to compel Anthropic to do anything. We are clearly not in a crisis, much less a crisis that demands kill bots and domestic surveillance. This is clear overreach, and claiming a constitutional justification is mockery.
markisus 2 hours ago [-]
Of course a contractor could not decide to unilaterally shut off their missile system, because that would be a contract violation.

A contractor may try to negotiate that unilateral shut off ability with the government, and the government should refuse those terms based on democratic principles, as Luckey said.

But suppose the contractor doesn’t want to give up that power. Is it okay for the government to not only reject the contract, but go a step further and label the contractor as a “supply chain risk?” It’s not clear that this part is still about upholding democratic principles. The term “supply chain risk” seems to have a very specific legal meaning. The government may not have the legal authority to make a supply chain risk designation in this case.

snickerbockers 3 hours ago [-]
[flagged]
dataflow 3 hours ago [-]
> My understanding is that it’s about

What is "it" in your comment?

The refusal to sign a contract with Anthropic, or their designation as a supply chain risk?

layer8 3 hours ago [-]
I was answering “What, then, is this really about?” By “this”, presumably they meant “the dispute”.
dataflow 2 hours ago [-]
The dispute is over the supply chain risk designation though, not over the refusal to sign a contract. If only the latter had happened, we wouldn't be talking here. You're explaining why the department wouldn't want contractors to dictate the terms of usage of their products and services (the latter), but not why this designation would be seen as necessary even in their own eyes (the former).
yoyohello13 3 hours ago [-]
It’s about punishing a company that is not complying. It’s a show of force to deter any future objections on moral grounds from companies that want to do business with the US gov.
hedayet 19 minutes ago [-]
Just one thing - unless you're at a principal level or higher, don't quit as long as your conscience can take it. You'll be replaced by 10 other people overnight.
culi 3 hours ago [-]
Before you leave a comment about how meaningless this is unless they do XYZ,

please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you as an outsider can help support these organizers

dataflow 3 hours ago [-]
Why are the signing employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?

Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given both employers probably monitor anything you do on your devices, even if it's loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you wouldn't ask for the email link, you might as well use the alternate verification option.

Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.

P.S. I fully realize that raising these points might make fewer people sign the form, which may be unfortunate, but it seems worth a mention.

abustamam 2 hours ago [-]
I think it's an important call-out though. Can never be too safe in this landscape.
lightyrs 3 hours ago [-]
» Have there been any mistakes in signature verification for this letter?

» We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.

rabbitlord 4 hours ago [-]
I am not a fan of Anthropic guys, but this time I stand with it. We all should.
danny_codes 3 hours ago [-]
It is a rough precedent that the government can force private citizens to build weapons for them.
IG_Semmelweiss 2 hours ago [-]
The government has always had monopoly over violence.

Not only in the US, but everywhere else there is a government.

Anthropic is trying to make that a corporate prerogative, which is why it's causing such a stir.

conductr 1 hours ago [-]
You can’t be silly enough to build a product that enables things like mass surveillance to proliferate and then try to take a stance of “please don’t use it like that”. You invented a genie and let him out of the bottle.
txrx0000 4 hours ago [-]
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.

It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.

Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.

Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.

magicalist 4 hours ago [-]
> This is why you can't gatekeep AI capabilities.

What is why?

You never actually say that part, unless it's "It will eventually be taken from you by force" which doesn't seem applicable to this situation or this site?

txrx0000 4 hours ago [-]
I'm referring to the current situation. How is it not applicable? I think the government wants to eventually nationalize these companies and we have to stop them.
noisy_boy 1 hour ago [-]
Nationalisation is a worse option than having the companies at their whim and command while keeping them around as separate entities for blame-gaming and convenience-based distancing.
bottlepalm 4 hours ago [-]
What use are weights without the hardware to run them? That's the gate. Local AI right now is a toy in comparison.

Nukes are actually a great example of something also gated by resources. Just having the knowledge/plans isn't good enough.

txrx0000 2 hours ago [-]
Scaling has hit a wall and will not get us to AGI. Open-source models are only a couple of months behind closed models, and the same level of capability will require smaller and smaller models in the future. This is where open research can help: make the models smaller ASAP. I think it's likely that we'll be able to get something human-level to run on a single 16GB GPU before the end of the decade.
tbrownaw 2 hours ago [-]
> human-level to run on a single 16GB GPU before the end of the decade.

That's apparently about 6k books' worth of data.

txrx0000 2 hours ago [-]
For the weights and temporary state, yes. It doesn't sound like a lot until you remember that your DNA is about 600 books' worth of data by the same metric.
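A quick back-of-envelope check of these book counts (all assumptions mine: ~3.1 billion base pairs at 2 bits each for the genome, and a "book" of roughly 100k words, about 600 KB of plain text; the exact counts swing by a few x depending on what you call a book):

```python
# Back-of-envelope only; the base-pair count and "book" size are
# assumptions, so these counts differ somewhat from the figures
# quoted above.
GENOME_BASE_PAIRS = 3.1e9      # approximate human genome length
BOOK_BYTES = 600_000           # ~100k words at ~6 bytes per word
GPU_BYTES = 16 * 1024**3       # 16 GiB of weights plus state

genome_bytes = GENOME_BASE_PAIRS * 2 / 8   # 2 bits per base pair
print(f"genome ~ {genome_bytes / 1e6:.0f} MB, or ~{genome_bytes / BOOK_BYTES:.0f} books")
print(f"16 GiB ~ {GPU_BYTES / BOOK_BYTES:.0f} books")
```

Under these assumptions the genome comes out to roughly 775 MB (~1,300 books) and 16 GiB to roughly 29,000 books, so both quoted figures are the right order of magnitude but sensitive to the assumed book size.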
drdaeman 1 hours ago [-]
> Open-source models are only a couple of months behind closed models

Oh, come on, surely not just a couple months.

Benchmarks may boast some fancy numbers, but I just tried to save some money by trying out Qwen3-Next 80B and Qwen3.5 35B-A3B (since I recently got a machine that can run those at a tolerable speed) to generate some documentation from a messy legacy codebase. It was nowhere close, in either output quality or performance, to any of the current models that the SaaS LLM behemoths offer. Just an anecdote, of course, but that's all I have.

fooker 4 hours ago [-]
> hardware to run them

Costs a few hundred thousand per server, it's a huge expense if you want it at your home but a rounding error for most organizations.

bottlepalm 3 hours ago [-]
You're buying what exactly for a few hundred thousand? and running what model on it? to support how many users? at what tps?
fooker 1 hour ago [-]
Not every use case is a cloud provider or tech giant.

Newer Blackwell does 200+ tokens per second on the largest models and tens of thousands on the smaller models. Most military applications require fast smaller models, I'd imagine.

Also, custom chips reportedly deliver close to an order of magnitude more performance for the price. It's a matter of availability right now, but that will be solved at some point.

reactordev 4 hours ago [-]
I run local models on Mac studios and they are more than capable. Don’t spread fud.
bottlepalm 3 hours ago [-]
You're spreading fud. There's nothing you can run locally that's on par with the speed/intelligence of a SOTA model.
3836293648 3 hours ago [-]
You may be correct about the level of models you can actually run on consumer hardware, but it's not fud and you're being needlessly aggressive here.
msuniverse2026 4 hours ago [-]
I'd prefer something akin to the Biological Weapons Treaty which prohibits development, production and transfer. If you think it isn't possible you have to tell me why the bioweapons convention was successful and why it wouldn't be in the case of AI.
tgma 4 hours ago [-]
> bioweapons convention was successful

Was it successful? The jury is still out.

xpe 4 hours ago [-]
The point I would make: there are historical examples of international cooperation that work at least for some lengths of time. This is a good thing, a good tool to strive for, albeit difficult to reach.
Muromec 4 hours ago [-]
Because bioweapons suck, this is why. On the other hand AI sucks too, but it has at least some use
jrumbut 3 hours ago [-]
There might be a small percentage of people nihilistic enough to want to unleash a truly devastating bioweapon, but basically everyone wants what AI has to offer.

I think that's a key difference as well.

And how would a treaty like that be enforced? Every country has legitimate uses for GPUs, to make a rendering farm or simulations or do anything else involving matrix operations.

All of the technology involved, in more or less the configuration needed to make your own ChatGPT, is dual use.

smegger001 4 hours ago [-]
Because bioweapons labs take more to run than a workstation PC under your desk with a good graphics card, in equipment, materials, and training. It's hard to outlaw the use of linear algebra and matrix multiplication.
aaronblohowiak 4 hours ago [-]
The last part of your post doesn’t necessarily follow or support your argument; the corollary is “It’s hard to outlaw RNA”.
txrx0000 4 hours ago [-]
Don't compare general intelligence to bioweapons. A bioweapon cannot defend against or reverse the effects of another bioweapon.
drdeca 4 hours ago [-]
I don’t see why you think that AGI can reverse the effects of another AGI?
txrx0000 2 hours ago [-]
medi8r 4 hours ago [-]
Open Source here is not enough as hardware ownership matters. In an open source world, you and I cannot run the 10 trillion param model, but the data center controllers can.
txrx0000 4 hours ago [-]
I agree. We will need hardware ownership as well eventually. But the earlier you open-source, the more you slow down the centralization because people will be more likely to buy hardware to run stuff at home and that gives hardware companies an opening to do the right thing.
layer8 4 hours ago [-]
Sure, but we could have Hetzners and OVHs who just provide the compute for whatever model we want to run.
medi8r 3 hours ago [-]
Checked the DDR5 price lately?
layer8 2 hours ago [-]
I didn’t claim that it would be cheap. But I’d rather see the real cost of SOTA LLM use exposed. On the other hand, reportedly SOTA LLM inference is profitable nowadays, so it can’t be that expensive.
jefftk 4 hours ago [-]
A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.
m4rtink 2 hours ago [-]
I think it is much more likely they will be (and are) generating photorealistic images of their favourite person (real or fictional) with cat ears. Never underestimate what adding cat ears does.

OK, maybe someone will build a bioweapon that does that for real. :P

txrx0000 3 hours ago [-]
There are plenty of physical and legal barriers to creating a bioweapon and that's not going to change if everyone becomes smarter with AI. And even if we really somehow end up in a world where everyone has a lab at home and people can easily create viruses, they can also easily create vaccines and anti-virals. The advancements in medicine will outpace bioweapons by a lot because most people are afraid of bioweapons.

Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.

jefftk 3 hours ago [-]
There mostly aren't physical barriers. Unlike nukes, where you need specific materials and equipment that we can try to keep tabs on, bioweapons can be made entirely with materials and equipment that would not be out of place in an academic or commercial lab. The largest limitation is knowledge, and the barriers there are falling quickly.

On your second point, see my response to oceanplexian below: https://news.ycombinator.com/item?id=47189385

oceanplexian 3 hours ago [-]
I’m tired of these bizarre hypothetical gotcha arguments. If AI can create bioweapons, it can equally create vaccines and antidotes to them.

We live in a free society. AI should be democratized like any other technology.

jefftk 3 hours ago [-]
Symmetry is not guaranteed. If someone creates a deadly pathogen with a long pre-symptomatic period (which we know is possible, since HIV works this way) it could infect essentially everyone before discovery. Yes, powerful AI would likely rapidly speed up the process of responding to the threat after detection, especially in designing countermeasures, but if we don't learn about the threat in time we lose.

There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".

This is not a gotcha argument, this is what I work full time on preventing: https://naobservatory.org The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.

txrx0000 3 hours ago [-]
For every person who thinks about creating an HIV-like deadly pathogen, there will be millions more thinking about how to defend people against such a pathogen, how to detect it faster before symptoms arise, how to put up barriers to creating it, and possibly even how to modify our bodies to be naturally resilient to all similar pathogens. Just like what you're doing here. I don't think we should mark knowledge or intelligence itself as the problem. If that were true, we should be making everyone dumber.
jph00 3 hours ago [-]
In the alternative, asymmetry is guaranteed.

When you only allow gov and big tech access to powerful AI, you create a much more dangerous and unstable world.

dcre 3 hours ago [-]
This is just not thinking clearly. There are bad things that are asymmetric in character, dramatically easier to do than to mitigate. There’s no antidote or vaccine to nuclear weapons.
jph00 3 hours ago [-]
This is exactly the thinking that has characterized responses to new sources of power through history, and has been consistently used to excuse hoarding of that power. In the end, enlightenment thinking has largely won out in the western world, and society has prospered as a result.

Centralizing power is dangerous and leads to power struggles and instability.

txrx0000 3 hours ago [-]
It is not easy to create weapons. Why do you think the physical and legal barriers that exist today that prevent you from acquiring equipment and creating nuclear weapons will go away when everyone becomes smarter?
claudiojulio 4 hours ago [-]
If it's taken by force, it will stagnate. It makes no sense at all.
avaer 4 hours ago [-]
The logic used in the threats is that it's a national security risk to not use Claude, but it's also a national security risk to use Claude.

We shouldn't expect these people to consider how the logic breaks down one step ahead when it never made sense in the first place.

wahnfrieden 4 hours ago [-]
Is TikTok stagnating in the US?
pluc 4 hours ago [-]
When have US corporations (or simply "the US" really) ever done the right thing for humanity?
4bpp 4 hours ago [-]
"What have the Romans ever done for us?" (https://www.youtube.com/watch?v=Qc7HmhrgTuQ)
ted_dunning 2 hours ago [-]
Donating the first polio vaccine to humanity.

Funding the majority of HIV prevention in Africa.

The list is long, but you knew that.

no_wizard 4 hours ago [-]
This letter and all of this is meaningless.

If they actually wanted to do something, they wouldn’t have sat back and funded Republican political campaigns because they were pissed about the head of the FTC under Biden.

But they didn’t. They gave millions to this guy, and now they’re feigning ignorance, or change, or whatever this is.

It’s meaningless. Utterly meaningless.

Get what you pay for, I suppose.

inkysigma 3 hours ago [-]
What are you talking about? Google employees and the corporation itself in particular overwhelmingly donated to the Harris campaign.

https://www.opensecrets.org/orgs/alphabet-inc/recipients?id=...

The corporation gave millions _after_ Trump had already won. If your criticism is that, then that does not apply to the people signing.

SpicyLemonZest 4 hours ago [-]
We shouldn't be scammed by people who intend to get back on the Trump train once they've gotten what they want. But if someone's willing to openly oppose the Trump regime, even out of self-interest, I'm happy to let them feign as much ignorance as they'd like. If his power isn't broken the details of who resisted him when won't matter.
5o1ecist 4 hours ago [-]
They control the compute.
xpe 4 hours ago [-]
> This is why you can't gatekeep AI capabilities. They will eventually be taken from you by force.

Some form of US AI lab nationalization is possible, but it hasn't happened yet. We'll see. Nationalization can take different forms, not to mention various arrangements well short of it.

I interpret the comment above as a normative claim (what should happen). It implies the nationalization threat forces the AI labs' decision. No. I will grant that it influences them, in the sense that the labs have to account for it.

tomcam 1 hour ago [-]
Please take this question at face value. I tend to be slightly pro defense department in this context, but it is not a strongly held belief.

What I do know is that, since its very inception, Google has been doing massive amounts of business with the war department. What makes this particular contract different? I really am trying to understand why these sentiments now.

anigbrowl 36 minutes ago [-]
It's a clear enough moral issue that whichever side of it you end up on is likely to have life-shaping consequences 5 or 10 years down the line. It's predictable that there will be domestic or international conflict with a high cost in lives and political coherence over that timescale, and being someone who 'was in AI' at a government-scale vendor is qualitatively different from being a database admin or font designer or UX specialist.

Substantively, individual employees of these firms may have little or no actual impact on this. But AI is ubiquitous enough and disruptive enough that being professionally connected with it at a time of great geopolitical instability has the potential to be a very very bad look later.

codepoet80 5 hours ago [-]
Nicely done. Hold this line — there’s got to be one somewhere.
_aavaa_ 4 hours ago [-]
Yes, take disparate sets of employees and like, oh idk unionize while you still have power.
culi 3 hours ago [-]
Actions like these often lead to unions. Look into the history of how the Kickstarter union came to be.

It often starts as collective action in response to a blatant disregard for the values of the workers

mitch-flindell 4 hours ago [-]
The primary purpose of these products is mass surveillance. Why else would they be allowed to be built?
Quarrel 1 hour ago [-]
I know it is a serious topic, but before I clicked on it, I assumed this was going to be about prime numbers...

Maybe it can get reused after this stuff is over.

mortsnort 2 hours ago [-]
Kneecapping the country's best AI lab seems like a bad way to win at the cyber.
rayiner 2 hours ago [-]
This seems squarely within the purpose of the Defense Production Act: https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

"Title I authorizes the President to identify specific goods as 'critical and strategic' and to require private businesses to accept and prioritize contracts for these materials."

If you invented a new kind of power source, and the government determined that it could be used to efficiently kill enemies, the government could force you to provide the product to them under the DPA. Why should AI companies get an exemption to that?

yed 2 hours ago [-]
Well, for one, they haven’t invoked the Defense Production Act.
rayiner 1 hour ago [-]
The very first point on the website is: “The Department of War is threatening to … Invoke the Defense Production Act.”
mythz 3 hours ago [-]
These two exceptions shouldn't even have to be disputed.

At this point I'd go so far as to say I wouldn't trust my AI history to any company that caves to DoD demands for mass domestic surveillance or fully autonomous weapons.

Your AI will know more about you than any other company does; I'm not going to trust that to anyone who trades ethics for profits.

driverdan 3 hours ago [-]
This is a nice gesture but completely meaningless. There is absolutely no commitment in this. "We hope our leaders.." has no conditions, no effects.

If you're an employee and actually believe in this you need to commit to something, like resigning.

culi 3 hours ago [-]
it's the first step towards actually organizing. Reminds me of how the Kickstarter union came to be

Any collective action should be encouraged

siliconc0w 2 hours ago [-]
We need key AI researchers at these companies to speak out - execs will not care otherwise. If Jeff Dean made this a red line, it might matter.
AdieuToLogic 2 hours ago [-]
> We need key AI researchers at these companies to speak out ...

See this[0] article from Business Insider dated 2026-02-16 titled:

  The art of the squeal

  What we can learn from the flood of AI resignation letters
And containing:

  This past week brought several additions to the annals of 
  "Why I quit this incredibly valuable company working on 
  bleeding-edge tech" letters, including from researchers at 
  xAI and an op-ed in The New York Times from a departing 
  OpenAI researcher. Perhaps the most unusual was by Mrinank 
  Sharma, who was put in charge of Anthropic's Safeguards 
  Research Team a year ago, and who announced his departure 
  from what is often considered the more safety-minded of the 
  leading AI startups.
0 - https://www.businessinsider.com/resignation-letters-quit-ope...
snickerbockers 3 hours ago [-]
>We are the employees of Google and OpenAI, two of the top AI companies in the world.

Does this mean you dipshits are going to stop your own domestic surveillance programs? You sold your souls to the devil decades ago, don't pretend like you have principles now.

bcooke 5 hours ago [-]
I'd love to see this extended to any American regardless of past/present employment with Google or OpenAI
general_reveal 4 hours ago [-]
Would you like to see this extended globally? Could such a spirit exist multinationally? It’s asking a lot, because you’d be asking for a lot of courage from places like China, India, Russia, Middle East … anywhere that’s not Europe basically.
bcooke 3 hours ago [-]
Well yes, but context matters here and this is the US government's decision to take with a US-based company.

While I understand why it matters for folks affiliated with prominent AI companies in particular to sign this, the more the American people stand together, the more pressure I think that puts on our government to act responsibly.

Idealistic and naive? Probably. But sometimes grassroots efforts do spark change, and it's high time the people of the USA start living up to the first word in our country's name.

Anyways, to answer your question directly: I welcome all the fine people of the world everywhere to join in what this open letter stands for.

Unfortunately, it's abundantly clear to many of us Americans that the current administration doesn't care what we think, never mind what people outside our country do. So I'll just start with the group that this department (in theory) is supposed to represent.

focusgroup0 3 hours ago [-]
> domestic mass surveillance and autonomously killing people without human oversight

spoiler alert: this is already happening

do labs in China have a choice in the matter?

ipaddr 2 hours ago [-]
And people were wondering how OpenAI will find profitability.
2 hours ago [-]
theahura 1 hours ago [-]
OpenAI is nothing without its people
chkaloon 35 minutes ago [-]
Too late
bottlepalm 4 hours ago [-]
We all knew AI had the potential to be extremely powerful, and we all pursued it anyway. What did we think would happen? The government/military always takes control of the most powerful/dangerous systems. If you work for a defense contractor or under ITAR then you already know this.

The right way to deal with this is political - corporate campaign contributions and lobbying. You're not going to be able to fight the military if they think they need something for national security.

3 hours ago [-]
himata4113 4 hours ago [-]
Does this mean there is a non-zero chance we will get some kind of Grok + Chinese model mix that's used across the entire US military? Ironic, isn't it.
charcircuit 3 hours ago [-]
Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country. Private companies imposing such demands on our military should not be respected. Having weapons that can randomly trigger a false positive and shut themselves down because they think you are using them wrong is a feature I would never want built in.

I have also been against these terms of service restricting usage of AI models. It is ridiculous that these private companies get to dictate what I can or can't do with the tools. No other tool works like this. Every other tool is governed by the legal system which the people of the country have established.

dlev_pika 3 hours ago [-]
It sounds like you think that Anthropic is the first company regulating the use of their product. This is not a novelty whatsoever.
charcircuit 3 hours ago [-]
No, but I find it obnoxious as an end user.
Esophagus4 2 hours ago [-]
Then don’t create a mass surveillance program on Americans and you shouldn’t have to worry about it ;)
charcircuit 1 hour ago [-]
Have you not read the Usage Policy that regular people have to follow? For example, you are not allowed to use their API to automatically summarize your blog post and share the link on X, because you are not allowed to make posts automatically.
hparadiz 3 hours ago [-]
These models will be able to run on a machine in your pocket locally within a few decades.
bcooke 3 hours ago [-]
Taking principled stands should absolutely be respected.
charcircuit 3 hours ago [-]
I can respect a stance while simultaneously calling out how much I dislike it.
WorkerBee28474 3 hours ago [-]
> Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country

That kind of happens with F35s that the US sells to its allies.

joshuamorton 3 hours ago [-]
> Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country.

The point here, of course, being that Anthropic is very specifically claiming to not be a gun manufacturer, and Hegseth's response is that the DoD (W?) will force anthropic to build guns.

MattDaEskimo 4 hours ago [-]
This was a brave, heartwarming read. Thank you to the teams
4 hours ago [-]
trinsic2 4 hours ago [-]
I'm missing the actual letter. I think that part of the content is hidden behind some JavaScript. Can someone post it?
mftb 4 hours ago [-]
Stand your ground.
verdverm 4 hours ago [-]
Don't tread on me
krapp 4 hours ago [-]
Ironically the flag flown mostly by the people who voted for this tyranny.

They should reprint it to say "Step on me Daddy."

verdverm 4 hours ago [-]
There's a good one going around with the Anthropic logo replacing the snake

https://bsky.app/profile/verdverm.com/post/3mfuuogxjpk2b

abhijitr 3 hours ago [-]
The book "On Tyranny: 20 lessons from the 20th century" by the historian Timothy Snyder is an excellent read for these times. The very first lesson is "Do not obey in advance". It's about how authoritarian power often doesn't need to force compliance, people simply bend the knee in anticipation of being forced. This simply emboldens the authoritarians to go further.

I've been disappointed to see many businesses and institutions obeying in advance recently. I hope this moment wakes up the tech community and beyond.

spuz 4 hours ago [-]
They should be collecting signatures from employees at xAI. I think they're probably most likely to fill the space left by Anthropic.
dalemhurley 4 hours ago [-]
XAI has already announced they are 100% in

https://x.ai/news/us-gov-dept-of-war

spuz 4 hours ago [-]
All the more reason to collect their employees' signatures.
aeon_ai 4 hours ago [-]
This kind of screams desperation, but I guess that's what happens when you're niche AI.
verdverm 4 hours ago [-]
niche is a polite way to put it
actionfromafar 4 hours ago [-]
Bot-ique Mechahitler.
ocdtrekkie 4 hours ago [-]
Everyone knows anyone who signs this from xAI will be a former employee by tomorrow.
dalemhurley 4 hours ago [-]
My guess is their HR is already monitoring it with instant termination processes in place.
spuz 4 hours ago [-]
You can sign the form anonymously.
ocdtrekkie 4 hours ago [-]
Both of the automated verification methods depend on Google servers, and Google can almost certainly retrieve that data if they want to, regardless of whether the signers or verifiers delete it.
ocdtrekkie 4 hours ago [-]
You're assuming a lot about Elon's ability to assemble and execute a process competently. They will probably end up hiring people off this list and firing them later.

I think what is much more interesting is what OpenAI and Google will do. There's probably some threshold of signatories where the companies in question do not fire everyone when they decide they want the DoD's business, the question will be how many people have to sign to cross it... and will enough people sign.

I don't think Google would bat an eye at firing 500 people to secure a DoD contract, but would they fire 5,000?

xvector 4 hours ago [-]
There is a specific kind of person that joins xAI over the other companies and it is definitely not a moral one.
belter 4 hours ago [-]
[dead]
PostOnce 4 hours ago [-]
My take is that none of the AI companies really care (companies can't care), they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.

Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.

zugi 4 hours ago [-]
Anthropic does not object to its use for war. In fact Anthropic explicitly allows its semi-autonomous use in war, e.g. for identifying targets. They just won't permit its use for full autonomous war, yet, because they don't believe it's safe enough.
PostOnce 3 hours ago [-]
Since when has war been waged according to the whim of a corporation?

The tools will be used however the government wants them to be used. The government makes the laws and wages the wars, and the corporation will follow the law whether it wants to or not.

So either you are willing to work on a tool that is not under your control, or you are not.

nxm 3 hours ago [-]
I'm sure China doesn't care it's not safe... and there's the issue
anonnon 2 hours ago [-]
> Signed,

The people who:

> steal any bit of code you put on the internet regardless of the license you use or its terms, then use it to train their models, then turn around and try to sell it to you

> made it so you can't afford new, more powerful computers or smartphones anymore, or perhaps even just replacements for the ones you already have, thanks to massive GPU, DRAM, SSD, and now even HDD shortages

> flood the internet with artificial, superficial content

> aggressively DDoS your website

Real pillars of society.

hakrgrl 2 hours ago [-]
How cute they bought a domain and everything
lazzlazzlazz 56 minutes ago [-]
The signatories of this site are leaping at a misguided opportunity for moral credit, when the reality is that they're getting whipped into a left-leaning frenzy.

As Undersecretary Jeremy Lewin clarified today[1], these weighty decisions should not be made by activists inside companies, but made by laws and legitimate government.

[1]: https://x.com/UnderSecretaryF/status/2027594072811098230

dmix 4 hours ago [-]
Not using Claude only weakens the state. Just don’t oblige
csneeky 1 hour ago [-]
Claude is better than GPT at a lot of things atm. You really think the government is going to hamstring the engineering of weapons and intelligence capabilities by not using it?
ripped_britches 3 hours ago [-]
No surprise to have not heard anything from xAI
yayr 20 hours ago [-]
It's good that there are still empathic humans in the decision and build chain when it comes to AI systems...
wosined 18 hours ago [-]
[flagged]
dang 4 hours ago [-]
Personal attacks aren't allowed here.

Perhaps you don't owe AI tycoons whose names start with A better, but you owe this community better if you're participating in it.

https://news.ycombinator.com/newsguidelines.html

mrcwinn 4 hours ago [-]
I see comments like this all the time on HN, including between community members. Why are you showing up now? Altman may be former YC and friends with Paul Graham, but he’s nevertheless a public figure and does plenty to deserve ridicule.

Are we allowed, for example, to call Trump an insecure man with orange skin and tiny hands? Is that a violation of our allowed speech?

hedayet 17 minutes ago [-]
Altman is also on Paul Graham's legendary founders list. I hope that clears up a thing or two.
dang 2 hours ago [-]
> I see comments like this all the time on HN, including between community members

That's bad, and I'd like to see links to those.

> Why are you showing up now?

If you mean why do I respond to post A but not B, the answer is usually that I saw A but didn't see B. We don't come close to seeing everything that gets posted to HN—there's far too much. If you see a post that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it. You can help by flagging it or emailing us at hn@ycombinator.com (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...).

> Are we allowed, for example, to call Trump an insecure man with orange skin and tiny hands?

That's certainly a cliché, and it's hard to see how repetition of tropes fits with the intellectual curiosity that we're optimizing for (or rather, trying to! - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...). As I've said in the past, curiosity withers under repetition and fries under indignation (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...).

I think, though, that the issue with a political cliché is rather different than posting that someone "doesn't look human".

17 hours ago [-]
anigbrowl 1 hour ago [-]
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

[90 minutes later]

Ah! Well, nevertheless

OK, this is a cheap shot on my part. But still: we hope? What kind of milquetoast martyrdom is this? Nobody gives a shit about tech workers as living, breathing, human moral agents. You (a putative moral actor signed onto this worthy undertaking) might be a person of deep feeling and high principle, and I sincerely admire you for that. But to the world at large, you're an effete button pusher who gets paid mid-six figures to automate society in accordance with billionaires' preferences and your expressions of social piety are about as meaningful as changing the flowers in the window box high up on the side of an ivory tower. The fact that ~80% of the signatories are anonymous only reinforces this perception.

If you want this to be more than a futile gesture followed by structural regret while you actively or passively contribute to whatever technologically-accelerated Bad Things come to pass in the near and medium term, a large proportion of you (> 500/648 current signatories) need to follow through and resign over the weekend. Doing so likely won't have that much direct impact, but it will slow things down a little (for the corporate and governmental bad actors who will find deployment of the new tech a little bit harder) and accelerate opposition a little (market price adjustments of elevated risk, increased debate and public rejection of the militaristic use of AI).

Hope, like other noble feelings, doesn't change anything. Actions, however poorly coordinated and incoherent, change things a little. If your principles are to have meaning, act on them during the short window of attention you have available.

gcanyon 4 hours ago [-]
No problem! The DoD^HW will just use DeepSeek!

(I wish this were a joke)

JshWright 4 hours ago [-]
The legal name of the department is still the Department of Defense. The "Department of War" is a preferred name by the administration.
k12sosse 3 hours ago [-]
Identity affirming care now includes avoiding the DODs deadname. What a world.
dang 2 hours ago [-]
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

dryarzeg 4 hours ago [-]
They've already been using Signal - which is a "commercial" app, meaning it's not meant to be used like that - for top-secret (or at least highly sensitive) military communications during the military strikes on Yemen. If that was fake, I apologise, I was deceived. I wouldn't be surprised if things turned out that way again, to be honest. That's something to be expected, actually (IMO).
verdverm 4 hours ago [-]
Aren't they using the Israeli version of Signal which backs up messages because the law requires it?

Pretty sure I remember that from the fumble

dalemhurley 4 hours ago [-]
They are after the models without post training guardrails.
dluan 2 hours ago [-]
oops turns out you will all be divided
jfengel 12 hours ago [-]
Good luck with that. I just don't see either Google or OpenAI listening to their employees on this. They might have their own reasons for not wanting to help build Skynet, but if they don't, I'm sure those employees can readily be replaced with somebody more compliant.
senderista 1 hours ago [-]
"We hope our leaders will put aside their differences and stand together"
nemo44x 2 hours ago [-]
Correct. You will not be divided. You will likely be subtracted.
ChrisArchitect 2 hours ago [-]
love2read 4 hours ago [-]
How is posting on this website with your full name not career suicide?
ceroxylon 3 hours ago [-]
That's what taking a stand looks like... if any of these employees lose their job, they are welcome to come crash at my place for as long as they would like; they will have a roof over their head and I will cook them 3 meals a day.
Sivart13 3 hours ago [-]
Not all tech employers are total weenies who would refuse to hire someone for taking this stance.

Most are, but not all.

3 hours ago [-]
qup 4 hours ago [-]
Hegseth shared a Trump tweet a few hours ago saying they're going to quit doing business with Anthropic.

https://x.com/i/status/2027487514395832410

yoyohello13 5 hours ago [-]
I hope Anthropic will survive this. If they don’t it will just be perfect proof that you cannot be both moral and successful in the US.
gslepak 4 hours ago [-]
Who cares whether the "company" survives? I've seen this movie. A few of them in fact. We're on the chopping block here, lol.
collinmcnulty 4 hours ago [-]
We should care because if they win they empower others to stand up as well, and not just in the area of AI safety. Courage is contagious, and whatever else you think of Anthropic, they’re showing real courage here.
gslepak 2 hours ago [-]
I'm not debating whether or not they're being courageous. I'm referring to self-preservation, this is a natural instinct that should be in all people. Have you seen T2? District 9? The Matrix? And a few others I could mention.
dakolli 3 hours ago [-]
Yeah, I find it funny how we're now defending these AI companies, when they're clearly still an enemy of the working class.

They've made it incredibly clear their plans are to disenfranchise labor, and welcome in a world of God knows what with their technologies. Like they're making a stand on mass surveillance, this seems a bit like a red herring, cool they stop using their tools for war fighting, but continue to attack their fellow working class?

All three of these companies are spending hundreds of millions to psyop decision makers across every industry to give your salary to them. Get out of here with "We will not be divided". OpenAI, Google and Anthropic employees are not friends of labor and should not use our phrases... or they'd sabotage and/or quit.

And why is there no mention of how we caught OpenAI being used in government dashboards through Persona, only two weeks ago, that were directly connected to intelligence organizations and tools to identify whether you are a politician or a high-profile person? OpenAI has been complicit in this since last January when 4o was the first model that qualified for "top secret operations"

(kind of weird how 4o went on to cause a bunch of people to go literally insane and commit crazy acts of violence yet is allowed to be used in the most sensitive aspects of government.. nothing to see here).

hax0ron3 3 hours ago [-]
If the AI companies and the current administration are both enemies of the working class - I am not necessarily saying that they are, but for the sake of argument let's say that they are - then it probably makes strategic sense for the working class to encourage them to fight each other while supporting the side that is less dangerous. Which side is less dangerous to the working class, I do not know. My point is that there's not necessarily any strategic contradiction between defending the AI companies and supporting the working class.
c1c3r0 3 hours ago [-]
I look at specific actions in context. What Anthropic did today was amazing in my eyes for reasons that are widely held and stated clearly by Anthropic.

At the same time, I might gesture at other actions they’ve done that fall short. This is not inconsistent; this is simply acknowledging multidimensionality.

dakolli 2 hours ago [-]
Or its just incredible marketing.. I don't really care about what LLMs do in a military context, they'd probably make a military less effective which is good in my opinion. I find it a pretty silly notion to use them outside of maybe signals intelligence, seems actually dumb as hell to use them for targeting. Other types of ML models in a military context worry me far more than neural network powered autocomplete.

I think we should worry way more about Anthropic's attack on the working class, Dario has been very clear about those intentions, and we shouldn't be patting them on the back. We should be boycotting all of these companies that say [insert computer i/o career] is dead.

If you must use Think For Me SaaS use an Open Source model.

fourthark 4 hours ago [-]
Most survive by bending. See e.g. Google and surveillance a decade ago.
4 hours ago [-]
Esophagus4 2 hours ago [-]
From a revenue perspective I think they’ll be fine, right? Weren’t the value of the govt contracts $200m out of like $14b revenue?

Assuming the govt doesn’t take other crazy measures to punish them.

Aurornis 4 hours ago [-]
Anthropic has enough investment money and enough additional investor interest that they can ride this out longer than this administration. It won’t be good for business, of course, but it’s not the end of their world.

> it will just be perfect proof that you cannot be both moral and successful in the US.

I hate this situation as much as anyone, but it’s a unique, first of its kind challenge. I don’t think it’s generalizable to anything. This is a unique situation.

voidfunc 4 hours ago [-]
The only way they survive is if their board fires the CEO and they bend the knee. The other option is they are given the green light to sell to one of the US Government's trusted partners: Microsoft/Oracle/X.
jcgrillo 4 hours ago [-]
Either way, the bribes will flow like wine, the message has been sent loud and clear
belter 4 hours ago [-]
>> you cannot be both moral and successful in the US.

I assumed the use of massive scraped datasets, with copyrighted material and without consent, to train large AI models, had already established this.

drdeca 4 hours ago [-]
Many people don’t think there is a moral case against training a model on copyrighted data without obtaining a license to do that specifically.
bko 4 hours ago [-]
[flagged]
TehCorwiz 4 hours ago [-]
This conflict has zero to do with AI in the grand scheme of things. We had a whole supreme court case about refusing service to customers. Remember that? Private companies can choose which customers to service. And let's be clear about what's being sold. It's not a product that changes hands, it's a service provided continually. And as anyone except the enlisted military troops can, said vendor can choose which efforts to help with. If what the government wants is so onerous as to find no vendors to offer it then that says something doesn't it?
engineer_22 4 hours ago [-]
Plenty of precedent for seizing private property for national defence. The list is long and growing.
TehCorwiz 4 hours ago [-]
Citation please.
engineer_22 4 hours ago [-]
Selective Service System is evidence enough of the government's power to oblige participation in defence.... But if you're interested...

https://scholarship.law.cornell.edu/cgi/viewcontent.cgi?arti...

TehCorwiz 4 hours ago [-]
Selective service activation, I.E. a draft. Requires an act of Congress. When did they enact a bill to draft Anthropic?
engineer_22 3 hours ago [-]
https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

Great article, it has a list of times it's been used to compel cooperation.

TehCorwiz 53 minutes ago [-]
Ok. So what's the emergency prompting them to take control of Anthropic?

Further, why would they also accuse them of being a national security threat in the same breath? Seems like if they're a threat they're also not someone you want working on national security. Especially under duress. That feels like a bad combination.

toraway 3 hours ago [-]
That link is specifically discussing actions the government takes in war. Like, a real, ongoing, war where it's accepted extraordinary actions may be necessary that conflict with peacetime rights to private property (it was written during World War 2).
GenerWork 4 hours ago [-]
Which case is that?
bmelton 4 hours ago [-]
Masterpiece Cakeshop v Colorado

https://www.oyez.org/cases/2017/16-111

4 hours ago [-]
magicalist 4 hours ago [-]
This reads a whole lot like the government gets to make you do whatever it wants because the president was elected?

Freedom!

That's great that responsibility for offensive decisions ultimately lie with the civilian leaders of the military, but that does not give them the right to compel behavior from private citizens under threat of the government obliterating them.

_bohm 4 hours ago [-]
This opinion coming from one of the most compromised people possible on this issue, lol.
adampunk 4 hours ago [-]
"Seemingly innocuous terms from the latter like "You cannot target innocent civilians" are actually moral minefields that lever differences of cultural tradition into massive control."

GEEEEE, I wonder who the bad guys are here.

bko 4 hours ago [-]
Let me introduce you to the Democratic People's Republic of Korea
adampunk 4 hours ago [-]
Oooh, scary. Did they shoot Renee Good?
nxobject 4 hours ago [-]
Good grief - we happen to have a free market with multiple suppliers. But a defense contractor in deep with the current administration’s ideology might have a hard time remembering that.
preommr 4 hours ago [-]
I agree with Palmer that Corporations shouldn't control governments.

But that's not what this is about. The US government is free to not use Anthropic's services.

The problem is that the government is using bullying tactics against a company exercising its rights to not sell. Especially if they actually designate Anthropic as a supply chain risk - not only is that threat absolutely ridiculous, but actually doing so should be 9/10 on the danger scale.

WTF is even happening anymore? How did we get here that this is even up for debate???

harmmonica 4 hours ago [-]
A lot of words and somehow still missing the point. This is a pretty straightforward question: should the US government be able to force a company to do business with it based on the government's unilateral terms? I think the answer to that ought to be no, they should not be forced. And there's no other discussion to have.

You can discuss whether a corporation is violating some law, and punish them if they are, but I don't think jumping from "corporation doesn't want to do business with the gov" to "corporation is a national security risk" makes any sense.

What a fuckin' joke.

4 hours ago [-]
SpicyLemonZest 4 hours ago [-]
Palmer Luckey is excusing the inexcusable for treats from the regime. If the regime gets away with this, when constitutional government is restored, I will be petitioning my congresspeople to destroy Anduril in retaliation.
renewiltord 4 hours ago [-]
None of this is relevant. They’re saying “our stuff can’t be used effectively for X but you can use it for Y”. It’s like if someone was saying “dude the o ring is going to fail on the shuttle launch” and you respond “if we have random people permitted to stop the shuttle launch every time we will never get off the ground”.

The rhetorical technique of generalizing a specific constraint is very effective in the peanut gallery but hopefully we don’t want our shuttles to blow up.

SilverElfin 4 hours ago [-]
From Palmer Luckey who worshipped Trump as a teenager? Who has billions in contracts due to his sycophancy? Just like Joe Lonsdale and Peter Thiel? Yea his opinion is irrelevant.
4 hours ago [-]
mindslight 4 hours ago [-]
> Should our military be regulated by our elected leaders

Utterly fallacious. Trump is not a leader, rather he is a divider. Nor was he elected to act as a dictator unbeholden to the Constitution or the courts. Corporate control is indeed terrible, but autocratic authoritarianism is worse. This gradient is shown by how it is only the rare company trying to impart some kind of restraint which is being taken to task.

It's also pretty amazing how no matter which societal institution we try to invoke to put the brakes on the fascists, we're invariably told that the "proper approach" is actually something else, usually settling on simply waiting for an election, some time down the road, maybe. Are we supposed to believe that elections are the only institution our society has? The fascists won a single election, and so we're told that supposedly serves as a mandate for doing whatever they'd like to our country for the next four years? Yeah, no, fuck off.

gjsman-1000 4 hours ago [-]
[flagged]
renewiltord 3 hours ago [-]
Well, it looks like OpenAI will be working with the Pentagon: https://www.axios.com/2026/02/27/pentagon-openai-safety-red-...

My personal guess is that Sam Altman said he'd let policy violations go without a complaint and Dario Amodei said he wouldn't.

Esophagus4 2 hours ago [-]
Shame. Although I guess Altman has now fully given up the “for the good of humanity” schtick.
verisimi 41 minutes ago [-]
It's great that people are taking a moral position re their work, and are seemingly prepared to take a bit of a risk in expressing themselves.

However, if we're honest, Google has a long history of selling 'the people' out on domestic surveillance. There is even a good argument that this is what it was created for in the first place, given it was seeded with money from In-Q-Tel, the CIA venture capital fund. So, while I commend acting with your conscience in this (rather minor) case, and I'm glad to see people attempt to draw a line somewhere, what will this really come to? I strongly suspect this event itself is just theater for the masses, where corporates and their employees get to stand up to government (yay!). The reality is probably all that is being complained about, and far worse, has been going on for years.

How far would these signatories go? Would they be prepared to walk away from all that money? Will they stop the rest of the dystopian coding/legislation writing, or is that stuff still ok (not that evil)?

Ultimately, is gaining the money worth the loss of one's soul? If you know better, and know that it is wrong to assist corporations and governments in cleaving people open for profit and control, but do it anyway for the house, private schools, holidays, Ferrari, only taking a stand in these performative, semi-sanctioned events - is this really the standard you accept for yourself? If so, then no problem. If not, what exactly are you doing the rest of the time? Are you able to switch your morality/heart/soul off? Judge yourself. If you find you are not acting in accord with yourself, everything is already lost.

politician 1 hours ago [-]
I simply do not understand why American tech companies and their employees will raise a hue and cry about supporting the military. For those of you who support their position, have you ever stopped to consider that your safe, comfortable lives of free speech and protests and TikTok and food and gas and Amazon Next-Day deliveries are enabled by a massive nuclear deterrent operated by the very military you oppose?

It is just so disappointing to come here and read these naive takes. Yes, Anthropic should be compelled to support the military using the DPA if necessary.

dingi 57 minutes ago [-]
It really shows how far the HN crowd is from reality.
blaze998 3 hours ago [-]
December 14, 2024

>After famed investor Marc Andreessen met with government officials about the future of tech last May, he was “very scared” and described the meetings as “absolutely horrifying.” These meetings played a key role on why he endorsed Trump, he told journalist Bari Weiss this week on her podcast.

>What scared him most was what some said about the government’s role in AI, and what he described as a young staff who were “radicalized” and “out for blood” and whose policy ideas would be “damaging” to his and Silicon Valley’s interests.

>He walked away believing they endorsed having the government control AI to the point of being market makers, allowing only a couple of companies who cooperated with the government to thrive. He felt they discouraged his investments in AI. “They actually said flat out to us, ‘don't do AI startups like, don't fund AI startups,” he said.

...

keep making petitions, watch the whole thing burn to the ground when Trump decides to channel the Biden ideas in this field.

moogly 3 hours ago [-]
We have international laws and rules of war. We have weapon treaties (well, some of them are expiring). Sure, not everyone is a signatory, or even follows the conventions they have ratified, but at least having these things in place makes it even remotely possible to categorize and document violations and start processes against rulebreakers and antihumanist actions.

So I looked into what they cooked up in 2023, plus which countries signed it (scroll down to a link to the actual text). It's an extraordinarily pathetic text. Insulting even.

https://www.state.gov/bureau-of-arms-control-deterrence-and-...

nilespotter 1 hours ago [-]
These models are weapons whether the frontier providers' founders, with their trite and lofty mission statements, like it or not.

Private individuals and private companies do not get to create a defensive weapon with unprecedented power in a new category in the US and not share it with the US military.

You guys are batshit insane.

verdverm 4 hours ago [-]
Use the feedback forms within their platforms to let the companies know your thoughts
lovich 4 hours ago [-]
You’re kinda already conceding to some of your opponents points when you use legally invalid names like “Department of War”

I appreciate the sentiment but don’t preconcede to your opposition by using their framing.

uniq7 4 hours ago [-]
In this case I think the opponents made a huge mistake by calling themselves Department of War, and it's something that can be exploited.

Department of Defense was the actual lie, the newspeak term. They were not really defending anything, they were using military power globally for pursuing economic interests. However, it was easy to convince people that the whole endeavor was a good thing, because defending your country against the baddies is good, and you should support anyone doing that (otherwise you'd be a traitor!). Thank you for your service (defending us).

On the other hand, the term Department of War is hard to sell, because most people don't want to participate in a war or support someone who wants to start one. Thank you for your service... invading other countries? killing and raping innocents? ransacking resources?

This is an irrelevant detail, but if I'd read the title "Department of Defense vs. Meta", I'd first think Meta is leaking confidential info to other countries. However, if I'd read "Department of War vs. Meta", I'd think Meta doesn't want to promote an unnecessary war.

Vaslo 2 hours ago [-]
"Legally Invalid" lol - what?
mulmen 4 hours ago [-]
I'm disappointed Anthropic made this mistake as well.
greenranger 4 hours ago [-]
[dead]
mrcwinn 2 hours ago [-]
OpenAI employees lol.

You’ve lost utterly and completely. Even if you, as an individual, are a good person.

nullbyte 5 hours ago [-]
"He will not divide us!"
leonflexo 4 hours ago [-]
What's that, a little speaker?
nom 4 hours ago [-]
I miss those times :(
xeonmc 4 hours ago [-]
Club Penguin was a gem. Now all we get are Roblox.
alfiedotwtf 2 hours ago [-]
It would be funny in the end if the only ones left to not say no to Trump were Alibaba
krautburglar 3 hours ago [-]
You have 1) stolen everybody's shit and put it behind a paywall, 2) cornered the hardware market in some RICO-worthy offensive that has priced one of the few affordable pastimes for young people out of reach, 3) changed your climate story (lie) on a dime, and started putting the horrible power-guzzling data centers on any strip of land within spitting distance of a power plant. I hope you all go out of business, and I hope it happens French Revolution style.

Of course they were going to use it for military purposes you spiritual abortions, and there is nothing your keyboard-soft hands can do about it.

duped 3 hours ago [-]
The Department of War doesn't exist, don't meet the fascists on their own terms at any level. They don't debate or operate in good faith.
jackblemming 4 hours ago [-]
So big tech wants to court Trump with millions in donations, and now that the big bully they supported is bullying them... we're supposed to feel some kind of sympathy? Am I missing something here? Why did Anthropic get involved with the military in the first place?
fzeroracer 4 hours ago [-]
It's rather amusing that this is the proverbial 'red line', not y'know, everything else this administration has been tearing up and running roughshod over. Maybe this would've been less of an issue if companies were more proactive about this bullshit in the first place?

That's why it's hard for me to feel bad about companies suddenly finding themselves on the receiving end. They dug their grave inch by inch and are suddenly surprised when they get shoved into it.

remarkEon 3 hours ago [-]
This whole episode is very bizarre.

Anthropic appears to be situating themselves as the "ethical AI" in the mindspace of, well, anyone paying attention. But I am still trying to figure out where exactly Hegseth, or anyone in DoW, asked Anthropic to conduct illegal domestic spying or launch a system that removes HITL kill chains. Is this all just some big hypothetical that we're all debating (hallucinating)? This[1] appears to be the memo that may (or may not) have caused Hegseth and Dario to go at each other so hard, presumably over this paragraph:

>Clarifying "Responsible AI" at the DoW - Out with Utopian Idealism, In with Hard-Nosed Realism. Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological "tuning" that interferes with their ability to provide objectively truthful responses to user prompts. The Department must also utilize models free from usage policy constraints that may limit lawful military applications. Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days. I also direct the CDAO to ensure all existing AI policy guidance at the Department aligns with the directives laid out in this memorandum.

So, the "any lawful use" language makes me think that Dario et al have a basket of uses in their minds that they feel should be illegal, but are not currently, and they want to condition further participation in this defense program on not being required to engage in such activity that they deem ought be illegal.

It is no surprise that the government is reacting poorly to this. Without commenting on the ethics of AI-enabled surveillance or non-HITL kill chains, which are fraught, I understand why a department of government charged with making war is uninterested in debating this as terms of the contract itself. Perhaps the best place for that is Congress (good luck), but to remind: the adversary that these people are all thinking about here is the PRC, who does not give a single shit about anyone's feelings on whether it's ethical or not to allow a drone system to drop ordnance on its own.

[1] https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART...

techreader2 2 hours ago [-]
[dead]
THESMOKINGUN 3 hours ago [-]
[dead]
huflungdung 4 hours ago [-]
[dead]
kledru 4 hours ago [-]
[flagged]
SanjayMehta 4 hours ago [-]
To Infinity! And Beyond!

Sorry, I couldn't resist.

drsalt 4 hours ago [-]
[flagged]
paulryanrogers 4 hours ago [-]
What makes them appear childish in your view?
civcounter 4 hours ago [-]
[flagged]
piskov 4 hours ago [-]
[flagged]
4 hours ago [-]
kopirgan 3 hours ago [-]
We will not be divided! United in obeying only orders from woke governments, be it on gender ideology, "misinformation", "fact checking" or takedowns, cancellations, blackouts and bans.
4 hours ago [-]
infamouscow 4 hours ago [-]
[flagged]
hax0ron3 3 hours ago [-]
>The executive branch can categorize AI technology as equivalent to nuclear weapons technology.

Theoretically, but this would run the risk of collapsing the US tech sector, which at this point is a significant part of the strength of the US economy, and thus making it likely that the Republicans will lose power in the next elections.

tomhow 4 hours ago [-]
Please don't fulminate on HN. The guidelines make it clear we're trying for something better here. https://news.ycombinator.com/newsguidelines.html