"Three clicks convert a data point on the map into a formal detection and move it into a targeting pipeline. These targets then move through columns representing different decision-making processes and rules of engagement. The system recommends how to strike each target – which aircraft, drone or missile to use, which weapon to pair with it – what the military calls a “course of action”. The officer selects from the ranked options, and the system, depending on who is using it, either sends the target package to an officer for approval or moves it to execution."
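The workflow in that quote can be sketched as a toy program. Everything here is invented for illustration (class names, scores, statuses); the real system's interfaces are not public.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "detection -> ranked course of action ->
# approval or execution" flow described in the quote above.

@dataclass
class CourseOfAction:
    platform: str   # e.g. aircraft, drone or missile
    weapon: str
    score: float    # the system's ranking of this option

@dataclass
class Target:
    detection_id: str
    options: list = field(default_factory=list)
    status: str = "detected"

def rank_options(target):
    """Return courses of action best-first, as the operator would see them."""
    return sorted(target.options, key=lambda c: c.score, reverse=True)

def route(target, chosen, requires_approval=True):
    """Either queue the package for an approving officer or send it to execution."""
    target.status = "awaiting_approval" if requires_approval else "execution"
    return target.status

t = Target("det-001", options=[
    CourseOfAction("drone", "small-diameter bomb", 0.72),
    CourseOfAction("aircraft", "JDAM", 0.91),
])
best = rank_options(t)[0]
print(best.platform, route(t, best))  # -> aircraft awaiting_approval
```

The point of the sketch is how short the human's path from ranked list to execution queue is: one selection, one routing decision.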
----------------
Maven is a tool for use in the middle of a war. When both sides are firing, minutes saved can mean lives saved for your side. Those lives, at least partly, balance the risks of hitting a bad target.
This was not a strike made in the middle of a war. If Maven was used in the strike that took out a school, it was being used as part of a sneak attack. Nobody was shooting back while this was being planned. Minutes saved were not lives saved. The priority should have been getting the targets right. Humans should have been double- and triple-checking every target by other means. This clearly didn't happen. The school was demonstrably a school; it even had its own website. Humans would have spotted this if they had done more than make their three clicks and move on to the next target.
Whoever made the choice to use Maven to plan a sneak attack without careful checking made an unforced error when they had all the time in the world to prevent it. Whether it was overconfidence in their tools or a complete disregard for the lives of civilians that caused this lapse, they are directly responsible for the deaths of those little girls. I sincerely hope there are (although I doubt there will be) consequences for this person beyond taking that guilt to their grave.
jvanderbot 17 minutes ago [-]
I recommend looking closely at the New York Times analysis. There were factors that might have weighed against selecting this as a strike target, but it also really did look like part of the compound (and it originally was!). Yes, with hindsight we can know definitively, and with sufficient time each target could probably have been positively ID'd, but there was precisely one mis-strike in thousands of sorties, so this is already a low error rate. TFA discusses 50 specific strikes, identified via automated analysis, all of which missed. That doesn't seem like the same thing.
I don't disagree there. But this is not a case of hallucination, and an existing website is a signal, not a determinant, of the real situation on the ground.
However, you have made a very, very strong assumption that these targets were not carefully evaluated. One that does not seem to be present in TFA or any analysis that I've read. In fact, the article itself quotes those in the know who believe this should have been eliminated as a target.
SlinkyOnStairs 6 minutes ago [-]
> Yes, with hindsight, we can definitively know, and with sufficient time each target could probably have been positively ID'd, but there was precisely one mis-strike in 1000s of sorties, so this already is a low error rate.
This is giving them too much credit.
Hegseth has already shown himself to entirely disregard the notion of a war crime, even by the US military's own already controversial standards. The double strike on the boats in the Caribbean is literally the textbook example, in US military textbooks, of what not to do and of what constitutes a war crime.
This was no mistake. It was the obvious outcome of a pattern of reckless action.
btown 5 minutes ago [-]
I agree with everything you said, but it's also the case that a set of parameters was created that, instead of requiring multi-person validation of target validity and provenance, prioritized speed to provide decision makers with options.
This certainly doesn't absolve the person implementing those parameters, but it is equally the responsibility of the very top of the decision-making structure.
embedding-shape 16 minutes ago [-]
I agree with your overall sentiment, but how realistic is it? Israel and the US say they've been hitting thousands of targets (so in reality it might be hundreds, still a lot). How are they supposed to verify all of this?
> Humans should have been double and triple checking every target by other means.
How, practically, would this happen? The US/Israel don't want people on the ground, and people on the ground are exactly the only way you can actually verify stuff like this. Not every place in the world is on Google Maps or has a web presence at all, so the only realistic way to verify this would be to visually inspect it in person, something neither of the parties who started this war wants to do.
Even better, don't attack other sovereign nations that don't pose an immediate and critical threat to you, and this whole conflict could have been avoided in the first place.
But no, the president has to be involved in some sort of child-trafficking scheme, so pulling the country into a war seemed preferable to being held responsible, and now we're here, arguing about fucking details that don't matter.
free_bip 6 minutes ago [-]
The school literally had its own website. If the AI involved were as smart as the media hype machine makes it out to be, it would have found the website and marked the school as a non-target. It never even would have made it to human review.
ok_dad 12 minutes ago [-]
In this case, they would have discovered it was a school with a Google search, basically. There’s no excuse.
jdross 9 minutes ago [-]
I'm pretty sure this is the school that was on the corner of a military base, and the school building hit was previously part of the military base.
Tostino 10 minutes ago [-]
Or the vast satellite network we run. Pretty easy to see it's school children going in and out of the area.
keiferski 1 minutes ago [-]
Before it was the gods, then God, then Nature, and now AI. Human beings really have a fundamental issue with accepting responsibility for their actions.
From a certain angle, the entire industrial and computer age looks like a massive effort to remove all responsibility for our actions, permanently.
Lerc 36 minutes ago [-]
"the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target."
This article is the first I have seen mention of Claude in relation to this specific incident. There's been plenty of talk about AI use in warfare in general but in the case of this school most of the coverage I have seen suggested outdated information and procedures not properly followed.
FartyMcFarter 21 minutes ago [-]
It's definitely been reported before that Claude was used for Iran attacks, at the beginning of March or earlier:
https://www.theguardian.com/technology/2026/mar/01/claude-an...
Edit: Also, https://www.washingtonpost.com/technology/2026/03/04/anthrop...
OK. The US probably also used telephones and Diet Coke.
Nothing cited said that Claude was selecting targets or informing target selection.
You, today, can use Claude in Amazon Bedrock, and the way that works, if you want it to be this way, is that the code, model weights and whatever other artifacts are involved are run on Bedrock. Bedrock is not a facade over Claude's token-billed RESTful API, where Anthropic runs its own stuff. In the strictest sense, Bedrock can be used as a facade over lower-level Amazon services that obey non-engineering, real-world concerns: geographic and physical boundaries, which physical data center hardware is connected by what and where, jurisdictional boundaries, whatever. It's multi-tenancy in the sense that Amazon has multiple customers, but not in the usual technical sense: because you are willing to pay for these requirements, Amazon has sorted out how to run the Claude model weights as though they were an open-weights model you downloaded off Hugging Face, without actually giving you the weights, while letting you satisfy all these other IP, jurisdictional and non-technical requirements, in a way that Anthropic has also agreed to.
This is what the dispute with the Pentagon is about, and it's what people mean when they say Claude is used in government (it is used in Elsa for the FDA, for example, too). Under this arrangement Anthropic doesn't get telemetry, like the prompts, so it has a contract that says what you can and cannot use the model for, but it cannot prove how you use the model, which of course it can if you use its RESTful API service. It can't "just" paraphrase your user data and train on it, like it does on the RESTful API service. There are reasons people want this arrangement ($$$).
The vendor (Palantir) can use whatever model it wants, right? It chose Claude via "Bedrock". I don't know if they use Claude via Bedrock; ask them. But that's what they are essentially saying, and that's what this is about. Palantir could use Qwen3 and run it on its own datacenter hardware. Do you understand? It matters, but it also doesn't matter.
It's a bunch of red herrings in my opinion, and this sort of stuff being a red herring is what the article is mostly about.
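For concreteness, this is roughly what calling Claude through Bedrock looks like from the customer side, using boto3's bedrock-runtime client. The model ID and prompt are placeholders, and the call assumes AWS credentials and model access are already configured; the point is that the request goes to an AWS endpoint in a region the customer chooses, not to Anthropic's own API:

```python
import json

# Illustrative Anthropic-on-Bedrock model ID; check your account's model access.
MODEL_ID = "anthropic.claude-3-5-sonnet-20241022-v2:0"

def build_request(prompt, max_tokens=256):
    """Build the JSON body Bedrock expects for Anthropic models."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def invoke(prompt, region="us-east-1"):
    """Send the request to Bedrock in a chosen region (requires AWS credentials)."""
    import boto3  # imported here so the pure helper above works without AWS
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.invoke_model(modelId=MODEL_ID,
                               body=json.dumps(build_request(prompt)))
    return json.loads(resp["body"].read())
```

The region pinning is the "facade over lower-level Amazon services" point: the same call can be routed to a specific jurisdiction, and Anthropic never sees the prompts.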
Really fascinating article. Bits of bias here and there, like "The US military has been trying to close the gap between seeing something and destroying it for as long as that gap has existed" -- you can respond to seeing and understanding something without destroying it -- but it underscores, to me at least, how much denser the "fog of war" has become. The fog of media reporting in general. Those first few paragraphs felt like a breath of fresh air.
burnte 16 minutes ago [-]
When AI gets something wrong, it's the operator's fault, IMO.
machinecontrol 39 minutes ago [-]
Interesting article. Seems like AI-washing isn't just for layoffs anymore.
glouwbug 19 minutes ago [-]
What AI does best is remove accountability and ownership
amarant 24 minutes ago [-]
>The targeting for Operation Epic Fury ran on a system called Maven. Nobody was arguing about Maven.
Would it be poor taste to make a joke about Gradle being superior here? The dad in me really wants to make that joke...
20after4 8 minutes ago [-]
Replacing one java tool with another doesn't solve anyone's problems. If they'd only used Rust then lives would have been saved.
amarant 7 minutes ago [-]
Meh, that sounds like a cargo-cult to me ;)
shykes 26 minutes ago [-]
You can't have a serious discussion of this bombing without addressing the information warfare component. To this day we don't know what actually happened. Between the general public and the facts, there are many middlemen, all with their own distorting factor: the IRGC; the US government; western press outlets such as the Guardian; and the people quoted by the press.
IRGC is making claims that no other party can verify first-hand. Everything from the number of explosions, the extent of the physical damage, the number of wounded and dead, the number of civilians wounded and dead - these are all unverified claims and should be treated as such. Not only is the IRGC obviously biased and incentivized to maximize media pressure on the US and Israel: they are known for information warfare of exactly this nature. To take their statements at face value, and present them as established facts in the opening paragraph, as this article does, is journalistic malpractice.
Again, the basic facts on the ground are not known, yes all parties are projecting narratives with a certainty that we should all be suspicious of.
Without this stable foundation of knowing what actually happened, and why, the very premise of this article collapses on itself.
EDIT: the flurry of responses to this post illustrates the problem. It's difficult to even have a respectful, fact-driven discussion on this topic, because everyone is tempted (and encouraged) to rush to their political battle stations. Nobody wants to discuss information warfare, because they're too busy engaging in it. I think that's worrying and problematic. No matter which "side" you're on, it should be possible to distinguish what is known from what is not, and to implement basic information hygiene. Or do you think you are uniquely immune to disinformation?
20k 24 minutes ago [-]
Everyone acknowledges that the US killed a whole bunch of kids, including the US
shykes 6 minutes ago [-]
This is incorrect. The US government (via Secretary Hegseth) has only confirmed that they are investigating the incident.
What the US has NOT confirmed:
- that they are responsible for the bombing
- who hit the school
- whether the school was an intended target of US strikes
- whether it was struck intentionally
- that it was mistaken for a military site
- any casualty count
- whether there were civilians or children in the casualty count
The US has explicitly DENIED:
- That they deliberately target civilian targets
These are the facts about what the US has actually confirmed. We are all entitled to our opinion of what happened. But we should be able to acknowledge that they are just that: opinions. We don't actually know what happened. And I find it scary and dangerous that so many people, on hacker news and elsewhere, are acting like they do.
Sources:
- https://www.war.gov/News/Transcripts/Transcript/Article/4421...
- https://www.war.gov/News/Transcripts/Transcript/Article/4434...
> To this day we don't know what actually happened.
I feel like we know enough already. A school was bombed; whoever did it sucks big time and should be held responsible. Currently the US and Israel are waging a war against Iran, and one of them dropped the bomb(s). Unless Iran suddenly got their hands on American weapons, in which case that needs to be investigated too, because someone surely dropped the ball at that point.
The basics remain the same: investigations have to be launched to figure out where exactly in the chain of command someone made a mistake, and then the person(s) responsible have to be held accountable for their fuck-up.
Have those investigations been launched?
applfanboysbgon 19 minutes ago [-]
You are the one engaging in "information warfare", intentionally trying to spread doubt about an event that was confirmed by both Iran and US. What does it feel like to deny the murder of 150+ children out of nationalistic pride? Do you simply have no conscience? No sense of guilt, no concept of morality?
WarmWash 3 minutes ago [-]
I feel like an intellectual god to have been gifted the brain power to recognize both that 150 kids being killed is an awful tragedy, and that converting a building on a military base into a school is recklessly stupid and borderline purposely done as a trap.
shykes 4 minutes ago [-]
The US government (via Secretary Hegseth) has only confirmed that they are investigating the incident.
What the US has NOT confirmed:
- that they are responsible for the bombing
- who hit the school
- whether the school was an intended target of US strikes
- whether it was struck intentionally
- that it was mistaken for a military site
- any casualty count
- whether there were civilians or children in the casualty count
The US has explicitly DENIED:
- That they deliberately target civilian targets
These are the facts about what the US has actually confirmed. We are all entitled to our opinion of what happened. But we should be able to acknowledge that they are just that: opinions. We don't actually know what happened. And I find it scary and dangerous that so many people, on hacker news and elsewhere, are acting like they do.
Sources:
- https://www.war.gov/News/Transcripts/Transcript/Article/4421...
- https://www.war.gov/News/Transcripts/Transcript/Article/4434...
I think it's fair to treat things that the Trump administration and the Iranian military agree on as facts. If they were distortions that favored one side, we would see pushback from the other. Maybe there are distortions that somehow benefit both parties, but that seems unlikely. At minimum, then, this was a school, the Americans bombed it, and children died as a result.
shykes 2 minutes ago [-]
No. The only thing that the US government and IRGC agree on, at the moment, is that there was an explosion at the site of the school.
The US did NOT confirm that they are responsible for the bombing, or that children (or anyone) died as a result. This is a verifiable fact.
So, applying your own principle: the only thing you should treat as fact, is that there was an explosion at a school.
dede2026 22 minutes ago [-]
Holy gaslighting bootlicking
jameskilton 55 minutes ago [-]
Something that a lot of tech people, especially in Silicon Valley, seem to want to forget is that at every level you still have people making decisions. AI is suggesting, but someone, somewhere, still has to make the decision to act on that suggestion.
It's still people doing people things.
idle_zealot 52 minutes ago [-]
The immediate concern isn't really fully autonomous systems, it's that the nature and design of recommender/suggestion systems prompt humans to sleepwalk through their responsibilities.
They've now burnt through almost ONE THOUSAND of those
They cost $4 million each, so that's another $4 BILLION that has to be replaced too
Imagine several more months of that, or even continuing through 2029
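The back-of-envelope math in the comment above, using its own numbers (the unit cost is the commenter's figure, not an official one):

```python
# Replacement cost for the missiles the comment describes.
fired = 1_000          # "almost ONE THOUSAND"
unit_cost = 4_000_000  # $4 million each, per the comment
total = fired * unit_cost
print(f"${total / 1e9:.0f} billion")  # -> $4 billion
```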
tomasphan 20 minutes ago [-]
It’s a tale as old as time: start a war to support the military-industrial complex. Imagine a $4 billion investment in public transportation or parks. Every 10 years we could build a new city instead of bombing some kids overseas (whose siblings, fueled by hatred, then commit terror attacks on the West).
O3marchnative 17 minutes ago [-]
The Royal United Services Institute (RUSI) has an updated tally on defensive and offensive munition expenditures. It's likely not 100% accurate due to the sensitive nature of those figures.
> 11,294 munitions in the first 16 days of the conflict, at a cost of approximately $26 billion.
A detailed table is in the link below.
https://www.rusi.org/explore-our-research/publications/comme...
So far, they are not funded to do this for that long. They have floated a $200B bill to Congress, which made national news. It would start a huge, prolonged fight over the war and actually force them to ask Congress for permission to fight it (barring totally disregarding the constitution, which is still a possibility).
Unfortunately I can very well imagine several more months and years of this. We are still fighting a forever war that started in 2001. This is all a generation of Americans will know, and that is sad.
ceejayoz 28 minutes ago [-]
We'll run out long before 2029. The 850 fired so far is about a quarter of the entire supply.
A portion of a military base was converted to a school.
This was a tragic disaster waiting to happen from the very start.
This isn't an "AI or not" issue at all.
This was a choice to use children as human shields, and a choice to make war on a foreign sovereign nation.
Let's suppose the US accurately bombed the center of the military base, and the explosion destroyed the adjacent school and killed the children inside. Would that change anything of import? I don't think so.
By your logic it's the federal government's fault those 3000 people died on 9/11, they were being used as human shields.
paganel 10 minutes ago [-]
They (the Americans) should have also marked the schools on those military maps of theirs, and then they could have made a value judgment: is it worth killing some IRGC men in the middle of nowhere versus the international backlash of killing school-going children? It looks like they most probably didn't do that, probably because their "advanced" AI systems didn't bother with marking schools on their military maps.
Ylpertnodi 16 minutes ago [-]
American bases in Europe have schools on them.
Fair targets?
nahuel0x 25 minutes ago [-]
Israel and the US are bombing lots of schools and hospitals and civilian infrastructure; this is not the only case. This is intentional genocide, not a software/organizational/human error.
lukifer 21 minutes ago [-]
Sufficiently advanced negligence is indistinguishable from malice.
This is not to say that this administration is definitely not targeting civilians or infrastructure on purpose; just that the end result, and the moral culpability, are the same in either case.
ognav 51 minutes ago [-]
The Guardian carrying water for the AI industry. The distinction between Maven and Claude is futile. We get that Maven is Palantir, but it integrates Claude:
Going into a generic rant about anti-AI people, while missing sources and taking the Department of War at its word, is just extremely poor journalism from the newspaper that destroyed evidence on orders from GCHQ.
I hope this is a single "journalist" and that the Guardian has not been bought.
phillipcarter 41 minutes ago [-]
I assume you actually read the article and didn't just post this after a quick skim, yes? Because saying this:
> The distinction between Maven and Claude is futile
Doesn't make any sense at all when you read the article and understand what Claude actually does in this equation. From the article:
> Neither Claude nor any other LLMs detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.
The whole point here is that whether an LLM is involved or not is immaterial to the system as a whole, and it's a disservice to the public to focus on LLMs here.
niam 34 minutes ago [-]
The article you're responding to is making specific operational claims about Claude's (basically non-) relevance. I'd be interested to hear if you're directionally correct, but forgive me if I need more details from your counterargument than "but it integrates Claude".
sailfast 28 minutes ago [-]
This is not a correct take at all given the contents of the article.
CamperBob2 44 minutes ago [-]
Better than carrying water for people who blame inanimate tools for their own personal and professional failures.
This unknown Guardian contributor writes a missive against "Luddites" while using the typical AI-booster arguments that simply invert the usual anti-AI arguments.
Just like two five year olds: "You have a big nose." "No, you have a big nose."
We learn from this clown that anti AI people suffer from AI psychosis because they are reading WaPo and Reuters.
simonw 29 minutes ago [-]
Both the Washington Post and the Guardian articles agree that the system used here was Maven.
The key sentence in that Washington Post article appears to be:
> The Pentagon began to integrate Anthropic’s Claude chatbot into Maven in late 2024, according to public announcements.
> Anthropic and Palantir Technologies Inc. (NYSE: PLTR) today announced a partnership with Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to the Claude 3 and 3.5 family of models on AWS. This partnership allows for an integrated suite of technology to operationalize the use of Claude within Palantir’s AI Platform (AIP) while leveraging the security, agility, flexibility, and sustainability benefits provided by AWS.
491827-17182 19 minutes ago [-]
We know that Palantir used AI for target selection in Gaza:
We know that it integrated Claude and Claude was deemed to be a supply chain risk just before the Iran war. So it is not a huge mental leap to assume what it is being used for.
You won't get an answer from Hegseth. This Guardian "article" is by a Substack blogger who also does not have answers.
----------------
Maven is a tool for use in the middle of a war. When both sides are firing, minutes saved can mean lives saved for your side. Those lives, at least partly, balance the risks of hitting a bad target.
This was not a strike made in the middle of a war. If Maven was used in the strike that took out a school, it was being used as part of a sneak attack. Nobody was shooting back while this was being planned. Minutes saved were not lives saved. There should have been a priority placed on getting the targets right. Humans should have been double and triple checking every target by other means. This clearly didn't happen. The school was obviously a school that even had its own website. Humans would have spotted this if they had done more than make their three clicks and move on to the next target.
Whoever made the choice to use Maven to plan a sneak attack without careful checking made an unforced error when they had all the time in the world to prevent it. Whether it was overconfidence in their tools or a complete disregard for the lives of civilians that caused this lapse, they are directly responsible for the deaths of those little girls. I sincerely hope there are (although I doubt there will be) consequences for this person beyond taking that guilt to their grave.
I don't disagree there. But this is not a case of hallucination, and an existing website is a signal, not a determinant, of the real situation on the ground. However, you have made a very, very strong assumption that these targets were not carefully evaluated. One that does not seem to be present in TFA or any analysis that I've read. In fact, the article itself quotes those in the know who believe this should have been eliminated as a target.
This is giving them too much credit.
Hegseth has already shown himself to entirely disregard the notion of War Crime, even by the US military's own already controversial standards. The double strike on the boats in the caribbean are literally the textbook example in US military textbooks of what not to do, and that it is a warcrime.
This was no mistake. It was the obvious outcome of a pattern of reckless action.
This certainly doesn't absolve the person implementing those parameters, but it is equally the responsibility of the very top of the decision-making structure.
> Humans should have been double and triple checking every target by other means.
How practically would this happen? The US/Israel don't want people on the ground, and people on the ground is exactly the only way you can actually verify stuff like this, not every place in the world is on Google Maps or have a web presence at all, so the only realistic way to verify this would be to visually inspect it in person, something neither parties who started this war want to do.
Even better, don't make attacks against other soverign nations that don't pose an immediately and critical threat to you, and this whole conflict could have been avoided in the first place.
But no, the president has to be involved in some sort of child-trafficking scheme, so pulling the country into a war seemed preferable to being held responsible, and now we're here, arguing about fucking details that don't matter.
From a certain angle, the entire industrial and computer age looks like a massive effort to remove all responsibility for our actions, permanently.
This article is the first I have seen mention of Claude in relation to this specific incident. There's been plenty of talk about AI use in warfare in general but in the case of this school most of the coverage I have seen suggested outdated information and procedures not properly followed.
https://www.theguardian.com/technology/2026/mar/01/claude-an...
Edit: Also, https://www.washingtonpost.com/technology/2026/03/04/anthrop...
OK. The US probably also used telephones and Diet Coke.
Nothing cited said that Claude was selecting targets or informing target selection.
you, today, can use Claude in Amazon Bedrock, and the way that works is, if you want it to be this way: the piece of code and model weights and whatever other artifacts are involved, they are run on Bedrock. Bedrock is not a facade against Claude's token-based-billing RESTful API, where Anthropic runs its own stuff. In the strictest sense, Bedrock can be used as a facade over lower level Amazon services that obey non-engineering, real world concerns like geographic boundaries / physical boundaries, like which physical data center hardware is connected by what where / jurisdictional boundaries, whatever. It's multi-tenancy in the sense that Amazon has multiple customers, but it's not multi-tenancy in the sense that, because you want to pay for these requirements, Amazon has sorted out how to run the Claude model weights, as though it were an open-weights model you downloaded off Hugging Face, without giving you the weights, but letting you satisfy all these other IP and jurisdictional and non-technical requirements that you are willing to pay for, in a way that Anthropic has also agreed.
This is what the dispute with the Pentagon is about, and what people mean when they say Claude is used in government (it is used in Elsa for the FDA for example too). Anthropic doesn't have telemetry, like the prompts, in this agreement, so they have the contract that says what you can and cannot use the model for, but they cannot prove how you use the model, which of course they can if you used their RESTful API service. They can't "just" paraphrase your user data and train on it, like they do on the RESTful API service. There are reasons people want this arrangement ($$$).
The vendor (Palantir) can use, whatever model it wants right? It chose Claude via "Bedrock." I don't know if they use Claude via Bedrock. Ask them. But that's what they are essentially saying, that's what this is about. Palantir could use Qwen3 and run it on datacenter hardware. Do you understand? It matters, but it also doesn't matter.
It's a bunch of red herrings in my opinion, and this sort of stuff being a red herring is what the article is mostly about.
Would it be poor taste to make joke about gradle being superior here? The dad in me really wants to make that joke...
IRGC is making claims that no other party can verify first-hand. Everything from the number of explosions, the extent of the physical damage, the number of wounded and dead, the number of civilians wounded and dead - these are all unverified claims and should be treated as such. Not only is the IRGC obviously biased and incentivized to maximize media pressure on the US and Israel: they are known for information warfare of exactly this nature. To take their statements at face value, and present them as established facts in the opening paragraph, as this article does, is journalistic malpractice.
Again, the basic facts on the ground are not known, yes all parties are projecting narratives with a certainty that we should all be suspicious of.
Without this stable foundation of knowing what actually happened, and why, the very premise of this article collapses on itself.
EDIT: the flurry of responses to this post illustrate the problem. It's difficult to even have a respectful, fact-driven discussion on this topic, because everyone is tempted (and encouraged) to rush to their political battle stations. Nobody wants to discuss information warfare, because they're too busy engaging in it. I think that's worrying and problematic. No matter which "side" you're on, it should be possible to distinguish what is known and what is not; and implementing basic information hygiene. Or do you think you are uniquely immune to disinformation?
What the US has NOT confirmed:
- that they are responsible for the bombing - who hit the school - whether the school was an intended target of US strikes - whether it was struck intentionally - that it was mistaken for a military site - any casualty count - whether there were civilians or children in the casualty count
The US has explicitly DENIED:
- That they deliberately target civilian targets
These are the facts about what the US has actually confirmed. We are all entitled to our opinion of what happened. But we should be able to acknowledge that they are just that: opinions. We don't actually know what happened. And I find it scary and dangerous that so many people, on hacker news and elsewhere, are acting like they do.
Sources:
- https://www.war.gov/News/Transcripts/Transcript/Article/4421...
- https://www.war.gov/News/Transcripts/Transcript/Article/4434...
I feel like we know enough already. A school was bombed; whoever did it sucks big time and should be held responsible. Currently, the US and Israel are waging a war against Iran, and one of them dropped the bomb(s). Unless Iran somehow got its hands on American weapons, in which case that needs to be investigated too, because someone surely dropped the ball at that point.
The basics remain the same: investigations have to be launched to figure out where exactly in the chain of command someone made a mistake, and then that person (or persons) has to be held responsible for their fuck-up.
Have those investigations been launched?
The US did NOT confirm that they are responsible for the bombing, or that children (or anyone) died as a result. This is a verifiable fact.
So, applying your own principle: the only thing you should treat as fact, is that there was an explosion at a school.
It's still people doing people things.
They've now burnt through almost ONE THOUSAND of those.
They cost $4 million each, so that's another $4 BILLION that has to be replaced too.
Imagine several more months of that, or even through 2029.
> 11,294 munitions in the first 16 days of the conflict, at a cost of approximately $26 billion.
A detailed table is in the link below.
https://www.rusi.org/explore-our-research/publications/comme...
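The arithmetic behind those figures is easy to sanity-check. A quick sketch using only the numbers quoted in this thread (the unit cost, the ~1,000 count, and the RUSI totals are the thread's claims, not independently verified):

```python
# Figures as quoted in this thread -- not independently verified.
interceptor_unit_cost = 4_000_000        # dollars per interceptor, per the comment above
interceptors_used = 1_000                # "almost ONE THOUSAND"

replacement_bill = interceptor_unit_cost * interceptors_used
print(f"Interceptor replacement: ${replacement_bill:,}")        # $4,000,000,000

munitions_fired = 11_294                 # first 16 days, per the RUSI commentary
total_cost = 26_000_000_000              # "approximately $26 billion"

avg_cost_per_munition = total_cost / munitions_fired
print(f"Average per munition: ${avg_cost_per_munition:,.0f}")   # ~$2,302,107
```

So the quoted $26 billion works out to roughly $2.3 million per munition on average, which makes the $4 million interceptors a plausible (above-average) line item within that total.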
Unfortunately I can very well imagine several more months and years of this. We are still fighting a forever war that started in 2001. This is all a generation of Americans will know, and that is sad.
https://www.reuters.com/business/aerospace-defense/us-uses-h...
This was a tragic disaster waiting to happen from the very start.
This isn't an "AI or not" issue at all.
This was a choice to use children as human shields, and a choice to make war on a foreign sovereign nation.
Let's suppose the US accurately bombed the center of the military base, and the explosion destroyed the adjacent school and killed the children inside. Would that change anything of import? I don't think so.
By your logic it's the federal government's fault those 3000 people died on 9/11; they were being used as human shields.
This is not to say that this administration is definitely not targeting civilians or infrastructure on purpose; just that the end result, and the moral culpability, are the same in either case.
https://www.reuters.com/technology/palantir-faces-challenge-...
Going into a generic rant about anti-AI people, after missing the sources and taking the Department of War at its word, is just extremely poor journalism from the newspaper that destroyed evidence on orders from GCHQ.
I hope this is a single "journalist" and that the Guardian has not been bought.
> The distinction between Maven and Claude is futile
Doesn't make any sense at all when you read the article and understand what Claude actually does in this equation. From the article:
> Neither Claude nor any other LLMs detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.
The whole point here is that whether an LLM is involved or not is immaterial to the system as a whole, and it's a disservice to the public to focus on LLMs here.
https://www.washingtonpost.com/technology/2026/03/04/anthrop...
This unknown Guardian contributor writes a missive against "Luddites" while using the typical AI-booster arguments that simply mirror the anti-AI ones.
Just like two five year olds: "You have a big nose." "No, you have a big nose."
We learn from this clown that anti-AI people suffer from AI psychosis because they read WaPo and Reuters.
The key sentence in that Washington Post article appears to be:
> The Pentagon began to integrate Anthropic’s Claude chatbot into Maven in late 2024, according to public announcements.
As far as I can tell this is the public announcement - a press release from November 2024: https://www.businesswire.com/news/home/20241107699415/en/Ant...
> Anthropic and Palantir Technologies Inc. (NYSE: PLTR) today announced a partnership with Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to the Claude 3 and 3.5 family of models on AWS. This partnership allows for an integrated suite of technology to operationalize the use of Claude within Palantir’s AI Platform (AIP) while leveraging the security, agility, flexibility, and sustainability benefits provided by AWS.
https://www.972mag.com/lavender-ai-israeli-army-gaza/
We know that it integrated Claude and Claude was deemed to be a supply chain risk just before the Iran war. So it is not a huge mental leap to assume what it is being used for.
You won't get an answer from Hegseth. This Guardian "article" is by a Substack blogger who also does not have answers.