So will there be droves of people canceling their Claude subscription too?
None of these companies are clean, and I think it's hilarious that HN and the rest of SV have been duped by Dario. He's playing the game better than Sam is, imo. Nothing Dario has said indicates he is regretful about their partnership with Palantir or any of the stuff they've done with the DoD in the past 2.5 years.
Edit: this Washington Post article seems to be the original source: https://www.washingtonpost.com/technology/2026/03/04/anthrop...
> None of these companies are clean and I think it’s hilarious HN and the rest of SV has been duped by Dario.
Exactly. Anthropic is to the rest of the AI companies as capital-loyal Democrats are to capital-loyal Republicans of the uniparty.
The choices we are given are merely an illusion. The notion of "lesser of 2 etc evils" is fundamentally flawed.
https://en.wikipedia.org/wiki/On_Bullshit
" Frankfurt determines that bullshit is speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn't care whether what they say is true or false."
This quote comes to mind whenever Dario Amodei opens his mouth.
I remember considering applying for Anthropic about 1.5 years ago because they seemed like a somewhat more ethical AI company. Then I learned about their Palantir partnership and realized it was all just a clever marketing gimmick.
I'm really curious to understand why this was done and why it was necessary. I cannot imagine the AI was used to identify targets without any base information. ie, I imagine the military already had a list of targets and locations. How would the AI know from satellite imagery that something was a military target.
If that's the case, why did they need help selecting targets? I can only imagine that the military bases and targets are well known and well studied. What would they have actually needed AI assistance for?
This is a wild guess. I'm working with GIS and Claude has proven to be extremely savvy. I can see operators throwing hundreds of layers at it and it coming back with a "there's a possible military installation here." It's the same tech that is used to find unregulated pools, measure the density of parking lots, or spot Nazca lines, but much more on demand for specific purposes.
Just to clarify, I don't condone the use of AI for guessing targets, but I think that's what may be going on here.
You may be grossly overestimating the amount of thought that went into this:
"For instance, Israel has bombed a park in Tehran called "Police park." It has nothing to do with the police."[1]
1. https://x.com/tparsi/status/2029555364262228454
Just curious, for what exact purpose are you using Claude? Does it analyze geographic data for you or images or something? Or does it help you writing code that does this? Or does it create visualization for you?
All of those, applied to urban development. It analyzes GIS layers and validates and extracts data that goes into text reports, a huge time saving since that mechanical extraction used to be a bunch of manual steps on each project. I used it to develop a heavy GIS application that hubs many public data providers. And it does help us create the maps/visualizations from that data, again a sort of mechanical transformation. None of these are groundbreaking, but when you stack all of them you end up with big time savings.
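To make the shape of that workflow concrete, here is a minimal sketch using geopandas and the Anthropic Python SDK. The file name, column names, model string, and prompt are all illustrative assumptions, not the actual pipeline described above:

```python
import anthropic
import geopandas as gpd  # assumed: the layers live in files geopandas can read

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load a hypothetical parcel layer and reduce it to a compact tabular sample;
# sending raw geometries to the model would mostly waste tokens.
parcels = gpd.read_file("parcels.geojson")
sample = parcels[["zone", "land_use", "area_m2"]].head(200)  # cap for context size

prompt = (
    "You are helping prepare an urban-development report. From the parcel "
    "records below, total the area per land-use class and flag any record "
    "with a missing or inconsistent zone code. Reply as a JSON object.\n\n"
    + sample.to_json(orient="records")
)

message = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```

The same pattern (reduce a layer to a compact table, ask for a structured extraction) repeats per layer, which is where the stacked time savings come from.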
Interesting. I wonder if someone will be guest speaker at one of the podcasts in 30 years time and talk about this kind of stuff
Throw Claude, or any other very good LLM, a bunch of data and it'll give results in seconds or minutes. It's very similar to how, when I run into a stubborn bug, I use a tracer to capture everything that's happening. Before, it would sometimes take me hours and several scans before I found the issue, and maybe a bit more time to fix it depending on the severity. Now I just throw that full trace at Claude, and so far, 100% of the time it finds and resolves the issue in a few seconds. If the problem or question can be fixed or answered by what's in the given data, I've found Claude to be very good at finding it. And if you haven't given it the data, it's also very good at hypothesizing and telling you where and how to get relevant data so it can prove its hypothesis.
Not much different from giving it a bunch of satellite, traffic, etc data of a target and asking it to find areas to prioritize based on movement of particular personnel/equipment, etc.
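For the trace-to-LLM debugging loop described above, a rough sketch could look like the following; the script name, choice of tracer, truncation size, and model string are assumptions, not the commenter's actual setup:

```python
import subprocess

import anthropic

# Capture a full execution trace of the failing script. Python's built-in
# trace module is used here; any tracer's text output works the same way.
result = subprocess.run(
    ["python", "-m", "trace", "--trace", "buggy_script.py"],
    capture_output=True,
    text=True,
)
trace_tail = (result.stdout + result.stderr)[-50_000:]  # keep only the tail

client = anthropic.Anthropic()
reply = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": "This is an execution trace of a script that misbehaves. "
                   "Identify the most likely root cause and propose a minimal fix.\n\n"
                   + trace_tail,
    }],
)
print(reply.content[0].text)
```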
seems like you're missing the forest (this being a matter of life and death) for the trees (you found some bugs).
My guess is to keep the US economy going.
You mean the immediate personal macroeconomy of a very specific subset of the US political class?
Yes unfortunately that’s how the economy works
I am guessing this is a case where a pre-created narrative ("killer AI bots") is force-fitted onto a more nuanced instance of reality. These are probably analysts using Claude, which has an MCP server into some half-assed Palantir intelligence product. This is then pushed by non-technical journalists as some doomsday immoral non-human killing machine.
This is similar to the story behind Cambridge Analytica, which was essentially a company using crapware Facebook API software to harvest voters' data, but the media version of it was "tech companies can change election results", which echoed a season plotline of House of Cards (and older narratives of shadow forces).
Google the Dexter Filkins article in The Atlantic on targeting with AI.
You have 1,000 launches going off from Iran in a 24-hour period. Do you have hundreds of people look at pixels, or prompt the AI to track them back to their sources?
Note that the headline isn't about how effective hitting those targets was, or how successful at achieving its aims the bombing campaign was.
They hit 1,000 targets in 24 hours. And yet, a week later, the Iranian regime is intact, American allies are still under constant bombardment, interceptor stocks are running low, and half of America's long-range, high-altitude transportable radars have been destroyed.
This looks like shooting the broad side of a barn, and then painting bullseyes around every bullet hole.
Even if you don't care about the needless human suffering the US has caused from this operation, this conflict threatens global stability because of oil supply disruptions, and if the US keeps this up it could get quite bad very quickly.
I worked briefly in defense tech, and there is a huge blind spot in this field. While I worked with a ton of thoughtful, ethical, and talented people from the military, there is a real blind spot when it comes to support of the "warfighter." It is certainly noble and worthwhile work to protect soldiers from harm through technology, but I got the sense that some people (especially the tech people who were never in the military) didn't think enough about the ethical concerns when dealing with people attached to the US's "enemies." And further, what about when the US itself is the aggressor? While active warfighters have to follow the chain of command, companies can and should apply ethical constraints, but they often don't, because DoD contracts are lucrative and (especially if you're not a prime) hard won.
I've had a lot of fun playing with Claude 4.6, but it is entirely unacceptable that this technology is being used in this conflict with Iran. I will cancel my account once this month's subscription is up in 2 weeks. The US is the aggressor here, that is certain. Support of this conflict as a private company that supposedly is oriented toward ethics is extremely illuminating.
Now with that, I have thought a tremendous amount about whether someone like Dario could even steer the ship away from support of a conflict like this at this point. We are all susceptible to market forces, and companies like Anthropic need as much revenue as possible to be able to maintain themselves and grow given the cost of training. There is certainly an argument to be made that if he did so, he might lose confidence of investors and lose control entirely. This begs the question: is shareholder/capital optimization the best way to organize our society?
> We are all susceptible to market forces, and companies like Anthropic need as much revenue as possible to be able to maintain themselves and grow given the cost of training.
There's also the consideration that if they come across as too opposed to US military support, the administration can and will make things extremely painful for them. I suspect they've actually gotten off pretty easy just being named a supply chain risk (so far). Imagine the backlash if they'd, for example, accepted contracts with China, or even so much as hinted that they weren't open to most military use cases.
As soon as you accept "we need to survive to do good," survival becomes the priority and the good becomes negotiable. And so every compromise reduces their ethical position a little more.
Living in accordance with an ethical framework only matters when that decision is hard, and there are clearly consequences to doing so. But Anthropic has clearly forfeited their right to claim the moral high ground. Their posturing against OpenAI rests on a false dichotomy: they are arguing over a carve-out that is incredibly minor compared with their broader exposure.
I think Anthropic, with all of their babbling about alignment, could avoid contracting with the military at this stage and still not actively contract with China.
I've found that reading odds and ends outside of my own academic, professional, or theoretical interests nets some interesting things sometimes.
At one point I got curious about how the US military thinks about insurgencies, so I read their manual on how to fight them. As someone holding a lot of dissident views in the US it was pretty interesting.
One thing I took away was the feeling that at no time did the manual ever define what an "insurgent" is, beyond whoever the US government tells them the insurgents are.
So you have a situation where, ultimately, there's no external reality testing, and reality is simply whatever "reality" is as defined by the command structure.
I know that sounds overly simple: of course the military follows a chain of command, unquestioning right up to its civilian commander in chief.
Why I feel that is a useful observation is that, to your question, people are constantly deferring their ethical judgements. And I suspect there is some cognitive bias in play that allows folks to feel that deferral can't happen across all these systems.
In the case of businesses, it is to "the market", which is reactive and as such doesn't have "judgement"; and even if it did, its needs aren't "human", so relying on it as a human seems dangerous. So to your question, my answer is usually "probably not." And further, unless people stop deferring their judgments to the imaginary of the spectacular market, eventually shit's gonna break.
In the case of the military, we can see what happens when radically nihilistic (pedophilic and sociopathic media personalities) are put at the helm.
My larger point, though, is that our usual assumption seems to be that all these other folks are likely to exercise their faculties to test out reality and hopefully, when it doesn't line up with that reality, push back and prevent dumb shit from happening.
But all these systems are set up to prevent that from happening, so it doesn't seem at all strange to me that they are starting to break in the ways they seem to be failing.
Is that what targeted the school? Or was that intentional?
> Witnesses and an education ministry official said that the school was located on a compound that was a base for the Islamic Revolutionary Guard Corps until about 15 years ago.
https://www.nbcnews.com/world/iran/iran-school-strike-us-mil...
In this case it seems plausible that the military would have an outdated database, and that an LLM would have "known" it wasn't a base anymore, assuming the LLM was trained on documents/maps with this up to date information.
> In this case it seems plausible that the military would have an outdated database, and that an LLM would have "known" it wasn't a base anymore,
Ah, yes, because an LLM has never made a mistake ever. It certainly could not have made a mistake. No way.
Also, all databases are perfect. It certainly couldn't have been fed conflicting info. No way.
You don't know what plausible means.
The school was part of the base a few years back, there is still a guard tower right next to it, and there are tons of social media pictures of the military doing military things inside the school. No need for AI to make that mistake.
some minor hallucinations
There are not enough ethics courses being taught in schools and universities.
https://calebhearth.com/dont-get-distracted
I've witnessed plenty of derision towards liberal arts and humanities in the tech industry and here on HN. For some reason a lot of industry workers believe that STEM is all that matters and somehow their work exists wholly separate from its consequences and the context in which it was created.
Peter Thiel's degree was in philosophy
If you ask me, it's the job insecurity and the hustle culture we have where ethics get lost.
My generation feels more replaceable than ever, and this leads to ethics being lost. Ethics can be diluted very easily if you make people worry about food on the table.
I am in school, and ethics aren't a concern when we discuss things; I am not sure treating it as a subject would help either. Perhaps, but I do feel that at some point it comes down to people feeling a sense of job security.
As a society, we probably also have to do something to reward ethics, especially when not following ethics sometimes leads to so much financial gain.
The way I see it, people sometimes start doing immoral things because they have to put food on the table, and then greed takes over.
That being said, I am not sure how job insecurity and this culture can be fixed by a single measure; I just wanted to point out that there's more nuance to it. The only way to meaningfully solve it is by having discussions on this topic and making actual change happen.
We feel like we grind ourselves through our studies to get a job, and even when we do, many of us are still not able to afford a house :<
I’m guessing there won’t be a lot of Palmer Luckey fans among the commenters here
What does this mean? I know he has a weaponry startup, Anduril, that has some Palantir financial ties. But I don't see the connection in the article.
I think they mean a general “anti war machine” sentiment in this forum. +1 to that sentiment from me, we are going quickly from “clean nuclear power” to “big bombs!”
I see. Thanks.
I'm actually a fan of his and on his side. But I also definitely do not believe these things are accurate enough to perform this kind of task yet.
You are absolutely correct I shouldn’t have done that.
> According to the report, the AI tools are also used to evaluate the outcomes of strikes after they are initiated.
Can I get off this train, please?
Whoa, near the top of the front page, and then immediately pushed to the second page of hacker news. Yikes.
This would be a really interesting topic to explore from a pure intellectual curiosity perspective. How/why/when this could be used, what could be the ramifications, do other entities also do this, would using a "dumber" but non "safety" aligned model work differently, etc. There are so many things to think about here. Unfortunately it seems that most of the commenters are going for the political / emotional jabs. Oh well.
This article is a total overstatement designed to boost stock prices and none of the actual users can counter the claim because it would require revealing classified information.
This is the same kind of claim you’ve all seen before about AI systems doing something amazing and it’s really just a bunch of people sitting in a call center in a third world country controlling the system remotely.
Only in this case it’s a bunch of senior airmen and staff sergeants sitting in an intel shop doing all the work. Sure, Palantir made a UI but it just plain sucks. And Claude probably fixed some typos in the targeting packages. But let’s not believe that either system was influential to target selection. CENTCOM created a similar number of targets at the beginning of the Syrian civil war before any of these LLMs existed and it took a similar amount of time. We ended up not striking them, but the plans were made after Assad used chemical weapons. All the fixed locations in Iran had packages written and sitting on the shelf before Trump was even elected the first time. The AI in this war added basically no value.
Any claim that Palantir did something useful for the government should immediately be viewed as suspect. I’ve used their software, and it sucks. I cannot understand how they got such big contracts to make a shitty UI that poorly integrates other systems’ data.
Do any end users actually use the Palantir AI? I've always assumed that the big customers of it would just put their own custom UI on it that was actually usable.
This is the AI we're using to kill people now, surely it won't make any mistakes or confidently target civilians on accident: https://youtube.com/shorts/WxbHtYzBnvo?si=xh4kda_DuNvHFx0L
They literally bombed a school full of girls.
It's irrelevant whether AI "makes any mistakes" or not, there are no morals either way, and people just seem to be desensitised to all of this
"You're totally right, that was a hospital not a terrorist hideaway! My mistake!"
A police park was hit because it had the word "police" in it.
“I have looked up the secret list of nuclear installations like you asked, and here they are:”
Earlier: https://news.ycombinator.com/item?id=47286236
https://news.ycombinator.com/item?id=47248385
The actual article says it helped with planning.
How are they ensuring 0 hallucinations?
What makes you think they have ensured anything?
Apart from anything else, Anthropic don't want to be used for this.
> Apart from anything else, Anthropic don't want to be used for this
what did they think a Palantir contract would be for?
Does it matter what they thought? Anthropic held to a set of T&Cs in a way that had the Pentagon threatening to declare Anthropic a supply chain risk. Anthropic either didn't care or thought it was a bluff; it wasn't a bluff, and they're now suing over being called a supply chain risk.
It would have been a really weird hill to risk dying on if they didn't actually care. (Not that "this is weird" is enough by itself to reach any conclusions, given that a lot of newsworthy decision making in the world right now is of a quality that suggests it was done by an LLM, perhaps with as much human involvement as changing a font or typing "continue.")
This is basically impossible. The answer is that the hallucinations probably led to mass deaths
But how? The models are thick as shit. They can regurgitate existing knowledge, but it’s not like Iran and its military installations are publicly documented. And you can’t trust any decision it makes if you feed it the information, because it just makes up answers. It has no actual intelligence.
1k targets and a few hundred school girls.
I wonder what the people who cancelled their ChatGPT subscriptions and switched to Anthropic are thinking now.
All these providers are the same.
Anthropic objected to being used to build autonomous weapons and domestic surveillance. They didn't object to being used to hit schools in Iran.
OpenAI didn't object to anything.
They're all bad, but some are worse than others.
Who here believes that saying no is possible?
I'm convinced that Claude is being used to run a hype train on HN.
That's option A; option B is pure halo effect, i.e. Claude is so good that people misascribe positive attributes to Anthropic.
Otherwise it really is mind-boggling to see people laud Dario's posts, which are tone-deaf to Europeans at least.
I'm European and would like to know what you mean wrt to Europeans.
How much do you value Anthropic's stance that they want to contractually block domestic mass surveillance of Americans?
Versus OpenAI's stance that they want to block domestic mass surveillance of Americans with monitoring in the loop.
Given that you're not American, and neither company has any issue with mass surveillance anywhere else on the planet.
Or that Anthropic sees itself as similar to the US Department of War.
You're being downvoted, but if Anthropic is going to deploy Claude for decision making in target prosecution, it is clearly a "Caesar's wife must be above suspicion" moment. Association is guilt unless proven otherwise.
Good catch!
Can we get a better source?
That website is broken on mobile and I can’t even scroll to see the source
I can’t be the only one who couldn’t see it on ff/safari
the only thing I see is the cover picture and the 3 sentence "AI Powered Summary"
AI target-selection systems have become a loophole that removes the link between the decision-maker (who should bear responsibility within the military bureaucracy) and the actual action taken. Israel became the implementer of this model in Gaza (Palantir was most likely part of this system as well).
Let us recall what former Israeli Chief of Staff Herzi Halevi reportedly conveyed about a meeting with Netanyahu. The IDF said they had struck 1,400 targets, yet Netanyahu reportedly slammed the table and angrily asked why it wasn’t 5,000, and said “bomb everywhere and destroy the houses.”
For the military bureaucracy, the fact that AI can speculate or generate potential targets (which is entirely possible with LLM systems) becomes a convenient mechanism that, at least on paper, allows them to distance themselves from responsibility.
Now let’s look at the statements made by Anthropic and Hegseth:
https://www.anthropic.com/news/where-stand-department-war
https://x.com/SecWar/status/2027507717469049070
From Anthropic’s own statement, we hear that they have actually been quite closely partnered. In Hegseth’s tweet we see:
“Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
This shows that Anthropic is still currently being actively used by the Department of War.
My view is that Anthropic and its investors eventually realized that the American war machine will use their technology in reckless ways, and that this will certainly create a massive PR disaster or, in an ideal world, even legal consequences. That realization likely pushed them to adopt what they now frame as a more “humanitarian” position. We have already seen incidents where roughly 180 children were killed due to faulty targeting, assuming and hoping it was not intentional.
This is media in action. Confuse and gaslight folks so that they come to accept people like Trump and Elon Musk.
People in the HN comments would rather wait for Iran to get nuclear weapons and detonate them on their neighbors than use AI to do surgical strikes on Iran to take out these programs.
So surgical they've included a school and a boat 1000 miles away returning from international exercises.
They didn't intentionally try to attack a school, and also they gave the crew of the ship a chance to evacuate.
How are either of these statements evidence that the strikes have been surgical?
What you are saying here is that the US is not capable of hitting only the stuff it wants, that it hits stuff completely unrelated to nuclear capabilities or even the current theater, but that it kindly provided notice before murdering people?
If it has good enough capacity for the military it is very possible we are receiving dumbed down / nerfed versions of Claude.
Your hypothesis: you and I don't have a good enough version of Claude. My hypothesis: you and I don't have missiles.
good enough capacity as in killing >100 kids in school