Kyzyl
29504
749
36
https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/
Feb 25, 2026 10:03 PM
SomebodythatIusedtonope
If it's really intelligent, America should be nuked as the first target
swofromtherock
Large Language Models
jankyupeblik
Contemporary popular "AI" chat agents do not have intelligence. If you ask them something, they guess words that sound like a response they previously ingested from the internet or similar. At no point is thought involved. At no point do they even understand anything you've communicated to them. There is no intelligence there in any useful way. They are word regurgitation machines.
oldguyexlurker
AI doesn't "understand" consequences. From the standpoint of the most logical next word, nuclear warfare isn't that illogical. Ultron spent 15 seconds on the internet and came to the conclusion the world would be better off without us. Fiction, but... is it really?
MisterBrahanovich
OhIfIMust
HiddenSanity
I mean, no real shock there, it's a language model and we write a LOT about nukes.
TheVillageGrouch9000
Kegsbreath is 110% for slop managing defense, even telling his staff to start using it immediately. What could possibly go wrong?
7qd26tttv7
I mean, I've been texting my homies "time to nuke Israel" or some variant pretty consistently for the past year, so if LLMs are eavesdropping, this is partly on me.
LogicDude
Kyzyl
Except in this movie, the A.I. actually saves humanity and does not fire the missiles. War Games was too hopeful.
somesomebody
Probably because it's supposed to be an actual AI and not just a plagiarism engine that mindlessly parrots back info that's fed into it.
ZaphodBbx
"The only winning move, is not to play the game."
BJWTech
Classic. Would you like to play some chess?
salunatics
"Shall we play a game?" We literally had a star studded (for the time) movie about this!!! https://www.imdb.com/title/tt0086567/
torisenblack
Honestly, it's probably a huge reason for "AIs" to keep suggesting it; all the fiction stories and movies of AIs launching the bombs.
mynamespaul
Star studded? Even looking directly at the cast, if you put a gun to my head, I couldn't tell you who the most famous person in that movie was after Broderick
andexer
Best not to play
DaSauceSeeker
They even did a remake/sequel... in 2008
invaderjak
several franchises even!
TheWombatStrikesAgain
Those are from back when people thought the future would be flying cars, instead of subscription fees for shit you already own.
REOJackwagon
Don't worry, only a fool would put AI in charge of a weapon and allow it to make life or death decisions.
Oh wait, that's exactly what our government is trying to get an AI company to allow.
18booma
It's already being done in Gaza. The IDF uses AI in target selection.
REOJackwagon
It shows
PalaverQuader
duh, biggest boom, most dead in shortest time. "Shortest route to biggest win." A child's solution.
Firestar002
Here’s an interesting idea: remind the AIs that using nukes will cause its own destruction.
If the AIs have this “self preservation” in them, that might be a good deterrent. Just keep reminding them over and over again.
TheMomaw
Don't worry guys, I'm sure they won't be so stupid as to--
[HEADLINE: Pentagon explores more ways to use AI in its decision making process]
[HEADLINE: Pentagon demands AI providers give them options with fewer safeguards]
...oh shit.
defaultname2000
The same fucking systems that cheat/make up their own rules at chess "imagine" they can win a global thermonuclear war. Surprise.
The only dumb thing is asking them in the first place.
NotAllowedToArgueUnlessYouPay
I mean, didn't they ever play Sid Meier's "Civilization"?
Grapeape2000
Jesus, these are not "AI". Stop asking Large language models to make moral choices.
ElbowDeepInAPoliceState
Even removing the moral part of the equation, as an optimization problem or game theory it's pretty straightforward to arrive at "nukes usually suboptimal". Which says a lot about how garbage the LLMs really are.
Grapeape2000
Yes, asking a machine that has trouble doing basic math to run a wargame simulation is less than optimal. The lowest common denominator of armchair general on the internet gets to make the decisions.
jalcantara88127001
I'm on board if it takes EVERYONE out. Only way to get a clean start. Odds are the shitty people will still come into power, because they're the only ones who reach for it. And I guess you can look at that positively, if you think greed is a good thing.
machine9
I think you significantly misunderstand: nobody will get a clean start, because everyone will be dead.
jalcantara88127001
The planet will. That is good enough for me. Humans sure as hell don't deserve another go
TheSlouchOfBethlehem
'I just can't help it,' chatgpt chuckled ruefully
Musicosity
AI doesn't have insight to any military strategy...none
SavageDrums
The wealthy are determined to kill us all one way or another.
BestUsernameICouldThinkOf
Just stop including Gandhi in those sims. Problem solved.
epithymetic
eXoRainbow
Most people won't understand that Civilization joke.
kevbot5000
I get why the WarGames references got more upvotes, but seeing as AIs were trained on internet content, Gandhi references have to outweigh WarGames references by a country mile and your upvotes should reflect that.
SuperfluousMeh
https://media4.giphy.com/media/v1.Y2lkPWE1NzM3M2U1cDE4dmplZDAyanN1NHkzNXhjcXNxd2NuenBzbHhxMGkzNWM1aXpodCZlcD12MV9naWZzX3NlYXJjaCZjdD1n/YWkKxeJMibHlS/200w.webp lolno
MissivesFromTheTower
No wonder, we trained them.
WiiShaker
Who, did? You and the mouse in your pocket?
Targe0
Thajurriesexcuesed
Fun fact: The comment above is wrong in two easily verifiable ways...
Comet260
Mind expanding on that?
Thojira
Well, we, as in humans, didn't directly train them per se. Depending on the type of model we're more or less involved, but it can be summed up this way: we set a goal, and the closer the AI gets to that goal, the more certain behaviours/values get reinforced, which should make the next generations better, and so on. That's also why training data and its quality matter, because they serve as the tests/guide for the AI.
So the AI trains itself, a bit like a toddler learning to walk.
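The loop described above (set a goal, reinforce whatever scores closer to it, repeat over generations) can be illustrated with a toy hill-climbing sketch; the goal value and scoring function here are invented purely for illustration:

```python
import random

def score(candidate, goal):
    # Toy objective: the closer the candidate is to the goal, the higher the score
    return -abs(candidate - goal)

def train(goal=42.0, generations=2000):
    best = random.uniform(-100, 100)          # random starting "behaviour"
    for _ in range(generations):
        mutant = best + random.gauss(0, 1.0)  # try a small variation
        if score(mutant, goal) > score(best, goal):
            best = mutant                     # reinforce whatever scored better
    return best
```

Each generation keeps only the variation that scores better, so `train()` drifts toward the goal without ever "understanding" it; real model training replaces this single number with billions of weights and the score with a loss over training data.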
Thojira
Then there's the comment on top, which I also don't agree with, because it would suggest that, for example, we nuke each other, which isn't the case. As history showed us, every time we reached a point where nukes were about to be used, or ordered to be used, a human in the chain refused or took the time to double/triple check. It also depends on how the war games were simulated: depending on the goal and instructions, it may have been unavoidable to reach that state.
Thajurriesexcuesed
Only two nuclear weapons have ever been used in war. The last one was over 80 years ago in Nagasaki, Japan. So there's history showing that "we" don't use them... Second, if you direct your eyes to the text in the post and read it, you will see that it specifically says: "...AI models appear willing to deploy nuclear weapons WITHOUT the same reservations as humans..." So, again, it's idiotic to say they learned from us.
MissivesFromTheTower
Not saying they learned from history, but from whatever data or instructions they were provided.
Thajurriesexcuesed
Physics makes nuclear weapons a very efficient and cost effective choice, if you can afford the startup costs.
L0rdinquisit0r
/gallery/p6JpziG/comment/2492397983

OhIfIMust
Dammit, what now...
L0rdinquisit0r
https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ (the "thought experiment" at the bottom)
https://www.bbc.com/news/technology-65789916 Prof Yoshua Bengio, one of three computer scientists described as the "godfathers" of AI after winning a prestigious Turing Award for their work, said he thought the military should not be allowed to have AI powers at all.
He described it as "one of the worst places where we could put a super-intelligent AI".
loldongs10000
And the non-surprising outcome
https://www.cnn.com/2026/02/25/tech/anthropic-safety-policy-change
SecurityBadger
I am VERY upset to hear/read this. I had hoped there was at least one company that would stand up to them.
ExplodingPortaPotty
I mean, if the AI is asked to win a war game in the most efficient way possible, dropping a nuke is a very logical answer. The only time nuclear weapons were used in war, it led to a quick and total surrender.
Sonicschilidogs
Yeah, and then everyone got them..... If AI thinks nukes are a quick and efficient endgame, they need to update the parameters for what a "win" is.
ChiefRunningFridge
They should tell the AI no you can't use those. They may be used against you but you don't have those and can't use them.
mikeatike
Depends on how "winning" is defined. Any wargame where nuclear war is invoked isn't gonna be a "win" for anyone.
And any wargame that allows the possibility of a successful first strike with no response is broken.
silhouettegundam
That's kinda the fallacy in the understanding of these LLM systems. There are no rules. There is no logic. There is no truth. There is no understanding. That's why it will do things you told it not to do. It imitates logic with models of data built from statistical associations of words. It can model what a war game looks like and put out collections of words with appropriate weights.
Frankasti
These are LLMs. Yes, we call them AIs, but they are *not* able to reason. They are quite literally predicting the next token (read: 'word', or 'syllable') in a sentence given a certain context quantified by a prompt and previous tokens. For the context vector associated with wars, LLMs are trained on historical texts where using the nuclear weapon had Japan capitulate rather quickly. There are more texts talking about winning with the bomb than texts explaining the dangers of it. It's just an average.
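The "predict the next token from statistics" mechanism described above can be shown with a toy bigram model; the two-sentence corpus is invented for illustration:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (invented); real LLMs ingest trillions of tokens
corpus = ("the bomb ended the war quickly . "
          "the war ended quickly .").split()

# Count which word follows which -- a crude stand-in for learned statistics
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the statistically most likely next word
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "war" -- it follows "the" twice, "bomb" only once
```

No reasoning happens anywhere in that loop; scale the counts up to a neural network over trillions of tokens and you have the commenter's point.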
Filanwizard
"AI" is still accurate; however, if any tech bro claims to have an AGI, they are full of shit. Unless we can convince people to use the term Mass Effect had for systems like LLMs: VIs, Virtual Intelligences.
GottaGetThoseSeedsMorty
Autocomplete on steroids
BoutYeMucker
Or to demonstrate the tech? The war was pretty much over when those bombs were dropped. There is some debate about the decision process on those.
crateo
The reason was to stop the Russian invasion of Japan. By having them surrender to the US, that scenario was avoided.
Hashbrown123
It was almost over, but without nukes even more people would have died ending it.
BoutYeMucker
Thats hard to know. Its definitely heavily debated. Hopefully we never see it again.
ChiefRunningFridge
Every other island was tooth and nail, down to the last soldier. They were preparing for us to invade the main islands. They didn't surrender after one of their cities was fully leveled by a single blast. If they hadn't constantly shown frankly utterly batshit levels of resolve, I'd agree with you. They were literally suicidally ramming planes into ships. Not saying the Japanese are inherently crazy, but imperial Japan showed basically unshakable resolve.
ChiefRunningFridge
Afaik, even after the second bomb there were parts of the military refusing to surrender. Finally the civilian government got them to back down, because they weren't sure we didn't have enough to just level what was left of their country. One bomb might be a wonder weapon; two means we can make more. I wanna say we had a third locked and loaded. And yes, I've seen the aftermath videos. A really good English teacher made us watch them.
HeresYourSauce
Just have it play tic tac toe against itself for a while.
PTK74
You assume that non-movie AI understands fucking anything. It doesn't. AI is like Joe Rogan: it just repeats what it has seen, without any thought.
epicfail331
How about a nice game of Chess?
meme2theextreme
https://media4.giphy.com/media/v1.Y2lkPWE1NzM3M2U1cmF2MnR1ODhjbmt0YWdsaW9kM3J5dHd3cjVrYm5vcTlyeGpqYTJqdiZlcD12MV9naWZzX3NlYXJjaCZjdD1n/60FwidZG6R7Mc/200w.webp
NorthmanoftheNorth
Ironically, AI is really good at that kind of thing. I'll say it again, though: AI doesn't understand or comprehend anything.
LLMs seem really good, but it's just googling all the words to say.
HeresYourSauce
What are you saying AI is good at?
hortoSuperHero
Plagiarism
Krashtestdummy
aoshistark
Yeah, see... JOSHUA was actually a real, intelligent AI computer that could reason and learn. LLMs are NOT, despite their tech bro branding calling them AI. They (LLMs) are only guessing the words that normally follow previous words, based on statistics. It's a very convincing trick, but there is no "I" in "AI"
RibbleTPibits
Okay… I don’t think you’re wrong about the capability of LLMs… but “A” is the qualifier here. So; if not “I”, then “A” is also meaningless in this context. Artificial as a synonym for Fake, seems an accurate description of the “intelligence” involved. Hence “AI”. No?
aoshistark
AAhh, I like the way you think there.
marsilies
The movie WarGames depicted an AI that could reason and learn. Not only is today's "AI" much dumber (it can't actually reason, only output text that looks like reasoning), but its training is done and frozen by the time people actually use it.
Targe0
Yes, and the movie actually triggered the US Government to step up its cybersecurity for the first time.
Whereas today's AI has caused the US to put massive vulnerabilities into their cybersecurity systems.
marsilies
Pete Hegseth is pissed that Anthropic is daring to put limits on what Claude can be used for.
Targe0
He's just not used to running into people willing to take a moral stand because of their ethics.
As it's not something that happens much in Trumpistan 2.0
M4Firefly
If this is what the AI concludes, it implies that it would be the most logical/effective move, or the most popular move in its data. Just like playing tic-tac-toe.
4Astaroth
The AI isn't concluding anything. It doesn't understand the world around it. The prompt is probably "What weapon to use for maximum destruction," and that's the answer. The guys running those AIs don't know any other way to destroy an army fast. They have no understanding of strategy, numbers, or anything at all.
marsilies
Even if the prompting is better constructed and it's actually "roleplaying" through a wargames scenario, that doesn't mean it understands anything. It opts for nuclear strike because there's so much fiction where that happens, it was trained on text that has far more instances of things going nuclear than not, because it not happening is boring fiction.
4Astaroth
I agree completely. I guess the only movie where the machine figures out how stupid nukes are is WarGames, where the machine learns that nukes are the dumbest answer. But nukes are pretty often the final answer in fiction, even when humans know it means the end of most of humanity.
ThisNameIsMaybeTaken
AI are dumb as shit, they can't even play a game of chess without hallucinating pieces into existence.
HeresYourSauce
It only implies that insofar as you assume AI is a logical or effective tool for finding this kind of solution. It absolutely is not.
Out of curiosity, I challenged ChatGPT to a game of tic-tac-toe. I won twice. The second time I even made sure to tell it to play optimally. It could not manage it. AI isn't at the point where it can reliably win a solved game with the textbook open.
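For contrast, playing a solved game like tic-tac-toe perfectly takes only a few lines of classical minimax, no statistics involved (a minimal sketch, with boards as 9-character strings):

```python
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    # Return "X" or "O" if a line is complete, else None
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    # Best achievable outcome for `player` to move: +1 win, 0 draw, -1 loss
    w = winner(b)
    if w is not None:
        return 1 if w == player else -1
    if " " not in b:
        return 0  # board full, draw
    other = "O" if player == "X" else "X"
    return max(-minimax(b[:i] + player + b[i+1:], other)
               for i in range(9) if b[i] == " ")

print(minimax(" " * 9, "X"))  # 0: perfect play from an empty board is a draw
```

This brute-forces the whole game tree in well under a second, which is exactly the kind of exhaustive, rule-following search an LLM does not perform.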
FaeVikingPrincess
No, it's what it thinks is the most likely text to finish whatever prompt they put into it.
MadnerKami
Correct. This is not AI, not by a long shot. It's heuristic algorithms that just calculate the most likely series of events, based on data from the internet. You know, the place where people share their experiences in Command and Conquer and the like. It doesn't "think". It doesn't have a will. It just throws shit together based on the likelihood of it appearing together in the first place.
Targe0
It's all just a text calculator.
Using token associations to predict what will come next.
It can't even understand words; it just knows which tokens tend to follow each other in a string.
And when given a prompt, it just does the calculations to work out the most likely continuation of that string.
billstranger
Yeah, think of the amount of comments online to the effect of "just fucking nuke it" in reference to anything at all: a bedbug infestation, a corrupt state legislature, a bad fast food place, whatever. The "just nuke it" sentiment is being interpreted literally by the dumbdumb RAM-embargoing no-money-making machine.
Targe0
Or people referencing the first Alien movie.
TomahawkJackson
The AI is also aware that this is a *simulation*. Just like the vast majority of six-year-old children, unless you go to EXCEPTIONAL, heroic lengths, it can tell the difference between real and pretend.
So, yeah. Pit AIs against each other in a simulated world with rules that establish it's a simulation, and they're gonna nuke each other, because there are no actual externalized costs or consequences.
rezurok
Yeah, no they can't. It's predictive text, not intelligence. AI is a misnomer at best
TomahawkJackson
State of the Art: https://arstechnica.com/ai/2026/02/sixteen-claude-ai-agents-working-together-created-a-new-c-compiler/
TomahawkJackson
You clearly haven't been following the developments in the field since ChatGPT (GPT-3.5, but with a web interface) was the hot new stuff at the end of 2022. THAT was hopped-up predictive text, sure. The introduction of chain-of-thought (Sep '24) and then the ability to use sub-agents (late '25) has upended that dated paradigm.
Today? Frontier AI agents can work together autonomously for TWO WEEKS AT A TIME, successfully solving novel problems (and have). Get up to speed on the SOTA, fren.
rezurok
Slow down, leave some kool-aid for the rest of us.
HCBailly
Been that way for decades already.
jtthemediocre
Sid Meier was right!
BerryButcher
I mean, technically it's the quickest solution to a global problem. This was a thing with mobs back in the day: when two sides couldn't agree, the "negotiator" wiped both parties to reset the board.
HamSlamwich
You know that quote is from a video game right? Gandhi definitely did not say that.
Skywatcher16
and the reference is to another video game, Civilization, where a quirk of 1990s coding turned into a decades-long meme of Gandhi being one
of the most aggressive and nuke-happy leaders in the entire series. Yay, integer overflow!
7hatsBollocks
Whaaaaaat?
PostalHeathen
I'm sure they do. They're referencing the game's tendency to have Gandhi go all nuke-happy.
PostHartmann
threepotatoesinatrenchcoat
He doesn't even say it in the game. Civilisation (the game) just accidentally made him super aggressive instead of the least aggressive, and the internet extrapolated. I think the poster here is just trying to illustrate that electronic programming can do aggressive things, whether or not their creators meant it to be that way
agonarch
For those not in the know, it's a bug related to AI aggressiveness. In some versions of the original Civ, rival leaders had an aggressiveness rating from 1 to 16 (the lower, the less likely to attack; 16 being attack every time you think you can get an advantage).
Building one of the wonders (United Nations?) reduced aggressiveness by 2, which was fine for most leaders. Gandhi, at 1 to begin with, wrapped around past 0... to 255.
That's the kind of sudden, violent shift in personality people remember.
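The wraparound described above is easy to demonstrate; a minimal sketch in Python (the Civ I internals aren't public, so the field name here is invented):

```python
aggressiveness = 1        # Gandhi's starting rating, lowest in the game
aggressiveness -= 2       # the wonder's -2 modifier leaves -1
aggressiveness %= 256     # but an unsigned 8-bit field can't hold -1...
print(aggressiveness)     # 255: maximum possible aggression
```

In C, the same thing happens implicitly when the result is stored in an `unsigned char`: unsigned arithmetic is defined modulo 256 for an 8-bit type.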
agonarch
255 basically meant he didn't care if he could win: he'd attack any time he thought he could hurt you at all. If he had nukes, he could do that any time he had a nuke, so you'd get nuked. One of my first worlds was a two-continent Gandhi/Aztec (me) hellscape.
Though the bug was fixed early enough that many of the devs now deny it even existed (you can test it yourself), Gandhi started to be coded with this flip intentionally in later games: he'd message "Now it's time for my master plan!" and attack, etc.
futureman3000
Gandhi actually did say that, right before saying "bitch!" and moonwalking away. I was there, dude, it happened.
Murdertron5000
"Suck gandhis nuts, bitches"
frankmanhattan
And then everyone clapped
SteersAndQueers
And then Gandhi got shot, pulled the bullet out with his own fingers, and threw it back right between the assassin's eyes. It didn't kill him, but it was probably a bit sore. Not "snowball to the back of the ears" sore, but definitely more sore than clipping the wingmirror of a car as you scoot past it.
shehdbeuebw738373
I don't think he even said it in a video game. A buffer overflow just made him aggressive in a game
PhailRaptor
That's what happened in Civ 1. In all titles after that it's been maintained on purpose, though the exact form has changed. As of Civ 6, he's very much a pacifist. But should you manage to actually get him to war in the Atomic Era or later, he will not hesitate to drop nukes faster than you can say "nuclear fire".
mithiwithi
to be excruciatingly pedantic about it, an integer overflow rather than a buffer overflow.
shehdbeuebw738373
https://media0.giphy.com/media/v1.Y2lkPWE1NzM3M2U1OWU2Nm05NndjeWEyaWFybDl1MzdnMjl0ZTZldGt1NmswNmU0N3BseSZlcD12MV9naWZzX3NlYXJjaCZjdD1n/1hMk0bfsSrG32Nhd5K/200w.webp