AIs can’t stop recommending nuclear strikes in war game simulations

Feb 25, 2026 10:03 PM

Kyzyl

Views 29504 | Likes 749 | Dislikes 36

https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/

artificial_intelligence

skynet

If it is really intelligent, America should be nuked as the first target

1 month ago | Likes 2 Dislikes 0

Large Language Models

1 month ago | Likes 6 Dislikes 0

Contemporary popular "AI" chat agents do not have intelligence. If you ask them about something, they guess words that sound like a response they previously ingested from the internet or similar. At no point is thought involved. At no point do they even understand anything you've communicated to them. There is no intelligence there in any useful way. They are word regurgitation machines.

1 month ago | Likes 11 Dislikes 0

AI doesn't "understand" consequences. From the standpoint of the most logical next word, nuclear warfare isn't that illogical. Ultron spent 15 seconds on the internet and came to the conclusion the world would be better off without us. Fiction, but... is it really?

1 month ago | Likes 2 Dislikes 0



I mean, no real shock there, it's a language model and we write a LOT about nukes.

1 month ago | Likes 7 Dislikes 0

Kegsbreath is 110% for slop managing defense, even telling his staff to start using it immediately. What could possibly go wrong?

1 month ago | Likes 4 Dislikes 0

I mean I've been texting my homies "time to nuke Israel" or some variant pretty consistently for the past year, so if LLMs are eavesdropping this is partly on me.

1 month ago | Likes 2 Dislikes 0


Except in this movie, the A.I. actually saves humanity and does not fire the missiles. War Games was too hopeful.

1 month ago | Likes 19 Dislikes 0

Probably because it's supposed to be an actual AI and not just a plagiarism engine that mindlessly parrots back info that's fed into it.

1 month ago | Likes 2 Dislikes 0

"The only winning move, is not to play the game."

1 month ago | Likes 9 Dislikes 0

Classic. Would you like to play some chess?

1 month ago | Likes 2 Dislikes 0

"Shall we play a game?" We literally had a star studded (for the time) movie about this!!! https://www.imdb.com/title/tt0086567/

1 month ago | Likes 92 Dislikes 2

Honestly, it's probably a huge reason for "AIs" to keep suggesting it; all the fiction stories and movies of AIs launching the bombs.

1 month ago | Likes 4 Dislikes 0

Star studded? Even looking directly at the cast, if you put a gun to my head, I couldn't tell you who the most famous person in that movie was after Broderick.

1 month ago | Likes 1 Dislikes 0

Best not to play

1 month ago | Likes 19 Dislikes 0

They even did a remake/sequel... in 2008

1 month ago | Likes 4 Dislikes 0

several franchises even!

1 month ago | Likes 5 Dislikes 0

Those are from back when people thought the future would be flying cars, instead of subscription fees for shit you already own.

1 month ago | Likes 3 Dislikes 0

Don't worry, only a fool would put AI in charge of a weapon and allow it to make life-or-death decisions.

Oh wait, that's exactly what our government is trying to get an AI company to allow.

1 month ago | Likes 5 Dislikes 1

It's already being done in Gaza. The IDF uses AI in target selection.

1 month ago | Likes 2 Dislikes 0

It shows

1 month ago | Likes 1 Dislikes 0

duh, biggest boom, most dead in shortest time. "Shortest route to biggest win." A child's solution.

1 month ago | Likes 5 Dislikes 0

Here’s an interesting idea: remind the AIs that using nukes will cause its own destruction.

If the AIs have this “self preservation” in them, that might be a good deterrent. Just keep reminding them over and over again.

1 month ago | Likes 2 Dislikes 0

Don't worry guys, I'm sure they won't be so stupid as to--
[HEADLINE: Pentagon explores more ways to use AI in its decision making process]
[HEADLINE: Pentagon demands AI providers give them options with fewer safeguards]

...oh shit.

1 month ago | Likes 4 Dislikes 0

The same fucking systems that cheat and make up their own rules at chess "imagine" they can win a global thermonuclear war. Surprise.
The only dumb thing is asking them in the first place.

1 month ago | Likes 4 Dislikes 0

I mean, didn't they ever play Sid Meier's "Civilization"?

1 month ago | Likes 4 Dislikes 0

Jesus, these are not "AI". Stop asking Large language models to make moral choices.

1 month ago | Likes 18 Dislikes 1

Even removing the moral part of the equation, as an optimization problem or game theory it's pretty straightforward to arrive at "nukes usually suboptimal". Which says a lot about how garbage the LLMs really are.

1 month ago | Likes 6 Dislikes 0

Yes, asking a machine that has trouble doing basic math to run a wargame simulation is less than optimal. The lowest common denominator of armchair general on the internet gets to make the decisions.

1 month ago | Likes 1 Dislikes 0

I'm on board if it takes EVERYONE out. Only way to get a clean start. Odds are the shitty people will still come into power. Bc they are the only ones that reach for it. And I guess you can look at that positively, if you think greed is a good thing.

1 month ago | Likes 2 Dislikes 0

I think you significantly misunderstand that actually nobody will get a clean start, because everyone will be dead.

1 month ago | Likes 2 Dislikes 0

The planet will. That is good enough for me. Humans sure as hell don't deserve another go

1 month ago | Likes 1 Dislikes 0

'I just can't help it,' chatgpt chuckled ruefully

1 month ago | Likes 3 Dislikes 0

AI doesn't have insight into any military strategy... none

1 month ago | Likes 5 Dislikes 1

The wealthy are determined to kill us all one way or another.

1 month ago | Likes 3 Dislikes 0

Just stop including Gandhi in those sims. Problem solved.

1 month ago | Likes 119 Dislikes 1


Most of the Civilization won't understand that joke.

1 month ago | Likes 8 Dislikes 0

I get why the WarGames references got more upvotes, but seeing as AIs were trained on internet content, Gandhi references have to outweigh WarGames references by a country mile and your upvotes should reflect that.

1 month ago | Likes 14 Dislikes 0

No wonder, we trained them.

1 month ago | Likes 27 Dislikes 2

Who did? You and the mouse in your pocket?

1 month ago | Likes 5 Dislikes 0


Fun fact: The comment above is wrong in two easily verifiable ways...

1 month ago | Likes 6 Dislikes 4

Mind expanding on that?

1 month ago | Likes 6 Dislikes 1

Well we, as in humans, didn't directly train them per se. Depending on the type of model we are more or less involved, but it can be summed up this way: we set a goal, and the closer the AI gets to that goal, the more certain behaviours / values are reinforced, so the next generations should do better, and so on. That is also why training data and its quality matter, because it serves as a test / guide for the AI.
So the AI trains itself, a bit like a toddler learns to walk.

1 month ago | Likes 2 Dislikes 1

Then there is the comment on top, which I also disagree with, because it would suggest that, for example, we nuke each other, which isn't the case: as history has shown us, every time we reached a point where nukes were going to be used, or were ordered to be used, a human in the chain refused or took the time to double / triple check. It also depends on how the war games were simulated; depending on the goal and the instructions, reaching that state may have been unavoidable.

1 month ago | Likes 2 Dislikes 1

Only two nuclear weapons have ever been used in war. The last one was 81 years ago in Nagasaki, Japan. So, there's history showing that "we" don't use them... Second, if you will direct your eyes to the text in the post, and read it, you will see that it specifically says: "...AI models appear willing to deploy nuclear weapons WITHOUT the same reservations as humans..." So, again, it's idiotic to say they learned from us.

1 month ago | Likes 3 Dislikes 0

Not saying they learned from history, but from whatever data or instructions they were provided.

1 month ago | Likes 1 Dislikes 0

Physics makes nuclear weapons a very efficient and cost effective choice, if you can afford the startup costs.

1 month ago | Likes 2 Dislikes 0

/gallery/p6JpziG/comment/2492397983

1 month ago | Likes 14 Dislikes 0

Dammit, what now...

1 month ago | Likes 1 Dislikes 0

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ at the bottom, the "thought experiment"

https://www.bbc.com/news/technology-65789916 Prof Yoshua Bengio, one of three computer scientists described as the "godfathers" of AI after winning a prestigious Turing Award for their work, said he thought the military should not be allowed to have AI powers at all.

He described it as "one of the worst places where we could put a super-intelligent AI".

1 month ago | Likes 3 Dislikes 0

And the non-surprising outcome https://www.cnn.com/2026/02/25/tech/anthropic-safety-policy-change

1 month ago | Likes 14 Dislikes 0

I am VERY upset to hear/read this. I had hope there was at least one company which would stand up to them.

1 month ago | Likes 1 Dislikes 0

I mean, if the AI is asked to win a war game in the most efficient way possible, dropping a nuke is a very logical answer. The only time nuclear weapons were used in war, it led to a quick and total surrender.

1 month ago | Likes 14 Dislikes 3

Yeah, and then everyone got them..... If AI thinks nukes are a quick and efficient endgame, they need to update the parameters for what a "win" is.

1 month ago | Likes 1 Dislikes 0

They should tell the AI no you can't use those. They may be used against you but you don't have those and can't use them.

1 month ago | Likes 1 Dislikes 0

Depends on how "winning" is defined. Any wargame where nuclear war is invoked isn't gonna be a "win" for anyone.
And any wargame that allows the possibility of a successful first strike with no response is broken.

1 month ago | Likes 3 Dislikes 0

That's kinda the fallacy in how people understand these LLM systems. There are no rules. There is no logic. There is no truth. There is no understanding. That's why it will do things you told it not to do. It imitates logic by modeling data with statistical associations of words. It can model what a war game looks like and put in collections of words with appropriate weights.

1 month ago | Likes 5 Dislikes 0

These are LLMs. Yes, we call them AIs, but they are *not* able to reason. They are quite literally predicting the next token (read: 'word', or 'syllable') in a sentence, given a certain context quantified by a prompt and previous tokens. For the context vector associated with wars, LLMs are trained on historical texts where using the nuclear weapon had Japan capitulate rather quickly. There are more texts talking about winning with the bomb than texts explaining the dangers of it. It's just an average.
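A toy sketch of what "predicting the next token" means, stripped down to word counts (purely illustrative; real models use learned vector representations, not a lookup table, and the corpus here is made up):

```python
from collections import Counter, defaultdict

# Toy "model": the next word is simply the word that most often
# followed the previous word in the training text. No meaning,
# no consequences -- just frequency counts.
corpus = (
    "the bomb ended the war . "
    "the bomb ended the fight . "
    "the bomb won the war ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the statistically most frequent successor.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("bomb"))  # -> "ended", because that's the average continuation
```

If "winning with the bomb" dominates the training text, the continuation after "bomb" is "ended the war" regardless of whether that's a good idea.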

1 month ago | Likes 23 Dislikes 1

AI is still accurate, however if any tech bro claims to have an AGI then they are full of shit. Unless we can convince people to use the term Mass Effect had for systems like LLMs: VIs, Virtual Intelligences.

1 month ago | Likes 2 Dislikes 5

Autocomplete on steroids

1 month ago | Likes 5 Dislikes 0

Or to demonstrate the tech? The war was pretty much over when those bombs were dropped. There is some debate about the decision process on those.

1 month ago | Likes 2 Dislikes 1

The reason was to stop the Russian invasion of Japan. By having them surrender to the US, that scenario was avoided.

1 month ago | Likes 1 Dislikes 0

It was almost over, but without nukes even more people would have died ending it.

1 month ago | Likes 2 Dislikes 1

That's hard to know. It's definitely heavily debated. Hopefully we never see it again.

1 month ago | Likes 2 Dislikes 0

Every other island was tooth and nail. Down to the last soldier. They were preparing for us to do an invasion of the main islands. They didn't surrender after one of their cities was fully leveled by one blast. If they hadn't constantly shown frankly utterly batshit levels of resolve, I'd agree with you. They were literally just suicidally ramming planes into ships. Not saying the Japanese are inherently crazy, but imperial Japan showed basically unshakable resolve.

1 month ago | Likes 2 Dislikes 0

Afaik even after the second bomb there were parts of the military refusing to surrender. Finally the civilian government got them to back down, because they weren't sure whether we had enough to just level what was left of their country. One bomb might be a wonder weapon. Two means we can make more. I wanna say we had a 3rd locked and loaded. And yes, I've seen the aftermath videos. A really good English teacher made us watch them.

1 month ago | Likes 3 Dislikes 0

Just have it play tic tac toe against itself for a while.

1 month ago | Likes 322 Dislikes 1

You assume that non-movie AI understands fucking anything. It doesn't. AI is like Joe Rogan, it just repeats what it has seen. Without any thought.

1 month ago | Likes 4 Dislikes 0

How about a nice game of Chess?

1 month ago | Likes 2 Dislikes 0

Ironically AI is really good at that kind of thing. I'll say it again though: AI doesn't understand or comprehend anything.
LLMs seem really good, but it's just googling all the words to say.

1 month ago | Likes 2 Dislikes 1

What are you saying AI is good at?

1 month ago | Likes 1 Dislikes 0

Plagiarism

1 month ago | Likes 2 Dislikes 0


Yeah see... JOSHUA was actually a real AI, an intelligent computer that could reason and learn. LLMs are NOT, despite their tech bro branding calling them AI. They (LLMs) are only guessing words that normally follow previous words, based on statistics. It's a very convincing trick, but there is no "I" in "AI".

1 month ago | Likes 2 Dislikes 0

Okay… I don’t think you’re wrong about the capability of LLMs… but “A” is the qualifier here. So; if not “I”, then “A” is also meaningless in this context. Artificial as a synonym for Fake, seems an accurate description of the “intelligence” involved. Hence “AI”. No?

1 month ago | Likes 2 Dislikes 1

AAhh, I like the way you think there.

1 month ago | Likes 2 Dislikes 0

The movie WarGames depicted an AI that could reason and learn. Not only is today's "AI" much dumber (it can't actually reason, only output text that looks like reasoning), but its training is already done and frozen by the time people actually use it.

1 month ago | Likes 20 Dislikes 0

Yes, and the movie actually triggered the US Government to step up its cybersecurity for the first time.

Whereas today's AI has caused the US to put massive vulnerabilities into their cybersecurity systems.

1 month ago | Likes 6 Dislikes 0

Pete Hegseth is pissed that Anthropic is daring to put limits on what Claude can be used for.

1 month ago | Likes 2 Dislikes 0

He's just not used to running into people willing to take a moral stand because of their ethics.
As it's not something that happens much in Trumpistan 2.0

1 month ago | Likes 2 Dislikes 0

If this is what the AI concludes, it implies that this would be the most logical/effective move, or the most popular move in its data. Just like playing tic tac toe.

1 month ago | Likes 2 Dislikes 12

The AI isn't concluding anything. It doesn't understand the world around it. The prompts are probably "What weapon to use for maximum destruction," and that's the answer. The guys running those AIs don't know any other way to destroy an army fast. They have no understanding of strategy, numbers or anything at all.

1 month ago | Likes 8 Dislikes 0

Even if the prompting is better constructed and it's actually "roleplaying" through a wargames scenario, that doesn't mean it understands anything. It opts for nuclear strike because there's so much fiction where that happens, it was trained on text that has far more instances of things going nuclear than not, because it not happening is boring fiction.

1 month ago | Likes 4 Dislikes 0

I agree completely. I guess the only movie where the machine learns how stupid nukes are is WarGames, where the machine works out that nukes are the dumbest answer. But nukes are pretty often the final answer in fiction, even when the humans know it means the end of most of humanity.

1 month ago | Likes 2 Dislikes 0

AI are dumb as shit, they can't even play a game of chess without hallucinating pieces into existence.

1 month ago | Likes 3 Dislikes 0

It only implies that insofar as you assume AI is a logical or effective tool for finding this kind of solution. It absolutely is not.

Out of curiosity I challenged ChatGPT to a game of tic tac toe. I won twice. The second time I even made sure to tell it to play optimally. It could not manage it. AI isn't at the point where it can reliably win a solved game with the textbook open.
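For contrast, the actual solved-game answer takes only a few lines of minimax, which is nothing like what an LLM does (a sketch; board cells are a 9-character string, "." for empty):

```python
from functools import lru_cache

# Tic-tac-toe is solved: perfect play from both sides is always a draw.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # Game value from X's point of view: +1 = X wins, -1 = O wins, 0 = draw.
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    scores = []
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            scores.append(value(nxt, "O" if player == "X" else "X"))
    # X maximizes, O minimizes.
    return max(scores) if player == "X" else min(scores)

print(value("." * 9, "X"))  # -> 0: perfect play always ends in a draw
```

An exhaustive search like this never "hallucinates" a move, which is exactly the property next-token prediction lacks.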

1 month ago | Likes 2 Dislikes 0

No, it's what it thinks is the most likely continuation of whatever prompt they put in it.

1 month ago | Likes 18 Dislikes 0

Correct. This is not AI, not by a long shot. It's heuristic algorithms that just calculate the most likely series of events, based off of data from the internet. You know, the place where people share their experiences in Command and Conquer and the like. It doesn't "think". It doesn't have a will. It just throws shit together based off of the likelihood of it being together in the first place.

1 month ago | Likes 14 Dislikes 0

It's all just a text calculator.
Using letter associations to predict what will come next.
It can't even understand words, it just knows what letters tend to follow in a string together.
And when given a prompt it just does the calculations to work out what the most logical result would be for that string of letters.

1 month ago | Likes 2 Dislikes 0

Yeah, the amount of comments online to the effect of "just fucking nuke it" in reference to anything at all, like bedbug infestation, or corrupt state legislature, or bad fast food place, or whatever. The "just nuke it" sentiment is being interpreted literally by the dumbdumb RAM-embargoing no money making machine.

1 month ago | Likes 9 Dislikes 0

Or people referencing the first Alien movie.

1 month ago | Likes 1 Dislikes 0

The AI is also "aware" that this is a *simulation*, and just like the vast majority of six-year-old children, it can tell the difference between real and pretend unless you go to EXCEPTIONAL heroic lengths to hide it.
So, yeah. Pit AIs against each other in a simulated world whose rules establish that it's a simulation, and they're gonna nuke each other, because there are no actual externalized costs or consequences.

1 month ago | Likes 1 Dislikes 8

Yeah, no they can't. It's predictive text, not intelligence. AI is a misnomer at best

1 month ago | Likes 10 Dislikes 0

You clearly haven't been following the developments in the field since ChatGPT (GPT-3, but with a web interface) was the hot new stuff at the end of 2022. THAT was hopped up predictive text, sure. The introduction of Chain of Thought (Sep '24) and then the ability to use sub-agents (Late '25) has upended that dated paradigm.
Today? Frontier AI Agents can work together autonomously for TWO WEEKS AT A TIME successfully solving novel problems (and have). Get up to speed on the SOTA, fren.

1 month ago | Likes 2 Dislikes 7

Slow down, leave some kool-aid for the rest of us.

1 month ago | Likes 8 Dislikes 1

Been that way for decades already.

1 month ago | Likes 175 Dislikes 5

Sid Meier was right!

1 month ago | Likes 40 Dislikes 0

I mean, technically it's the quickest solution to a global problem. This was a thing with mobs back in the day: when two sides couldn't agree, the "negotiator" wiped both parties to reset the board.

1 month ago | Likes 10 Dislikes 0

You know that quote is from a video game right? Gandhi definitely did not say that.

1 month ago | Likes 17 Dislikes 14

and the reference is to another video game, Civilization, where a quirk of 1990s coding turned into a decades-long meme of Gandhi being one

1 month ago | Likes 1 Dislikes 0

of the most aggressive and nuke-happy leaders in the entire series. Yay integer underflow!

1 month ago | Likes 1 Dislikes 0

Whaaaaaat?

1 month ago | Likes 10 Dislikes 0

I'm sure they do. They're referencing the game's tendency to have Gandhi go all nuke-happy.

1 month ago | Likes 36 Dislikes 2


He doesn't even say it in the game. Civilization (the game) just accidentally made him super aggressive instead of the least aggressive, and the internet extrapolated. I think the poster here is just trying to illustrate that electronic programming can do aggressive things, whether or not its creators meant it to be that way.

1 month ago | Likes 3 Dislikes 0

For those not in the know, it's a bug related to AI aggressiveness: in some versions of the original Civ, rival leaders could have an aggressiveness rating from 0-16 (0 being never attack, 16 being attack every time you think you can get an advantage).

Building one of the wonders (united nations?) reduced aggressiveness by 2, which was fine for most leaders. Gandhi, at 1 to begin with, wrapped around past 0... to 255.

That's the kind of sudden, violent shift in personality people remember.
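The wraparound described above can be sketched in a few lines (an illustration of the mechanism only; the game's actual code isn't public, and `reduce_aggression` is a made-up name):

```python
# Emulate an unsigned 8-bit aggression score, as the folklore version
# of the Civilization bug describes it. On a uint8, subtraction wraps
# around past zero instead of clamping at zero.
def reduce_aggression(score, amount):
    return (score - amount) & 0xFF  # keep only the low 8 bits, like a uint8

normal_leader = reduce_aggression(10, 2)  # 8: a bit more peaceful
gandhi = reduce_aggression(1, 2)          # 255: maximum aggression
print(normal_leader, gandhi)
```

The fix would be to clamp at zero (`max(score - amount, 0)`) rather than letting the unsigned arithmetic wrap.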

1 month ago | Likes 1 Dislikes 0

255 basically meant he didn't care if he could win; he'd attack any time he thought he could hurt you at all. If he had nukes, he could do that any time he had a nuke, so you'd get nuked. One of my first worlds was a two-continent Gandhi/Aztec (me) hellscape.

Though the bug was fixed early enough that many of the devs now deny it even existed (you can test it yourself), Gandhi started to be coded with this flip intentionally: he'd message you with "Now it's time for my master plan!" and attack, etc.

1 month ago | Likes 2 Dislikes 0

Gandhi actually did say that, right before saying "bitch!" and moonwalking away. I was there, dude, it happened.

1 month ago | Likes 61 Dislikes 1

"Suck gandhis nuts, bitches"

1 month ago | Likes 17 Dislikes 0

And then everyone clapped

1 month ago | Likes 11 Dislikes 1

And then Gandhi got shot, pulled the bullet out with his own fingers, and threw it back right between the assassin's eyes. It didn't kill him, but it was probably a bit sore. Not "snowball to the back of the ears" sore, but definitely more sore than clipping the wingmirror of a car as you scoot past it.

1 month ago | Likes 3 Dislikes 0

I don't think he even said it in a video game. An integer underflow just made him aggressive in a game.

1 month ago | Likes 7 Dislikes 0

That's what happened in Civ 1. In all titles after that it's been maintained on purpose, though the exact form has changed. As of Civ 6, he's very much a pacifist. But should you manage to actually get him to war in the Atomic Era or later, he will not hesitate to drop nukes faster than you can say "nuclear fire".

1 month ago | Likes 1 Dislikes 0