AI first, humans second

Apr 10, 2026 11:00 PM

CafeNervosa

Views 12710 | Likes 430 | Dislikes 11

Sauce: https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/

this is why asimov thought ai tasked with protecting us would decide we needed to be rounded up and kept in pens to protect us.

15 hours ago | Likes 1 Dislikes 0

Fuck al the way off. EthnicCleansing2026.exe

19 hours ago | Likes 23 Dislikes 1

I'd back a bill that would limit liability for people who throw Molotov cocktails at Sam Altman's house.

18 hours ago | Likes 2 Dislikes 1

CEO’s need to die for their crimes.

14 hours ago | Likes 1 Dislikes 0

Of course they are.

15 hours ago | Likes 1 Dislikes 0

I checked the news today and they still seem to have a Molotov Cocktail liability.

19 hours ago | Likes 4 Dislikes 1

Hey, when 1.5 trillion is invested in something, and it still hasn't made anything like a profit, you're looking at WorldCom or Enron, not Skynet.

16 hours ago | Likes 2 Dislikes 0

Well, when it comes to money, there is always a winner and a loser. In every financial disaster, the winner was never the meek… The same when it comes to lives… with the only small exception being when they wish to tour the site of a great hubris, the Titanic.

13 hours ago | Likes 1 Dislikes 0

Good news is openai is hemorrhaging money and will probably go under soon. Bad news is they're gonna do a ton of damage first.

15 hours ago | Likes 1 Dislikes 0

Ummm, if AI can make decisions, someone needs to be held accountable for AI mistakes. Otherwise, AI shouldn’t make decisions.

13 hours ago | Likes 1 Dislikes 0

Well fuck

14 hours ago | Likes 1 Dislikes 0

W.I.T.A.F.

14 hours ago | Likes 1 Dislikes 0

That headline, really? I'd like to know who in congress they paid off to write and introduce this bill because they need to be put on that list we're all keeping of Nazis that are REALLY bad for our health.

16 hours ago | Likes 4 Dislikes 0

Boy, so much for "no laws about ai for 10 years" huh.

16 hours ago | Likes 2 Dislikes 0

Well, it’s not TECHNICALLY “AI before humans”, it’s just “humans who own the AI companies before other humans”. Don’t forget: much like corporations, “AI” are not separate existences independently making these business decisions. The decisions are being made by people, and are almost always made either to gather more wealth for themselves, or to shield them from being held accountable for the awful things they do in pursuit of said wealth.

17 hours ago | Likes 2 Dislikes 0

Nonsense. The manufacturers of a novel technology like this should be held strictly liable for ALL foreseeable injuries caused by their product. This is how the United States became a society where it is possible to rely on product safety in the first place.

15 hours ago | Likes 2 Dislikes 0

no no no. If fucking McDonald's can serve an idiot with a HOT cup of coffee and get sued because said idiot spilled it on themselves, then these people should be liable for whatever damage their product causes

3 hours ago | Likes 1 Dislikes 0

Yeah. Saw this coming when we used AI to bomb a girls school.

17 hours ago | Likes 2 Dislikes 0

Was that bill written by AI?

19 hours ago | Likes 47 Dislikes 1

Of course.

18 hours ago | Likes 7 Dislikes 0

Wouldn't surprise me if it was.

19 hours ago | Likes 23 Dislikes 0

They know the big crash is coming and this is what they're trying to do to save their own fucking ass

18 hours ago | Likes 2 Dislikes 0

Laugh all you want, but AI companies have REPEATEDLY refused the military’s demands that AI be allowed to “pull the trigger”, and the government has not taken no for an answer. They have no choice at this point but to protect themselves legally, or they will definitely be the fall guy in the end.

1 hour ago | Likes 1 Dislikes 0

Time to steal robots and reprogram them with AI trained to kill CEOs. Win win

18 hours ago | Likes 4 Dislikes 0

Rules for thee but not for me.... Hope they end up on the short end of the stick instead... >:(

19 hours ago | Likes 2 Dislikes 0

Fine. Anybody that uses AI to kill people or invest my money automatically get fucked in the neck by a chainsaw. Thats a fair trade off in my mind, because companies are not the people they employ and will continue to exist.

1 hour ago | Likes 1 Dislikes 0

"A computer can never be held accountable, therefore a computer must never make a management decision." — from IBM manuals in 1979

18 hours ago | Likes 75 Dislikes 0

There's a former girls' school in Iran that is a prime example of why.

16 hours ago | Likes 8 Dislikes 0

Google's "Don't be evil" comes to mind

16 hours ago | Likes 14 Dislikes 0

Shame they removed that from their policies

6 hours ago | Likes 6 Dislikes 0

Why are they anticipating mass deaths?

18 hours ago | Likes 22 Dislikes 0

AI weapons systems malfunctions. AI induced droughts. AI induced power grid failures. AI induced traffic accidents. Etc. The more jobs they try to replace with AI, the more deaths it can cause. Will we eventually have AI powered machines sorting and giving out the wrong medicine at pharmacies? AI air traffic controllers malfunctioning and causing plane crashes? High paying, high responsibility jobs like this are the ones they wanna replace with AI. But without this law, 1 fuckup and you're done.

15 hours ago | Likes 2 Dislikes 0

Because morons keep directly hooking important stuff to AI, then acting amazed when AI does something wrong with important stuff.

Remember the guys who bought early GPS navigators and took left turns into lakes? They’re now the guys sticking AI in safety systems so they don’t have to think when the buzzer goes off.

Everybody with three brain cells knows that AI can make mistakes, yet numbskulls keep making it more life-critical, and threatening AI vendors with unlimited lawsuits.

15 hours ago | Likes 5 Dislikes 0

Were they not blaming AI for targeting the girls' school in Iran?

16 hours ago | Likes 5 Dislikes 0

I am certain that this is because the last OpenAI meeting with the current administration started with the words "so, about the launch codes..."

16 hours ago | Likes 4 Dislikes 0

I think it has something to do with the cases where AI chat bots convinced kids to kill themselves

16 hours ago | Likes 3 Dislikes 0

I’ve been thinking for a couple of years now that maybe we need an international treaty that requires all nations to enforce Asimov’s three laws of robotics and that all software be written to comply. It needs to be a fundamental element in the programming of any AI system. Maybe they need to be tweaked a bit, but we need to build it into the software and any attempt to evade or circumvent it should be met with severe consequences. If not, humanity may end up in jeopardy.

19 hours ago | Likes 5 Dislikes 1

From what I remember of Asimov's stories, the robots were really just the way problems showed up: they were following the rules fine, and the humans weren't malicious most of the time; it was unintended consequences and misinterpretations of what happened. In a lot of real-life cases there's already a law on the books that would cover a bad event and just needs to be enforced. We need to stop carving out immunity from liability because enforcement would upset the money.

18 hours ago | Likes 2 Dislikes 0

The problem with Asimov's laws is that they're a literary device designed to go wrong for drama. "harm" means a thousand different things to a thousand different people.

And even if we managed it in a way that works rules-wise, LLMs simply don't obey rules. An LLM cannot mechanistically follow orders. An LLM is incapable of corrigibility. It only simulates the appearance of following orders with probability. And as much as researchers work to maximize that probability, it will never be 100%.

18 hours ago | Likes 3 Dislikes 0

Are they though?

17 hours ago | Likes 1 Dislikes 1

I may be misremembering, but wasn't the point of I, Robot (the book, not the movie) that the three laws wouldn't actually work?

18 hours ago | Likes 3 Dislikes 0

No, I, Robot was based around a bot that never had them installed. The movie then deviated immensely from the book.

16 hours ago | Likes 1 Dislikes 0

"Corporation backs bill that would give it, specifically, legal immunity to do crimes" NO FUCKIN' SHIT SHERLOCK.

19 hours ago | Likes 194 Dislikes 3

YEAH! NOBODY SHOULD POST THIS "NEWS", ITS JUST SO FUCKING OBVIOUS THAT MAKES NO FUCKING SENSE TO TALK ABOUT IT, THEY ARE ALWAYS GOING TO DO THINGS TO GET AWAY WITH ANYTHING SO WHY BOTHER POSTING ABOUT IT? WATER IS WET, SCUM ARE DOING SCUMMY THINGS! DUH!

10 hours ago | Likes 1 Dislikes 0

if this goes through then anyone can use AI to commit harmful crimes and not be held liable for it. I do not think these AI CEOs thought this through after what happened last year.

16 hours ago | Likes 20 Dislikes 0

There’s a hell of a lot of things these fuckers didn’t think about which have yet to manifest. It’s referred to as making boat loads of money and to hell with the consequences for other people.

16 hours ago | Likes 2 Dislikes 0

Nahhhh. It's just that the law will only apply to AI companies and their CEOs. Us plebs, and even the plebeian AI company grunts, won't be protected by the law. Just like the rest of the laws, they only protect the rich and apply to us poors!

16 hours ago | Likes 11 Dislikes 0

Yes. The class action suit against United Healthcare goes forward but there’s a Liability Gap created for the AI company.

16 hours ago | Likes 4 Dislikes 0