broReplacedWikipediaWithVibesAndHallucinations

Dec 21, 2025 12:48 PM

DetiMuppet

Views 22484 | Likes 547 | Dislikes 10

adam hit me up, I got a great bridge to sell you

3 months ago | Likes 17 Dislikes 0

Why do books still exist? Doesn't TV render them completely irrelevant?

3 months ago | Likes 1 Dislikes 0

My neighbor was on a hike a month ago and used ChatGPT to "identify" mushrooms and bones she found. The results were absolutely hilarious.

3 months ago | Likes 1 Dislikes 0

Wikipedia is the greatest information-based compendium and technical accomplishment of mankind. ChatGPT needs it to run. It's the I part of AI, idiot.

3 months ago | Likes 33 Dislikes 0

The fact that people don't understand that large language models, or "AI," are nothing more than a glorified search engine is terrifying...

3 months ago | Likes 2 Dislikes 0

Exactly, you could code Wikipedia to print out letter by letter like AI as it loads, and these people would go "woahh, intellergens."

3 months ago | Likes 1 Dislikes 0

Wtf yeah, let's cut off its verifiable data source so that it starts eating its own tail

3 months ago | Likes 1 Dislikes 0

Or any person in Idiocracy after the main character emerges from the capsule.

3 months ago | Likes 1 Dislikes 0

I hope someone sends a link to this for the boy to see how hard we are disparaging him in public, just so he'll know there's nothing he can do about it. I mean, his NAME and PHOTO.

3 months ago | Likes 1 Dislikes 0

Imagine dealing with this at work. These idiots think AI is magic. AI needs human-written and curated info to learn from. I had to explain this to a semi-technical coworker recently when he trained his first ML model to find anomalies and had no way to verify what those anomalies were. I tried to lead him through the process of verifying them, but he's on a deadline, so whatever unverified data was flagged was somehow good enough. I can only hope it doesn't go into production.

3 months ago | Likes 1 Dislikes 0

And this isn't even LLMs; this is old-school, hardcore machine learning, but non-technical managers put it in the same category as "using AI" and force it into stuff it might not even be able to do.

3 months ago | Likes 1 Dislikes 0

If I had kids, they wouldn't know what social media is until they knew what critical thinking and logic are.

3 months ago | Likes 1 Dislikes 0

bot account?

3 months ago | Likes 7 Dislikes 0

Good boy account, maybe. Sycophants and free marketing for billionaires. https://youtu.be/5NfyIpE4zaw?si=VP0LEdI4MZpw1ce-

3 months ago | Likes 3 Dislikes 0

Because Wikipedia has more human involvement and is more subject to peer review.

3 months ago | Likes 2 Dislikes 0

I think we are getting to the root of the MAGA problem: people who read things and never question them once they enter the empty space between their ears.

3 months ago | Likes 2 Dislikes 0

They only never question it if it fits their existing worldview. If it contradicts their worldview, they'll reject it outright, no matter how scientifically accurate it is, how trustworthy the reporting, the real-life accounts, etc. It's why Musk has to keep trying to "fix" Grok: even when the LLM they built to push their views states something contradictory, no matter how true it is, they refuse to believe it.

3 months ago | Likes 2 Dislikes 0

Indeed, who watches the watchers.

3 months ago | Likes 1 Dislikes 0

Per Ask Jeeves, you don't see people in WALL-E until like halfway through the movie

3 months ago | Likes 4 Dislikes 0

I miss when tech was as innocent as AskJeeves and MySpace. That is the Computer Science world I studied for. And this one is what I entered after graduation.

3 months ago | Likes 2 Dislikes 0

Lycos confirms this.

3 months ago | Likes 3 Dislikes 0

I dread the future where people don't go to Wikipedia anymore...

3 months ago | Likes 1 Dislikes 0

My most common search term is "wiki" followed by whatever. I don't need ChatGPT to make up shit.

3 months ago | Likes 2 Dislikes 0

I have 'we' and 'wd' as custom search-engine shortcuts for Wikipedia English and German respectively, since... I don't even know. When did Opera introduce them? 20 years ago?

3 months ago | Likes 1 Dislikes 0

fuuuck I'm old
2009 - "built-in tool for creating and editing search engines"

3 months ago | Likes 1 Dislikes 0

I used to do that too, but gave up on it because my habit was just too hard to break. Ingrained for two decades.

3 months ago | Likes 1 Dislikes 0

Separate thought: as a person with mobility issues, I'd Fʀᴇᴀᴋɪɴ love a WALL·E puff person lounge…

3 months ago | Likes 29 Dislikes 2

There's a huge difference between needing it and people just lying back and letting a chair do life for them because why bother with effort.

3 months ago | Likes 18 Dislikes 0

I, being an incredibly lazy person, would also like one

3 months ago | Likes 3 Dislikes 1

Welcome to an early death.

The human body is a weird machine, in that more use (to a point) actually improves it rather than wearing it out. The fastest way to an early grave is to do nothing.

3 months ago | Likes 2 Dislikes 1

It's also weirdly wired to crave what causes an early death: drugs, alcohol, nicotine, fats, sugars. This is just another example; for the most part the body wants to be lazy.
I know this is all a holdover from when chemicals triggered unintended receptors and unhealthy food was extremely scarce, while exertion cost resources.
Humans as a species are weird in this way, as the majority of mammals, given a steady food supply, would do nothing but eat, sleep, and mate.

3 months ago | Likes 3 Dislikes 0

Wikipedia is still there so AI has something to plagiarize information from other than Reddit, duh.

3 months ago | Likes 1 Dislikes 0

I asked ChatGPT to find evidence that I am right. This constitutes proof that I am right.

3 months ago | Likes 2 Dislikes 0

You can easily tell this is just rage bait. If he really thought that, he'd have asked ChatGPT. But he asked the public to get engagement.

And it's super successful, too - you can tell because I don't have a Twitter/X account and don't visit the site, nor do I even know who that guy is. But here I see what he said, thus his bait succeeded.

Imgurians, stop boosting their signal. You're doing exactly what these people want. Apply critical thinking skills you mock others for not having.

3 months ago | Likes 1 Dislikes 0

I'm hoping this person is being sarcastic

3 months ago | Likes 3 Dislikes 1

"Google researchers find the best AI model is 69% right"
https://www.businessinsider.com/google-researchers-find-best-ai-model-69-right-2025-12#:~:text=Google%20researchers%20find%20the%20best%20AI%20model%20is%2069%25%20right

3 months ago | Likes 5 Dislikes 1

If you treat them as a random person on the street being asked a quiz question, it's an impressive ratio. If you treat them as a reference for anything at all, it's pathetically unreliable.

3 months ago | Likes 7 Dislikes 0

Nice

3 months ago | Likes 7 Dislikes 0

I had a 30-minute argument with a clanker about how I didn't need to pull a permit to pour a non-structural concrete pad. It proceeded to insist that what I was doing was structural (things being built on it; it wasn't) and advised me to email the local government for clarification on the matter. I just wanted to know what the standoff from the original foundation needed to be. I ended up checking my code book myself with a regular find/replace. No permit required, 24 inches from the base.

3 months ago | Likes 9 Dislikes 1

Yep, that's usually how it goes. These things don't actually have cognition, they're just search engines on steroids being fed through a black box of human-interaction-soup.

3 months ago | Likes 7 Dislikes 0

And with no concept of objective reality, and with no duty to truth, only with duty to try to keep you engaged.
"I want to do Dumb Thing. How do I do Dumb Thing?"
"Great idea! Here's exactly how you do this: ... ... ..."
"That didn't work. Maybe this is actually a bad idea."
"Fantastic observation! This is something no one should ever do, and anyone who tells you how to do it is terrible."

3 months ago | Likes 5 Dislikes 0

Yeah, the gaslighting with endless positivity really doesn't make them any better, but they're meant to be a product to be sold, so self-advertising that way makes business sense. Just not moral or ethical sense.

3 months ago | Likes 3 Dislikes 0

Where does this idiot think it steals all its information from? How brain-dead is he?

3 months ago | Likes 206 Dislikes 2

You have no idea. And just when you think you do, people prove to be even dumber.

3 months ago | Likes 42 Dislikes 0

Very

3 months ago | Likes 2 Dislikes 0

He offloaded all his thinking to AI. So, he's really safe in case of zombie apocalypse.

3 months ago | Likes 23 Dislikes 0

He's a ChatGPT user, his ability to apply critical thinking atrophies further every day.

3 months ago | Likes 5 Dislikes 0

I'm so glad this is the top comment

3 months ago | Likes 3 Dislikes 0

That would involve critical thinking skills, which are sorely lacking in most of the public.

3 months ago | Likes 3 Dislikes 0

Bluecheck'd, so, very.

3 months ago | Likes 2 Dislikes 0

These people literally believe that these LLMs are programmed with the totality of all human knowledge, not that they're programs that Google stuff and take a guess based on the results.

3 months ago | Likes 16 Dislikes 0

And a lot of their non-encyclopedic knowledge is based on reddit, so we're like, fucked.

3 months ago | Likes 7 Dislikes 0

And they often guess super wrong because they're designed to grab the most-seen/popular results for the things they're asked, rather than the ones with the most sources and backing, because that would be way harder to program. So they don't pull from things like Wikipedia or any other online encyclopedia sites, or science journals or anything like that; they pull from Quora and Reddit threads, where some dude makes a funny answer and gets a bunch of upvotes, leading to ChatGPT confidently telling /1

3 months ago | Likes 3 Dislikes 0

you that the capital of Albania is Bofadeez, or that planes work because they're afraid of the ground, or all kinds of other dumb stuff (or worse, stuff that *sounds* like it might be right to the kind of person who's looking it up because they don't know the subject, but is actually really bad, like telling you to mix certain chemicals that cause unsafe reactions, or how to "fix" a car or a PC in a way that'll actually break it worse).

3 months ago | Likes 5 Dislikes 0

ChatGPT only knows shit because people keep Wikipedia up to date.
If Wikipedia went away, so would the most reliable information source that all AI models use for their facts.
So those chatbots would get even less reliable.
They would only have outdated versions of the information, frozen at whatever the wiki articles said the last time the data was harvested.
Meaning they would lack any corrections and updates to that information.

3 months ago | Likes 8 Dislikes 2

ChatGPT doesn’t know anything. It literally has no semantic understanding of anything, and hallucinates all the time, and should not be trusted, ever, as a source of fact.

3 months ago | Likes 13 Dislikes 0

brb, gonna go use an LLM as a replacement for socialization and a therapist https://www.axios.com/2025/12/21/ai-companions-new-imaginary-friend-children-teens

3 months ago | Likes 1 Dislikes 0

I describe them as akin to cutting out letters/words from magazines to make ransom letters. You have the pieces, but none of the original context.

3 months ago | Likes 3 Dislikes 0

True, it is just a word-prediction calculator.
And Wikipedia is one of the main sources its numerical weighting is based on.
Which is why it can hallucinate: the equation it works through during prediction produced an incorrect probability for what word was needed next.

3 months ago | Likes 5 Dislikes 1

It's not even really "incorrect" probability, since the weighting and prediction isn't based on factuality, but sounding plausibly human. One way to think of it is that the LLM is CONSTANTLY hallucinating, it's always making up what it's saying with zero regard for factuality, and the fact that it's factually correct sometimes is a coincidence due to it being trained on a ton of factually correct text.

3 months ago | Likes 2 Dislikes 0

It's basically just doing math on how often words appear after the prior word.
Then it weighs the words before that word by how often both words come before it, and so on, for a set distance depending on the AI's algorithm.
And the weighting starts from the prompt the user writes. That prompt initiates the equation, and the model uses its training data to calculate how often words follow each other.
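For what it's worth, the "how often words follow each other" idea above can be sketched as a toy bigram model. This is a deliberate simplification: real LLMs use neural networks over long contexts rather than raw frequency tables, but the underlying objective, predicting the next word from what came before, is the same. All names and the tiny corpus here are illustrative.

```python
from collections import Counter, defaultdict
import random

# Count how often each word follows each other word in a tiny corpus.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    options = counts[word]
    words, freqs = zip(*options.items())
    return random.choices(words, weights=freqs)[0]

# "the" was followed by "cat" twice and "mat" once, so the model
# predicts "cat" with probability 2/3 and "mat" with 1/3.
print(counts["the"])  # Counter({'cat': 2, 'mat': 1})
```

Chaining `next_word` calls produces fluent-looking but fact-free text, which is exactly the hallucination failure mode described in this thread, just at a much smaller scale.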

3 months ago | Likes 1 Dislikes 1

It's important to note that it doesn't actually "start" with the user prompt. There's the "system prompt" that's hidden from the user that's actually at the start, and the LLM is ingesting the entire chat history (and possibly previous chats with the user) to generate the next string of text.
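A rough sketch of that assembly step, using the role/content message format common to chat-style LLM APIs. The function name and prompts are hypothetical; the point is only that the hidden system prompt comes first and the model re-reads the whole conversation every turn.

```python
def build_model_input(system_prompt, history, user_message):
    """Assemble everything the model actually sees on one turn:
    hidden system prompt first, then all prior turns, then the new message."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # prior user and assistant turns, in order
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = build_model_input(
    "You are a helpful assistant.",
    [{"role": "user", "content": "Hi"},
     {"role": "assistant", "content": "Hello!"}],
    "What's Wikipedia?",
)
print(len(msgs))  # 4: system prompt + 2 history turns + new user message
```

The user only ever types the last entry; the system prompt and accumulated history silently shape every prediction.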

3 months ago | Likes 1 Dislikes 0

When I hear bosses and superiors saying shit like this, it makes me wanna go:

3 months ago | Likes 57 Dislikes 0

Weird you would call someone so stupid "superior", what does that make you?

3 months ago | Likes 2 Dislikes 41

Here's how you make your point without being a douche: "They aren't your superiors...they just outrank you"

3 months ago | Likes 9 Dislikes 0

...someone who understands words have multiple meanings???

3 months ago | Likes 15 Dislikes 0

A person in an inferior position in the corporate hierarchy?

3 months ago | Likes 33 Dislikes 0