SpartaWolf117
Adam, hit me up, I've got a great bridge to sell you
CertifiedBonerDonor
Why do books still exist? Doesn't TV render them completely irrelevant?
Copperbrat
My neighbor was on a hike a month ago and used ChatGPT to "identify" mushrooms and bones she found. The results were absolutely hilarious.
awholelotofnothin
Wikipedia is the greatest information-based compendium and technical accomplishment of mankind. Chat gpt needs it to run. It's the I part of AI, idiot.
DrunkArchitect
The fact that people don't understand that large language models, or "AI", are nothing more than glorified search engines is terrifying...
seckzie
Exactly. You could code Wikipedia to print out letter by letter, like AI responses do while loading, and these people would go "woahh, intellergens."
hwatL4bloopy
Wtf yeah, let's cut off its verifiable data source so that it starts eating its own tail
AnOceanOfStars
Or any person in Idiocracy after the main character emerges from the capsule.
veronicablood
i hope someone sends a link to this for the boy to see how hard we are disparaging him in public, just so he'll know theres nothing he can do about it. I mean, his NAME and PHOTO.
shepahrdjhon
imagine dealing with this at work. These idiots think AI is magic. AI needs human written and curated info to learn from. I had to explain this to a semi-technical coworker recently when he trained his first ML model to find anomalies and didn't have any way to verify what those anomalies were. I tried to lead him through the process of doing that but he is on a deadline so whatever unverified data was flagged was somehow good enough. I can only hope it doesn't go into production.
shepahrdjhon
And this isn't even LLMs, this is old school hardcore Machine Learning but non-technical managers put it in the same category as "using AI" and force it into stuff it might not even be able to do.
RooGryphon
If I had kids, they wouldn't know what social media is until they knew what critical thinking and logic are
TheUnstoppableWampas
bot account?
awholelotofnothin
Good boy account, maybe. Sycophants and free marketing for billionaires. https://youtu.be/5NfyIpE4zaw?si=VP0LEdI4MZpw1ce-
Dunes8
Because Wikipedia has more human involvement and is more subject to peer review.
Antifalalalala
I think we are getting to the root of the MAGA problem: people who read things and never question them once they enter the empty space between their ears.
marsilies
They only fail to question it if it fits their existing worldview. If it contradicts their worldview, they'll reject it outright, no matter how scientifically accurate it is or how well it's grounded in trustworthy reporting, real-life accounts, etc. It's why Musk has to keep trying to "fix" Grok: even when the LLM they built to push their views states something contradictory, no matter how true it is, they refuse to believe it.
Antifalalalala
Indeed. Who watches the watchers?
MyCatHasDiabete
Per Ask Jeeves, you don't see people in WALL-E until like halfway through the movie
shepahrdjhon
I miss when tech was as innocent as AskJeeves and MySpace. That was the Computer Science world I studied for, and this is the one I entered after graduation.
irmonkey
Lycos confirms this.
hsalonen3000
I dread the future, where people don't go to Wikipedia anymore..
somebackup
My most common search term is "wiki" followed by whatever. I don't need ChatGPT to make up shit
Mithi
I have 'we' and 'wd' as custom search-engine shortcuts for English and German Wikipedia respectively, since ... I don't even know. When did Opera introduce them? 20 years ago?
Mithi
fuuuck I'm old
2009 - "built-in tool for creating and editing search engines"
somebackup
I used to do that too, but gave up on it because my habit was just too hard to break. Ingrained for two decades.
bringbacksocialdistancingplease
Separate thought: as a person with mobility issues, I’d Fʀᴇᴀᴋɪɴ love a WALL·E puff person lounge…
ABrokenThing
There's a huge difference between needing it and people just laying back and letting a chair do life for them because why do effort
mthrndr01
I, being an incredibly lazy person, would also like one
CallMeCourierSix
Welcome to an early death.
The human body is a weird machine, in that more use (to a point) actually improves it rather than wearing it out. The fastest way to an early grave is to do nothing.
monkeydwolfwood
It's also weirdly wired to crave what causes an early death: drugs, alcohol, nicotine, fats, sugars. This is just another example; for the most part the body wants to be lazy.
I know this is all a holdover: chemicals trigger receptors never meant for them, unhealthy food was extremely scarce, and exertion cost resources.
Humans as a species are weird in this way, as the majority of mammals, given a steady food supply, would do nothing but eat, sleep, and mate.
akafluffy
wikipedia is still there so AI has something to plagiarize information from other than reddit, duh.
Svartsinn
I asked ChatGPT to find evidence that I am right. This constitutes proof that I am right.
vorodar
You can easily tell this is just rage bait. If he really thought that, he'd have asked ChatGPT. But he asked the public to get engagement.
And it's super successful, too - you can tell because I don't have a Twitter/X account and don't visit the site, nor do I even know who that guy is. But here I see what he said, thus his bait succeeded.
Imgurians, stop boosting their signal. You're doing exactly what these people want. Apply critical thinking skills you mock others for not having.
IamnotarobitIseethenumber2
I'm hoping this person is being sarcastic
RichardPenne
"Google researchers find the best AI model is 69% right"
https://www.businessinsider.com/google-researchers-find-best-ai-model-69-right-2025-12
Ivain
If you treat them as a random person on the street being asked a quiz question, it's an impressive ratio. If you treat them as a reference for anything at all, it's pathetically unreliable.
JohnWickdidnothingwrong
Nice
mooseablethenok
I had a 30-minute argument with a clanker about how I didn't need to pull a permit to pour a non-structural concrete pad. It proceeded to insist that what I was doing was structural (things being built on it; it wasn't) and advised me to email the local government for clarification on the matter. I just wanted to know what the standoff from the original foundation needed to be. I ended up checking my code book myself with a regular find/replace. No permit required, 24 inches from the base.
Ivain
Yep, that's usually how it goes. These things don't actually have cognition, they're just search engines on steroids being fed through a black box of human-interaction-soup.
The701
And with no concept of objective reality, and with no duty to truth, only with duty to try to keep you engaged.
"I want to do Dumb Thing. How do I do Dumb Thing?"
"Great idea! Here's exactly how you do this: ... ... ..."
"That didn't work. Maybe this is actually a bad idea."
"Fantastic observation! This is something no one should ever do, and anyone who tells you how to do it is terrible."
Ivain
Yeah, the gaslighting with endless positivity really doesn't make them any better, but they're meant to be a product to be sold, so self-advertising that way makes business sense. Just not moral or ethical sense.
CatPlanetQueen9000
where does this idiot think it steals all its information from? how brain dead is he?
ABrokenThing
You have no idea. And just when you think you do, people prove to be even dumber.
Neednoggle
Very
Frenchgeek
He offloaded all his thinking to AI. So, he's really safe in case of zombie apocalypse.
algoritham
He's a ChatGPT user, his ability to apply critical thinking atrophies further every day.
OkButWhyWereTheyFilming
I'm so glad this is the top comment
Kotarisu
That would involve critical thinking skills, which are sorely lacking in most of the public.
somnif
Bluecheck'd, so, very.
jayman0123
These people literally believe that these LLMs are programmed with the totality of human knowledge, not that they're programs that Google stuff and take a guess based on the results
relevantPop3771
And a lot of their non-encyclopedic knowledge is based on reddit, so we're like, fucked.
Gerokeymaster
And they often guess super wrong, because they're designed to grab the most-seen/popular results for the things they're asked, rather than the ones with the most sources and backing, because that would be way harder to program. So they don't pull from things like Wikipedia or other online encyclopedias, or science journals or anything like that; they pull from the likes of Quora and Reddit threads, where some dude makes a funny answer and gets a bunch of upvotes, leading to ChatGPT confidently telling /1
Gerokeymaster
you that the capital of Albania is Bofadeez, or that planes work because they're afraid of the ground, or all kinds of other dumb stuff (or worse, stuff that *sounds* like it might be right to the kind of person who's looking it up because they don't know the subject, but is actually really bad, like telling you to mix certain chemicals that cause unsafe reactions, or how to "fix" a car or a PC in a way that'll actually break it worse).
Targe0
ChatGPT only knows shit because people keep Wikipedia up to date.
If Wikipedia went away, so would the most reliable information source that all AI models use for their facts.
So those chatbots would get even less reliable.
They would only have outdated versions of the information, from the last time the wiki articles were harvested.
Meaning, they would lack any corrections and updates made since.
yqpqfrdp625772
ChatGPT doesn’t know anything. It literally has no semantic understanding of anything, and hallucinates all the time, and should not be trusted, ever, as a source of fact.
The701
brb, gonna go use an LLM as a replacement for socialization and a therapist https://www.axios.com/2025/12/21/ai-companions-new-imaginary-friend-children-teens
Kotarisu
I describe them as akin to cutting out letters/words from magazines to make ransom letters. You have the pieces, but none of the original context.
Targe0
True, it is just a word-prediction calculator.
And Wikipedia is one of the main sources its numerical weighting is based on.
Which is why it can hallucinate: the equation it was working with produced an incorrect probability for which word was needed next.
marsilies
It's not even really "incorrect" probability, since the weighting and prediction isn't based on factuality, but sounding plausibly human. One way to think of it is that the LLM is CONSTANTLY hallucinating, it's always making up what it's saying with zero regard for factuality, and the fact that it's factually correct sometimes is a coincidence due to it being trained on a ton of factually correct text.
Targe0
It's basically just doing math on how often words appear after the prior word.
It then weighs the words before that one, for how often they come before it, and so on, out to a set distance depending on the AI's algorithm.
The weighting starts from the prompt the user writes. That prompt initiates the equation, and the model uses its training data to calculate how often words follow each other.
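For anyone curious, the "count how often words follow each other" intuition above can be sketched as a toy bigram model. Real LLMs use neural networks over tokens rather than raw counts, and the corpus and function names here are made up for illustration:

```python
# Toy bigram model: counts how often each word follows the previous one,
# then predicts the most frequent follower. A minimal sketch of the
# counting intuition, not how production LLMs actually work.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally followers: follows["the"] ends up as {"cat": 2, "mat": 1, "fish": 1}.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # "cat" (follows "the" twice, beating "mat"/"fish")
print(predict_next("fish"))  # None ("fish" is never followed by anything)
```

Note the model is perfectly happy to predict a word that makes no factual sense; it only knows frequencies, which is the point the comments above are making.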
marsilies
It's important to note that it doesn't actually "start" with the user prompt. There's the "system prompt" that's hidden from the user that's actually at the start, and the LLM is ingesting the entire chat history (and possibly previous chats with the user) to generate the next string of text.
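That assembly can be sketched roughly like this, assuming a plain-text context format (the layout and role labels here are illustrative, not any vendor's actual API):

```python
# Sketch: a chat LLM's real input is the hidden system prompt, plus the
# whole conversation history, plus the newest user message, joined into
# one context. The model continues that entire string, not just the
# latest prompt.
def build_context(system_prompt, history, new_message):
    """history is a list of (role, text) pairs from earlier in the chat."""
    messages = [("system", system_prompt)] + history + [("user", new_message)]
    lines = [f"{role}: {text}" for role, text in messages]
    return "\n".join(lines) + "\nassistant:"

ctx = build_context(
    "You are a helpful assistant.",
    [("user", "Hi"), ("assistant", "Hello! How can I help?")],
    "What is the capital of Albania?",
)
print(ctx)
```

This is also why chats drift: every prior turn, good or bad, stays in the context the model is predicting from.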
irisewithredeyes
When I hear bosses and superiors saying shit like this, it makes me wanna go:
DownUpUpDown
Weird you would call someone so stupid "superior", what does that make you?
cuddleskunk
Here's how you make your point without being a douche: "They aren't your superiors...they just outrank you"
VinnieJonesDiary
...someone who understands words have multiple meanings???
Frederf
A person in an inferior position in the corporate hierarchy?
DownUpUpDown
newb
MyProfileIsIrrelevant
https://media2.giphy.com/media/v1.Y2lkPWE1NzM3M2U1NTIyeG5yem5mMDY2YmxwYTRuYXphZXY2cmVjOTh4MGl5a2FhbXZlcSZlcD12MV9naWZzX3NlYXJjaCZjdD1n/1zSz5MVw4zKg0/200w.webp
couldyounot123
oof buddy
Frederf
I mean, unofficially, yes.