Conversational AI

Aug 22, 2025 10:01 AM

zombiejedediah

Views 33874 | Likes 828 Dislikes 28

"Say something that sounds like an answer" machine is a mouthful, but it's a better name than "Artificial Intelligence".
Let's see what happens when they start realising it's all hype and there aren't any really good business models out there (if there were, we would have found them by now).

artificial_intelligence

It's like "Tell me more about what I WANT to hear, and feel free to use artistic license."

7 months ago | Likes 2 Dislikes 0

I personally describe it as "generating the most likely answer", and a good chunk of the time, that's also the correct answer. Not Google's though.

7 months ago | Likes 1 Dislikes 0

Fundamentally, people do not understand it. Fundamentally, many people working on AI in the field do not understand it. People utilizing it are mostly dumb. Inconsistent results and behaviour are not usable and not production-ready.

7 months ago | Likes 2 Dislikes 1

AI is a Chinese room.

7 months ago | Likes 2 Dislikes 0

I hate that you can't actually have a conversation with it. As a conversation gets longer it drops details and completely makes shit up on its own.

7 months ago | Likes 2 Dislikes 0

There is no such thing as a.i.

All we have are chatbots. A chatbot is a Chinese room, not a thinking machine.

7 months ago | Likes 3 Dislikes 1

LLM != AI

Corporate America: "I don't understand nerd math."

7 months ago | Likes 2 Dislikes 0

It's like a bird that knows how to talk like people but doesn't know what it's saying.

7 months ago | Likes 2 Dislikes 0

It isn’t all hype, though, and there are definite business uses. As for AI understanding, you probably want to watch this, as it’s by someone who is very likely more knowledgeable about AI than you are: https://youtu.be/6fvXWG9Auyg

7 months ago | Likes 1 Dislikes 0

"Over-complicated autocorrect, and equally as reliable."

7 months ago | Likes 1 Dislikes 0

Human speech simulator

7 months ago | Likes 2 Dislikes 0

Not surprised there is no date on that wall of text. Things have changed in two and a half years.

7 months ago | Likes 2 Dislikes 1

LLMs are designed to keep you engaged, but there is a pattern and it gets boring with time. For legal purposes, they end the conversation if you venture into grey areas. They are designed to tell you what you want to hear, unless you're asking about a 'protected' subject like Israel; then they change into a snake-oil politician fighting tooth and nail for their life to stay in office.

7 months ago | Likes 16 Dislikes 4

So true. I asked ChatGPT some probing questions about Republican connections to Nazi philosophy, and the prevarication was intense.

7 months ago | Likes 1 Dislikes 0

LLMs are awful, and only getting worse.

7 months ago | Likes 2 Dislikes 0

It started with predicting the word you were typing, then it tried to be clever and predict the next word before you type it, and now it's trying to predict the next paragraph from an initial 'prompt'. It still doesn't understand any of that; we just think it does because all too often we don't really test it.

7 months ago | Likes 2 Dislikes 0
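
That comment captures the core loop: look at what came before, pick a statistically likely next word, append it, repeat. As a toy illustration only (a minimal sketch; real LLMs use neural networks over long contexts, not word-pair lookup tables), a bigram next-word predictor in Python looks roughly like this:

```python
# Toy illustration of "predict the next word": a bigram frequency table.
# Real models replace the lookup with a neural network, but the loop --
# pick a likely next token, append it, repeat -- has the same shape.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the dog slept on the rug".split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Sample the next word in proportion to how often it followed this one.
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```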

Yep, this is essentially how it works. You can get it to be more accurate by being really precise with your question, like "give me an answer to this question which is based on information found in real published and peer-reviewed articles and make sure to quote from and cite those sources in your answer". And those kinds of parameters can be baked into the system so the end user doesn't have to define them so tightly themselves. But fundamentally, yes, this description is accurate.

7 months ago | Likes 4 Dislikes 0
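
For what it's worth, "baked into the system" usually just means a system prompt that ships with the product. A minimal sketch of the idea, assuming the official OpenAI Python client; the model name, prompt wording, and example question are placeholders, not a recommendation:

```python
# Minimal sketch: bake grounding/citation instructions into a system prompt
# so the end user doesn't have to write them every time.
# Assumes the OpenAI Python client and OPENAI_API_KEY in the environment;
# the model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Base answers on information from real, published, peer-reviewed articles "
    "where possible. Quote from and cite those sources. If you are unsure, "
    "say so instead of guessing."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Is milk more hydrating than water?"))
```

Note that this only biases the output toward citing things; it doesn't guarantee the citations are real, which is why checking them still matters.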

Y'all say this like it's a revelation, but... do you folks just not know how this shit works?

7 months ago | Likes 5 Dislikes 1

A LOT of people see false info given out by "AI" bots and assume it's deliberately misleading. I've seen posts on Imgur even claiming that AI tries to lie.

7 months ago | Likes 4 Dislikes 0

I've limited myself to using ChatGPT for D&D purposes only (character backstory starting points, item generation, puzzle inspiration) and Gemini for character/NPC art that I can't accurately do with heroforge. It has been very useful when I get writer's block as a DM. But I am ever thankful this shit wasn't around when I was in grad school.

7 months ago | Likes 3 Dislikes 0

It's a Chatbot. We've had them for over 30 years. Hell, BonziBuddy was more useful than ChatGPT.

7 months ago | Likes 5 Dislikes 0

Exactly. It's a chatbot with auto complete built in and lots of experience as a chatbot, but that's it. At any point one of them could trip an auto complete function and start spitting out "A/S/L? XO BB, click here to see my n4ughty p1cs" because that's all it is. It doesn't think, it auto completes the next part of the conversation.

7 months ago | Likes 4 Dislikes 0

Parroting BS you read on the internet? Have you actually USED any AI for something other than chatting? Even the makers of the AI admit they don't fully understand how it works, and that it's NOT just predicting what word comes next. It can solve problems it has never seen before. Just ask it. Something simple like creating a formula to do something in Excel or Google Sheets after you describe the sheet to it. It has to actually understand the design in order to answer you correctly.

7 months ago | Likes 1 Dislikes 3

"Even makers of AI admit they don't fully understand it"

'Mysterious Ways' for ones and zeroes.

7 months ago | Likes 2 Dislikes 0

Idiot billionaires don't understand how the tech they invest in works, who was surprised by this? Nobody. You don't know how it works, also not surprising. But don't assume it's some incomprehensible magic mystery box just because your brain can't wrap around it.

Yes, I have, and no, it can't. It's a special autocomplete with lots of information to pull from; that doesn't mean it understands anything, only the pattern of words it produces. There's not even anything to be mysterious about.

7 months ago | Likes 3 Dislikes 0

It isn't artificial - all of its content is cribbed from people who aren't being paid or credited. It isn't intelligent - it's just making stuff up. And it is destroying our electrical grid and water supply

7 months ago | Likes 7 Dislikes 4

So… is it making up content, or stealing content?

Kinda seems like an either/or sort of assertion, here.

7 months ago | Likes 1 Dislikes 2

That isn't my problem.

7 months ago | Likes 2 Dislikes 0

Especially infuriating that center folk keep using it against Trumpers, not realizing that their nobler intentions don't eradicate the cost of using AI. Or worse, they know the cost is there but think it's acceptable if they do it.

7 months ago | Likes 3 Dislikes 1

I gotta argue that taking actual human intelligence and parroting it is very artificial by definition, and as much as I like your enthusiasm, what you're saying isn't holding water.

7 months ago | Likes 3 Dislikes 2

Your inability to understand what I'm saying isn't my problem.

7 months ago | Likes 2 Dislikes 0

I used to work with incompetent management that could give responses that sounded like answers but were useless. So I can see why management like that cannot fathom the uselessness of AI and LLMs.

7 months ago | Likes 11 Dislikes 0

Then you and they are using it for the wrong task. It is not a lawyer. It is not a psychologist. It doesn't know right from wrong. It actually sucks at the thing directly in its name - language.

Try it for IT and data tasks. That is where it excels, and that is what makes it super valuable. NotebookLM is extremely useful, for example. I put all my technical docs on a subject in one NotebookLM notebook and then my team can open it and ask questions they would normally ask me. What a time saver.

7 months ago | Likes 1 Dislikes 1

This feels like an outdated conversation at this point. While AI can still hallucinate, most modern services include links citing their sources which you can click through on, in order to verify that what it's saying is accurate.

Most of the time though I'm not looking for facts like it's Wikipedia. I'm looking for working code snippets, or basic ideas like names for fantasy characters or a list of types of encounters that might happen in a medieval city.

7 months ago | Likes 15 Dislikes 5

That depends on the prompt. Yes, it provides sources for AI generated *search results,* but conversations don't. What's stupid is that more often than not the search results ARE STILL WRONG. It can look at things but not necessarily interpret them correctly.

I've searched up questions re: games and it'll offer up *completely* wrong answers and the links are to other game guides, lol.

7 months ago | Likes 6 Dislikes 0

All that being said, I don't think OP is trying to shit on AI on the whole. They're just saying "stop expecting it to be smart. Stop claiming it's trying to lie or mislead you. It's just predictive text."

7 months ago | Likes 5 Dislikes 0

OP's wording is very "kids these days". I agree, AI has given me terrible answers. I do use it as a first-swing search when I just don't know how to phrase something for Google to get real results. It does a good job of laying something out and giving me options to continue research.

7 months ago | Likes 1 Dislikes 0

But so far, any app that integrates it as a way to write a paper or to have a conversation is just slop.

7 months ago | Likes 1 Dislikes 0

I agree, @OP is doing a "kids these days" and lying.

7 months ago | Likes 6 Dislikes 6

OP thinks @UsuallyARabbit either has never used "conversational AI", or believes it's an intelligent robot that's answering him.

7 months ago | Likes 4 Dislikes 1

I think it's the opposite. They're pointing out it has its uses, and to call it "AI that is lying to you" is wrong. You have to bear in mind the limitations and understand it is not true artificial intelligence, instead of braying that it's deliberately misleading. "Remember, it's just a language prediction model" is the takeaway, imo.

7 months ago | Likes 4 Dislikes 0

You're taking a more tempered conclusion. I agree we should be wary of AGI. And it's not true AI, neither hard nor soft. But "AI is intentionally lying to you" isn't true either. The model may hallucinate, they all do; every dev I know who is tits deep in building a corporate model will be the first to tell you AGI is a mess and terrible. That's not to say it doesn't have uses. Treat it like aughts Wikipedia: recheck what it says.

7 months ago | Likes 1 Dislikes 0

Yeah that's what I'm saying. It's not lying, it's just kinda stupid atm.

7 months ago | Likes 1 Dislikes 0

It did win Dumf two elections...

7 months ago | Likes 1 Dislikes 0

Is milk more hydrating than water? "Yes."
Is drinking water the best for hydration? "Yes."

(because there are papers that claim both of these things)

7 months ago | Likes 9 Dislikes 1

Honestly it sure seems to provide the necessary nuance to this answer:

7 months ago | Likes 9 Dislikes 1

This also happened before AI when you googled something. I remember googling if drinking water caused cancer, and the answer was that of course it does! So the trick is to avoid asking yes/no questions because if you do it'll always find something that says yes.

7 months ago | Likes 5 Dislikes 0

(And why would I google if drinking water causes cancer, you may ask? Well, because I couldn't think of a more stupid question.)

7 months ago | Likes 4 Dislikes 0

Even that suggests it's going to those papers, finding those claims, and reporting them. LLMs don't even do that. They have their training data as a base to work from and make sentences that look like they'd be a good response. As soon as a reply is given, the rationale for the reply is gone - it was a weighted number generator, not a searched answer. It's the difference between reading every paper once but taking no notes, vs actually citing papers.

7 months ago | Likes 3 Dislikes 1

ChatGPT does use the sources, and cites all the sources it uses.

7 months ago | Likes 2 Dislikes 0

Perplexity literally provides links you can visit. It is going to those papers, finding those claims, and reporting them.

7 months ago | Likes 3 Dislikes 0

That's indeed how it works, which means LLMs are useless for most purposes they're currently being advertised and sold for by the tech industry. You can't blame the average person for falling for false advertising. When Google displays LLM-generated drivel at the top of their search results, with an "AI Summary" headline, of course people are going to assume that the AI truthfully summed up the gist of all search results.

7 months ago | Likes 57 Dislikes 3

A friend of mine uses it to make up fictional animals and tells it to try to explain them biologically. That works because it isn't supposed to create any real, useful information anyway.

The thing is factually just a really elaborate, talkative Furby who pretends to have a PhD, that's it.

7 months ago | Likes 11 Dislikes 1

Before AI, Google used to have "highlighted answers" at the top of each search, and I don't know how it did it but they were always reliably WRONG. So I guess they wanted to maintain the tradition.

7 months ago | Likes 15 Dislikes 0

Google also used to "do no evil", so 🤷‍♂️

7 months ago | Likes 7 Dislikes 0

Yeah but that was not the kind of tradition that is profitable to maintain.

7 months ago | Likes 2 Dislikes 0

What about the "logic" models? (Serious question.) Reflecting, the leap from "what's the next word" to "what's the next (mathematically) logical step" seems small—still fraught, but closer to the architecture/logic the system is based on. Curious y'all's thoughts.

7 months ago | Likes 2 Dislikes 1

Is there somewhere we can read up more on this? I've only heard of language predictors atm.

7 months ago | Likes 2 Dislikes 0

Basically giving the LLM (or AI or machine learning algorithm) logical operators to do formal verification: https://www.zdnet.com/article/how-logic-can-help-ai-models-tell-more-truth-according-to-aws/ Like a mathematical proof!

7 months ago | Likes 2 Dislikes 0
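
The linked article is about pairing models with automated reasoning, not something the chat products do out of the box. As a rough idea of what the "logic engine" half looks like on its own, here is a minimal sketch with the Z3 SMT solver; the claim being checked is just an illustration, not something from the article:

```python
# Minimal sketch of a formal check with the Z3 SMT solver (pip install z3-solver).
# The "logic engine" idea: encode a claim as constraints and ask the solver
# whether a counterexample exists, instead of trusting generated text.
from z3 import Ints, Solver, Implies, And, Not, unsat

x, y = Ints("x y")

# Claim to verify: if x > 0 and y > 0, then x + y > 0.
claim = Implies(And(x > 0, y > 0), x + y > 0)

solver = Solver()
solver.add(Not(claim))  # search for a counterexample

if solver.check() == unsat:
    print("No counterexample exists: the claim always holds.")
else:
    print("Counterexample found:", solver.model())
```

A claim that can be encoded this way can be mechanically checked; the hard part is doing that encoding for messy natural-language claims.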

Interesting, thanks! It looks like the main issue will still be the veracity of the information from which the AI is pulling, esp. in terms of the "always/never" logic. When you're sampling from the sum total of all (online) human knowledge, it is going to be really difficult to sift truth from reality.

But that's the problem we face, too. People posting wrong info, fighting over sources--I mean, we can say "if there's 20 studies on pubmed and they all have the same result, then this treatment-

7 months ago | Likes 1 Dislikes 0

for this disease is very likely legitimate" but on the whole it can be kind of a crapshoot. And this (what this AI is now doing) looks closer to human thought in terms of weighting possibilities mathematically.

I'd be interested in knowing *how* it weights them. Does it value some sources more highly than others? Does it look at conflicting sources and take into account the source's general trustworthiness vs. number of conflicts on either side?

It might be an interesting window into how WE

7 months ago | Likes 1 Dislikes 0

weigh our information, too.

7 months ago | Likes 1 Dislikes 0

(truth from reality... I was tired, clearly)

7 months ago | Likes 1 Dislikes 0

Wonderful that we're making sure people know this. ChatGPT, Claude, Copilot, etc., do not have logic engines. They do not have math engines. They do not have rule engines. They do not "know" things. They do not understand that they are lying when they hallucinate. They are fancy autocorrects that can be really good at some things in narrow areas when helping an expert, but you absolutely need someone who knows the answer making sure they are right before using them for anything important.

7 months ago | Likes 18 Dislikes 3

That’s not strictly true… the “logic engine” models do exist, and some tools can use them, but they’re expensive and difficult to build and often highly-specialized, so they don’t make such frequent appearances as the basic LLMs.

To use the time-honored car analogy, the current situation is that we're looking at a go-kart track, watching the karts drive around, and people think that's the entirety of motor vehicles. Trucks and race cars still exist, but we just rarely see them.

7 months ago | Likes 1 Dislikes 1

Logic engine models exist, but ChatGPT, Claude, Copilot, etc. don't have them, as OP stated. All those are LLMs.

7 months ago | Likes 1 Dislikes 1

You are incorrect. Try using it for something like IT tasks. There IS DEFINITELY some logic working in there, but nowhere near AGI. Find any source code or script on the web that does something useful and then ask AI to explain it to you. It's an amazing tutor. Then ask it to make a change to do something else random. Claude does this with precision and faster than I can type the question. Autocorrect can't do that. Predicting what word comes next can't do that.

7 months ago | Likes 4 Dislikes 1

Which makes it suck all the more that they are largely utilized by laymen who want to pose questions with their preferred answer already in mind.

7 months ago | Likes 4 Dislikes 0

Main issue I see with AI chatbots is that they don't know how to say "I don't know" or "I'm not sure". They just reliably spit out whatever sounds closest to an answer they can produce, without any hint about how confident they were on it. They're like that brother-in-law at family meetings that has answers for everything, sounds very smart, but is just making up most of it.

7 months ago | Likes 12 Dislikes 0

I make mine preface each answer with a confidence rating, and have it cite sources.

7 months ago | Likes 4 Dislikes 0

Oh, cool idea!

7 months ago | Likes 3 Dislikes 0

May I ask, how do you phrase that? I'd like to try.

7 months ago | Likes 3 Dislikes 0

Go into Customize ChatGPT, then under What traits should ChatGPT have:

"Precede the beginning of each answer with a confidence rating from 0 to 100%.

Cite sources when you can in MLA style where possible. Include the URLs in plain text so they can be copy/pasted easily."

7 months ago | Likes 6 Dislikes 0

Meh, I'd have requested them in BibTeX

7 months ago | Likes 1 Dislikes 0

It can be influenced, though. I remember one time I chastised it for boldly stating something that was false and then it started giving me abysmal (like 0%) confidence ratings a lot more frequently lol. Guess I damaged my bot's self-esteem 😆

7 months ago | Likes 5 Dislikes 0

GOOD.

7 months ago | Likes 1 Dislikes 0

Yup. People don't know how to use it and get far worse results. Like a lawyer leading a witness, you can influence the bot. If done unintentionally, this can be terrible. Or like if you ask it for 10 reasons why vaccines are bad, you get just that. Or "explain how vaccine researchers lie". But you can reverse it and ask "explain how vaccine deniers lie". And it can be more subtle, where mentioning the wrong word, or something earlier in your conversation, skews the response.

7 months ago | Likes 3 Dislikes 0

Part of the problem is that we call it AI. It's not AI, it's a language model - it has more in common with autocorrect than it does with artificial intelligence

7 months ago | Likes 479 Dislikes 10

And that's exactly why I don't call it AI, even if that's what the company named the stupid thing.

7 months ago | Likes 2 Dislikes 2

This, exactly. It's autocorrect with a GUI.

7 months ago | Likes 5 Dislikes 2

In gaming, "AI" opponents used to be called "CPU".

7 months ago | Likes 2 Dislikes 1

I've taken to calling them "pAI" - as in, pseudo-AI, after pRNG. Which is what it is, a fake "artificial intelligence," aka not one.

7 months ago | Likes 1 Dislikes 0

I continue to call it fancy auto-complete. It gets across what is happening fairly well, and while it's not exactly right, it's also not a wrong way to conceptualize LLMs. So questions to an LLM will probably give me the exact vocabulary I need to search for, but I don't trust the answers to be right without looking at sources directly.

7 months ago | Likes 1 Dislikes 0

I feel like I've been screaming this into the void for the last five years. It. Isn't. AI. It's a fucking language model that you've gone and anthropomorphised.
Stop asking it for factual answers. It isn't looking things up and it cannot cite where it got its answers from because it just made them up on the spot.

7 months ago | Likes 6 Dislikes 2

It's VI at BEST. I loathe the term "AI"

7 months ago | Likes 2 Dislikes 1

and it's perfect! /S

7 months ago | Likes 2 Dislikes 1

"It's only AI if it comes from the prefrontal cortex of the human brain. Otherwise it's just sparkling market hype."

The definition of AI offered here doesn't match technical definitions of AI OR popular definitions widely accepted right up until ChatGPT came out and needed to be defined as "not AI". The Turing Test was still considered a good indicator of how to identify "real AI" as recently as 10y ago - probably closer to 5y. This is how it's always been - AI "is" what we've not done yet.

7 months ago | Likes 4 Dislikes 1

It's a word calculator.

7 months ago | Likes 4 Dislikes 0

Oxford dictionary definition of AI: The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

So, by definition, it's AI. However, it sounds like some people's definitions require the system to have the ability to train itself. Though at the rate other people are wrongly conflating LLMs with AI, that might require an entirely different word.

7 months ago | Likes 6 Dislikes 0

Previous comments I'm too lazy to retype: /gallery/FgObvsQ/comment/2471164211

7 months ago | Likes 3 Dislikes 2

This is a marketing tactic. The deception is intentional because it is lucrative.

7 months ago | Likes 27 Dislikes 0

Should be regulated out of existence and all those profiting heavily from such deception need a rope prepared.

7 months ago | Likes 3 Dislikes 0

Yeah, remember that self-balancing scooter that its makers tried calling a hoverboard?

7 months ago | Likes 11 Dislikes 0

I keep making that exact point but people don't like hearing it. People just slap a fancy sci-fi name onto an undeserving gimmick and misunderstanding spreads like crazy.

7 months ago | Likes 2 Dislikes 0

In the thought experiment, Searle imagines a person who does not understand Chinese isolated in a room with a book containing detailed instructions for manipulating Chinese symbols. When Chinese text is passed into the room, the person follows the book's instructions to produce Chinese symbols that, to fluent Chinese speakers outside the room, appear to be appropriate responses.
https://en.wikipedia.org/wiki/Chinese_room

7 months ago | Likes 11 Dislikes 0

The word "understand" is doing all the heavy lifting in this quote. This person doesn't "understand" Chinese and so (the reasoning goes) no well defined process can therefore produce "understanding". It's just Cartesian dualism in a funny hat.

7 months ago | Likes 3 Dislikes 0

I would say in this case it is however a rather good analogy to how LLMs work: they manipulate symbols according to very large probabilistically generated playbooks, without any kind of process we would recognize as "intelligence". If they happen to fall out of their playbook, or their original generative material is contradictory, they produce wildly incorrect results.

7 months ago | Likes 2 Dislikes 0

It's just Clippy with massively inflated energy consumption

7 months ago | Likes 4 Dislikes 2

It IS autocorrect; it determines the most common response, with math. It's just statistical data with some fancy algebra, nothing intelligent.

7 months ago | Likes 2 Dislikes 0

I agree with your sentiment. Historically, though, it has been difficult to define what AI is supposed to mean, or even what the "I", as in Intelligence, is. It is hard to define intelligence narrowly enough that it would not include current-day LLMs, while keeping it broad enough to allow for intelligences other than human.

7 months ago | Likes 6 Dislikes 2

A lot of the argument against LLMs being "real" AI seems based on not liking their impact rather than having a coherent vision of what AI would look like. It's probably good to remember that 5-10y ago the Turing Test was still popularly accepted as a "good measure" of what "real AI" would look like... pretty much right up until it could be consistently passed, at which point it was discarded as "obviously wrong". Which fits with the history of AI; "real" AI has always been "what we can't do yet"

7 months ago | Likes 2 Dislikes 2

GiTS scene with the Puppetmaster regarding defining what life is: https://youtu.be/YZX58fDhebc?t=215

7 months ago | Likes 1 Dislikes 0

I don't know a good way to make this into a concise definition, but the ability to actually process and interact with information, and make decisions, should be key to actually being intelligent in a way that matters.

Fundamentally, at no point is an LLM thinking. It's not utilizing knowledge to come up with an answer, it's just calculating the most likely string of words to come after the previous string of words. The fact that it sounds as human as it does is more an emergent property of-

7 months ago | Likes 2 Dislikes 0

Our language than any particularly impressive capability of the LLMs.

7 months ago | Likes 2 Dislikes 0

But I would love to hear any ideas for such a definition!

7 months ago | Likes 1 Dislikes 0

I've said it before, but an LLM is essentially just a disembodied futuristic Furby who pretends to have a PhD and can actually talk properly.

It's a new techno toy, that's it, it's about as good at being an employee as a Nintendo DS would be as a quantum computer.

7 months ago | Likes 10 Dislikes 3

Have you seen a modern Furby, though? This is a horror even beyond LLMs https://shop.hasbro.com/product/furby-dj-furby-interactive-toy-rainbow/G0668

7 months ago | Likes 5 Dislikes 0

Bubblegum nightmare ....

7 months ago | Likes 2 Dislikes 0

Why did they make it loooonnnggg?!
Also I see that and raise you https://www.youtube.com/watch?v=LEc2up6cBlE&t=1080s

7 months ago | Likes 2 Dislikes 0

I disagree. Generative AI has reasoning capabilities and can use tools to create novel responses and content. It’s not artificial general intelligence, but it is AI. By the way, I work for an AI company, so my response is not totally ungrounded. I recognize that also means my response is not unbiased. So, just my thoughts.

7 months ago | Likes 6 Dislikes 4

At least you're willing to admit your response is biased. Most people in this conversation (here and elsewhere) wouldn't dream of doing so.

(Mind you, I completely agree with your position. It's just a bit frustrating seeing how many people aren't willing to admit that they got to their position by anything but pure, unmotivated reasoning - particularly those arguing against AI.)

7 months ago | Likes 3 Dislikes 2

SMH. It's always easy to spot people that have not used AI for anything, or for more than one specific thing. Just parroting BS they read somewhere.

Try rapid scripting of something that would normally take a ton of time digging through API documentation and then trial & error. Claude can spit out a working script (the first time) based on exact specifications, and also quickly locate a bug in one I wrote years ago before AI. That is not "just a language model", nor "similar to autocorrect."

7 months ago | Likes 3 Dislikes 2

It literally is. It's looking back on the most commonly used examples it has in its database, based on the prompt you've used. It's literally just parroting something someone has already done before and, in the example you used of finding a bug, identifying an irregularity in the script. Autocorrect.

7 months ago | Likes 2 Dislikes 0

It's also not AGI. Current AI systems are somewhere between, but do exhibit some sort of intelligence. What's amazed me the most is giving Claude instructions on fixing one problem in a github project with hundreds of files, only to have it come back and also fix something else I failed to see needed fixing, but was indirectly related to what I asked it to help me fix.

Writing proper prompts is important. "Do not guess or make up answers. If you don't know, say so. Cite all references."

7 months ago | Likes 3 Dislikes 1

If anyone reading this is in IT, Gemini sucks balls at IT stuff. Just horrible. I went back-and-forth with Gemini on a somewhat complex NGINX + Lua task, only to have it keep telling me to try things I already told it were not valid Lua options.

Gave up and tried Claude. With a single prompt, Claude looked at the code not working and told me that was completely the wrong way to do what I was trying to do, then spit out working alternative code the first time.

ChatGPT is hit or miss.

7 months ago | Likes 1 Dislikes 1

Intelligence is the ability to become better and better at solving a problem. Evolution through natural selection is an intelligence. LLMs are intelligences, and they are artificial. Using the term AI is entirely correct.

7 months ago | Likes 5 Dislikes 2

"But nooooooo, even if that's the technical definition of AI in CompSci, that's not what EVERYONE ELSE means when they've been saying AI for decades!!!"

This popular argument works well just so long as we conveniently forget how recently the Turing Test was cast into the memory hole...

7 months ago | Likes 3 Dislikes 2

If we assume your definition of intelligence is correct, current "AI" models STILL aren't intelligent. They don't get better at solving problems. The model is static. The model does only as well as it can, and then a new model is created (BY HUMANS) with new data, values, and techniques which improve the capabilities over the old model. The old model still exists unchanged, and has not improved by a new model being generated. The new model is not somehow a part of the old model. Not intelligent.

7 months ago | Likes 5 Dislikes 1

That's just because we stopped training at an arbitrary point. It is a snapshot. Evolution exists even though the currently living individuals aren't getting any fitter.

7 months ago | Likes 2 Dislikes 1

I work with AI systems (and LLMs in front of them) that have continually-learning models in the back end.

They exist, but they have limited utility because they can never give the same answer twice, even with low noise.

I think of them as toddlers… you tell them all about firetrucks, and for a few days all they can talk about is firetrucks, until you show them a spaceship…

7 months ago | Likes 3 Dislikes 0

But these machines WON'T learn on their own. Their "evolution" will stagnate until an external intelligence (i.e. people) gives them new info. That is not intelligence at all. AI cannot look at its own code and discover new things about itself it didn't realize.

7 months ago | Likes 3 Dislikes 0

No intelligence, artificial or natural, can learn without being given some form of data and feedback by its environment. And no, current AIs don't rewrite their own code, but they have weights (long, long lists of numbers) they adjust. Just like evolution can adjust DNA.

7 months ago | Likes 1 Dislikes 2

Meh, you could say the same about most of my coworkers

7 months ago | Likes 97 Dislikes 3

I dunno, I wouldn't trust a lot of my co-workers to edit for grammar and spelling.

7 months ago | Likes 2 Dislikes 0

You'd be correct about most of your coworkers

7 months ago | Likes 24 Dislikes 0

I wish my coworkers used autocorrect

7 months ago | Likes 3 Dislikes 0

That's because your coworkers aren't intelligent either.

7 months ago | Likes 5 Dislikes 0

When you work public facing jobs, especially retail, or frequently interact with others in your workplace you find more and more truth to the quote: “Never underestimate the power of stupid people in large groups.”
— George Carlin

7 months ago | Likes 3 Dislikes 0

You're thinking of EDIFICIAL Intelligence. Your coworkers have an edifice that looks like an intelligent creature, but underneath there's just nothing going on.

7 months ago | Likes 7 Dislikes 0

That sounds an awful lot like "they're NPCs."

7 months ago | Likes 4 Dislikes 0

Nah, NPCs can be intelligent

7 months ago | Likes 1 Dislikes 0

And you'd be right to say so.

7 months ago | Likes 2 Dislikes 0

And the fact that it's a language model is very significant, when people keep pretending it's a knowledge model instead.

7 months ago | Likes 44 Dislikes 2

This is the exact problem. Every single disparaging comment here is complaining about this exact issue.

Try using AI for something other than chatting.

Give it a URL and ask it to summarize a recipe so you don't have to read through 10 pages of background from the author.

7 months ago | Likes 2 Dislikes 2

Want to learn a programming language? Ask it to tutor you. Give it an existing small source code and ask it to explain it to you so you can learn how it works. ChatGPT is an excellent tutor.

I've even given it math problems it could not possibly have seen before. Problems that were worded very awkwardly. "You are the teacher and I am a student learning algebra. Explain the solution to this problem, step by step to help me understand it so I can do the next one myself."

7 months ago | Likes 2 Dislikes 1

For bonus points, ask it to give you another math problem that is similar to the one it just explained, but different enough that you can't just guess the answer without understanding how to work the problem.

Give it a PDF and tell it to only refer to that PDF when giving an answer. If it doesn't know, don't make up answers or pull from external sources. Then ask it a question about that document. I've done this with HOA R&R's. Amazing time saver!

7 months ago | Likes 2 Dislikes 1
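
The "only refer to that PDF" trick is basically document grounding: put the document text in the prompt and forbid outside knowledge. A minimal sketch, assuming the pypdf package; the file name and question are placeholders:

```python
# Sketch of the "answer only from this PDF" pattern: extract the text and wrap
# it in a prompt that forbids outside sources. Assumes pypdf (pip install pypdf);
# the resulting string goes to whatever chat model you use.
from pypdf import PdfReader

def grounded_prompt(pdf_path: str, question: str) -> str:
    reader = PdfReader(pdf_path)
    document = "\n".join(page.extract_text() or "" for page in reader.pages)
    return (
        "Answer using ONLY the document below. If the answer is not in the "
        "document, say you don't know. Do not use outside sources or guess.\n\n"
        f"--- DOCUMENT ---\n{document}\n--- END DOCUMENT ---\n\n"
        f"Question: {question}"
    )

# Hypothetical usage with a placeholder file and question:
print(grounded_prompt("hoa_rules.pdf", "What are the quiet hours?"))
```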

It's not just a language model, though - there's a reasoning model underlying it *based on that language model* and there are emergent functions performing more straightforward "reasoning". It IS a knowledge model - the argument that "it can't be a knowledge model b/c it makes mistakes" ignores that it's seeking to produce something equivalent to human knowledge. What it's not is a Pure Objective Truth model.

(If by "it's not a knowledge model" you mean it's trying to produce language output

7 months ago | Likes 5 Dislikes 1

instead of some undiluted form of pure reason, you'll be deeply disappointed when you discover how human beings communicate knowledge. I suspect the real problem is that you think human cognition is best described by rational AI systems rather than probabilistic ones - i.e., "truth is binary, not some approximation derived from sampling numerous truth claims" - but that really isn't how human brains work. We're gigantic slow-but-massively-interconnected pattern matching neural networks.)

7 months ago | Likes 3 Dislikes 1

Most of the people commenting here are just parroting some BS they read somewhere else, from people that haven't actually USED AI for more than just casual chatting.

There IS DEFINITELY an underlying intelligence. I found this out the first time I asked ChatGPT to help me write a script (AWS related) for something I could easily do, but would take me a couple of hours of digging through API documentation first. It spit out working code faster than the time it took me to type out the question.

7 months ago | Likes 3 Dislikes 2

I don't see how that proves anything about underlying intelligence; it just means that the tokens of your question were associated with the code it produced. AWS is well used, and there is a lot of training data on it online.

As a counter-anecdote: I tried asking it something about the Godot engine. The suggestion straight up didn't work, because the methods it wanted to use did not exist. It also answered faster than it took me to write the question.

7 months ago | Likes 5 Dislikes 0

Except the "reasoning model" is based on nothing but the LLM going "this is what reasoning sounds like." The "knowledge" it has is just that certain words are statistically likely to appear together, especially based on text in the prompt. Due to the amount of factual articles they've ingested, factual statements are statistically likely to be generated, but in many situations lies are statistically likely too, especially since it can statistically conflate two factually distinct things.

7 months ago | Likes 5 Dislikes 1

For example, Mark Walters sued OpenAI because a reporter asked ChatGPT to summarize a lawsuit, SAF v. Ferguson, and it said the lawsuit involved embezzlement by an SAF treasurer and chief financial officer, later identifying this individual as Mark Walters. It conflated a lawsuit involving the Second Amendment Foundation with a prominent public second amendment proponent. To it, SAF and Mark Walters were tokens statistically likely to appear together, so it generated a "statistically likely" falsehood.

7 months ago | Likes 4 Dislikes 1

Here's the thing: you're privileging human reasoning without really looking at how it functions. You're also ignoring that by looking at how things co-occur, you can actually discover underlying patterns reflecting "deeper truths". Words represent facts, and relationships between facts. Co-occurrence of words (which includes grammatical structures and meanings encoded through pragmatics) also indicates facts and relationships between facts. The fact that poor inferences can be made makes it

7 months ago | Likes 2 Dislikes 2

I looked into how it functions, and the "large language models" only model... language. They're entirely trained and structured on generated text that's statistically likely to sound like natural language. That's it. All the other "emergent" behavior people think it has is just due to the large volume of text ingested. And words do NOT inherently represent facts, otherwise lies wouldn't exist. Words represent ideas, and when combined in specific ways they can articulate facts, but just because

7 months ago | Likes 1 Dislikes 0

MORE like intelligence, not less - you seem to think that intelligence needs to be a well-defined set of rules rather than a fuzzy pattern-matching system. The problem is that PEOPLE are fuzzy pattern-matching systems... and even people well-versed in critical thinking are susceptible to falsehoods that have the cadence of logic. My background was comp sci and linguistics, but I'm now in law - trust me, "this is what reasoning looks like" is something smart, well-educated humans do constantly.

7 months ago | Likes 2 Dislikes 1

I don't think we are too far away from companies advertising, "No AI used in this product."

7 months ago | Likes 138 Dislikes 2

“Only real CGI was used making this movie”

7 months ago | Likes 1 Dislikes 0

Wired just yesterday announced an end to using AI in their journalism.

7 months ago | Likes 4 Dislikes 0

People have been putting that disclaimer in front of documentaries posted on YouTube because there are so many people making awful AI-scripted ones that make no sense.

7 months ago | Likes 24 Dislikes 0

Of course the well is already poisoned because we've redefined the phrase to mean something it isn't. It's a problem for companies, especially game devs, who will be accused of using AI when they're just doing what they've always done.

For example, any game using procedural generation (Minecraft, Warframe, No Man's Sky, etc) can be accused (even if unfairly) of using AI despite it not being even remotely close.

7 months ago | Likes 2 Dislikes 0

I've already put that at the bottom of my resume, in fine print

7 months ago | Likes 4 Dislikes 0

Already a thing with a lot of YouTube music channels too

7 months ago | Likes 2 Dislikes 0

Of course, some of the most notable instances of that will be the places where it never made sense to have "AI" in the first place, parallel to fat free marshmallows and gluten free sheet metal.

7 months ago | Likes 4 Dislikes 0

+1 for "fat free marshmallows" because I bet they taste awful.
(And by "awful" I mean even worse than regular marshmallows.)

7 months ago | Likes 1 Dislikes 0

The "gluten free" thing, it turns out, is as prevalent as it is because gluten is used in a LOT of products you'd never expect, from flavorings that contain malt to medicines that use gluten-containing grains in their inactive ingredients.

7 months ago | Likes 6 Dislikes 0

Already happened. https://ellipsus.com/ has "no generative AI - ever".

7 months ago | Likes 43 Dislikes 0

Curious (as this is still in beta) if you are affiliated or what more you may know about them? I'm an author interested in moving away from Docs/Word and avoiding LLMs.

7 months ago | Likes 2 Dislikes 0

Alas, no. I ran into it a while ago; I was looking for something self-hosted, so it wasn't usable for me.
I *am* working on my own project intended for world-building and such (so basically: DMs, writers, people just casually world-building for fun, etc), but that's a long while from being done (but happy to discuss it if you're interested).

7 months ago | Likes 1 Dislikes 0

I hoped the same thing would've happened in the 'smart' kitchen appliances market by now. Instead my dishwasher can't remember the time because I refuse to connect it to the internet.

7 months ago | Likes 7 Dislikes 0

Yeah the whole 'smart appliances' thing is annoying af

7 months ago | Likes 8 Dislikes 0

Which raises the question: why would your dishwasher need to know the time?

7 months ago | Likes 3 Dislikes 0

It projects the time it expects to be finished on the floor, instead of counting down the minutes until it's done. The time makes no sense, so I don't know when it'll be done. Doesn't really matter, but it shouldn't have to be this way.

7 months ago | Likes 2 Dislikes 0

I'm pretty sure we're already at the point where people say "No AI used" when it actually has been used.

7 months ago | Likes 17 Dislikes 0

There are shit tons upon shit tons of music channels on YouTube that are clearly AI generated but claim not to be. And the algorithm promotes them aggressively, too, due to their consistently high output.

7 months ago | Likes 8 Dislikes 0

Didn't I read they were going to try to clamp down on those?

7 months ago | Likes 2 Dislikes 0

What, _again?_ They've made noises to that effect several times before, but actually doing it would mean changing the algorithm AND doing a bunch of hands-on work, and they seem to be deathly averse to both.

7 months ago | Likes 1 Dislikes 0

Being right sucks sometimes.

7 months ago | Likes 3 Dislikes 0

Welcome to the future, choomba! We don't have elective cybernetics, orbital factories or true artificial intelligence, but we've got the dystopia!

7 months ago | Likes 4 Dislikes 0

I told chatgpt "say something that sounds like an answer"

It replied: "42."

7 months ago | Likes 95 Dislikes 2

That's actually pretty clever. It's from "The Hitchhiker's Guide to The Galaxy"...

7 months ago | Likes 5 Dislikes 0

Haha, stupid ChatGPT. It forgot the 0 at the end

7 months ago | Likes 2 Dislikes 2

I sentence you to your local library, you can be released when you realise why you are more stupid than ChatGPT

7 months ago | Likes 3 Dislikes 0

There's computers at my local library, I can just ask ChatGPT and I'm a free stupid man again

7 months ago | Likes 1 Dislikes 1

I mean, if you don't know why it made you look how you looked, you won't know what parameters to set a ChatGPT prompt to say. Or you could ask a librarian, and when they finish laughing, they'll send you to F ADA or SF ADA (or 823.914 ADA if you are at a university library)

7 months ago | Likes 2 Dislikes 0

Information source: none (instruction-following only)
No live search

Observed fact: You asked me to "say something that sounds like an answer."
Fulfilling that literally, here is a sentence that has the cadence and tone of an answer, without needing a real question:

> "The outcome depends on how the underlying conditions interact, but with the right adjustments the system should remain stable."

7 months ago | Likes 2 Dislikes 0

We are doing it right

7 months ago | Likes 2 Dislikes 0

For some reason they've recently made it quippy. It'll make jokes, whether you want it or not, and tries to be silly.

It's not particularly funny, but sometimes it's so bad it's actually amusing again.
No idea what great technical innovation a digital clown is supposed to be tho ...

7 months ago | Likes 15 Dislikes 0

You know that stage when people think they're images of their gods...

7 months ago | Likes 6 Dislikes 0

The secret to life

7 months ago | Likes 1 Dislikes 0

Say something that sounds like a question!

7 months ago | Likes 1 Dislikes 0

Forty-TWO!?

7 months ago | Likes 15 Dislikes 0

In this economy?

7 months ago | Likes 7 Dislikes 0

But what question does that answer address?

7 months ago | Likes 6 Dislikes 0

It is, famously, the answer to all questions.

7 months ago | Likes 4 Dislikes 0

I'm sure there are some people who don't pay attention to sci-fi pop culture, so: in the book/movie The Hitchhiker's Guide to the Galaxy, a super-computer gets asked "What is the meaning of life, the universe and everything?" It takes something like 1000 years to compute an answer, and the answer it gives is 42. Hitchhiker's Guide is kind of a satire of sci-fi.

7 months ago | Likes 4 Dislikes 0

More like 7½ million years… but what was the question the answer to life, the universe, and everything was answering?

7 months ago | Likes 1 Dislikes 0

It is the answer to life. The universe! And everything!

It's from hitchhiker's guide to the galaxy. They ask a super computer what is the answer to life, the universe, and everything and after much toil it responds 42.

7 months ago | Likes 4 Dislikes 0

It is, to be concise: The Answer to the Ultimate Question of Life, the Universe, and Everything

7 months ago | Likes 10 Dislikes 0

But what's the actual question?

7 months ago | Likes 4 Dislikes 0

Nobody played along when I asked that :(

7 months ago | Likes 1 Dislikes 0

The Earth is a supercomputer purpose built to determine just that. No word if any progress has been made at this point.

7 months ago | Likes 7 Dislikes 0

The ultimate question literally means the last question. And the last question in the books is: "Where are you getting out, mate?" Arthur points at a flat and answers: "Forty-two."

7 months ago | Likes 1 Dislikes 0