“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/
It annoys me that ChatGPT flat-out lies to you when it doesn’t know the answer, and has no mechanism in place to admit it isn’t sure about something. It just makes something up and tells you like it’s fact.
LLMs don’t have any awareness of their internal state, so there’s no way for them to see something as a gap in their knowledge.
Took me ages to understand this. I’d thought, “If an AI doesn’t know something, why not just say so?”
The answer is: that wouldn’t make sense, because an LLM doesn’t know ANYTHING.
Wouldn’t it make sense for an AI to provide a confidence level, though?
Something like: I’ve got 3 million bits of info on this topic, but only 4 of them lead to this solution. Confidence level: 4 in 3,000,000, i.e. roughly 0.0001%.
It doesn’t have “3 million bits of info” on a specific topic, and even if it did, it wouldn’t be able to directly measure that. It’s worth reading a bit about how LLMs work under the hood. The material is somewhat dense if you’re new to the concepts, but you come out knowing a lot more about what to expect when using LLMs, what their limitations actually are, and how to use them better if you decide to go that route.
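For anyone curious what “under the hood” actually looks like: the model has no countable store of facts it could tally up, only a probability distribution over the next token. A minimal sketch, assuming the Hugging Face `transformers` and `torch` packages are installed (gpt2 is used purely because it’s small and public):

```python
# What an LLM actually "has": a probability distribution over next
# tokens, not a store of countable facts.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The capital of Australia is", return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)

# There is no "number of facts" to inspect; the closest thing is how
# the probability mass spreads across candidate continuations.
top = torch.topk(probs, 5)
for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok_id))!r}: {p.item():.3f}")
```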
It’s a feature of LLMs, not a bug.
It doesn’t know that it doesn’t know, because it doesn’t actually know anything. Most models are trained on posts from the internet like this one, where people rarely chime in just to admit they don’t have an answer. If you don’t know something, you either silently search the web for an answer or you ask.
So since users are the ones asking ChatGPT, the LLM mimics the role of a person who knows the answer. It only makes sense that AI is a “confidently wrong” powerhouse.
It doesn’t admit anything; it’s a language machine.
ChatGPT makes up everything it says. It’s just good at guessing and bullshitting.
It’s literally a guess machine …
It wouldn’t finish a lyric for me yesterday because it was copyrighted. I said it was public domain, and it said, “You are absolutely right, given its release date it is under copyright protection.”
Wtf
Yeah, there are guardrails, but for copyright, not for bullshit. I guess they think copyrighted content is worse than bullshit.
In the end it’s a word generator that has been trained so much that it gets facts right often enough to be convincing. That’s its basic architecture.
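The “word generator” description is close to literal. The whole generation loop is: score candidate next tokens, sample one, append it, repeat. A minimal sketch under the same assumptions as above (`transformers` + `torch`, gpt2 as a stand-in):

```python
# Generation stripped down to its loop: nothing here consults a fact
# database or checks the output for truth.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The moon is made of", return_tensors="pt").input_ids
for _ in range(10):
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    # Sample the next token from the distribution; plausibility is the
    # only criterion, which is exactly why confident nonsense comes out.
    next_id = torch.multinomial(torch.softmax(logits, dim=-1), num_samples=1)
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```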
You can ask it to give a confidence level, to get some indication of how sure it is of the answer.
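One caveat worth attaching here: a confidence number the model writes out is itself just more generated text, so treat it as a vibe rather than a measurement. Some APIs do expose per-token log probabilities, which at least come from the model’s actual output distribution. A hedged sketch using the OpenAI Python client (the model name and question are arbitrary examples):

```python
# Per-token log probabilities are the closest thing to a native
# "confidence" signal; a self-reported percentage is just more prose.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # arbitrary example model
    messages=[{"role": "user", "content": "What is the capital of Australia?"}],
    logprobs=True,
    top_logprobs=3,
)

for tok in resp.choices[0].logprobs.content:
    # A logprob near 0 means the token was near-certain; a very negative
    # one means the model could just as easily have said something else.
    print(f"{tok.token!r}: {tok.logprob:.3f}")
```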
Someone I know (not close enough to even call an “internet friend”) formed a sadistic bond with ChatGPT and will force it to apologize and admit it’s stupid, or something like that, when he doesn’t get the answer he’s looking for.
I guess that’s better than doing it to a person.