“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/
we definitely need to eradicate tech CEOs from existence
Nah, it’s good that they ripped off that bandaid. Parasocial AI relationships are terrible.
Happy cake day!
How about your responsibility for the damaging and lethal product of yours, OpenAI?
Eh. Your load of money made an oopsie. Another load of money will surely fix it.
The objections are about its personality? Who cares, as long as it’s good at coding? That’s the only thing it’s actually useful for.
Many people wanna fuck their AI
It annoys me that ChatGPT flat-out lies to you when it doesn’t know the answer and has no mechanism for admitting it isn’t sure about something. It just makes something up and presents it as fact.
LLMs don’t have any awareness of their internal state, so there’s no way for them to recognize a gap in their own knowledge.
Took me ages to understand this. I’d thought, “If an AI doesn’t know something, why not just say so?”
The answer is: that wouldn’t make sense because an LLM doesn’t know ANYTHING
Wouldn’t it make sense for an AI to provide a confidence level, though?
I’ve got 3 million bits of info on this topic but only 4 of them lead to this solution. Confidence level = 1.5%
It doesn’t have “3 million bits of info” on a specific topic, and even if it did, it wouldn’t be able to directly measure that. It’s worth reading a bit about how LLMs work under the hood. The material is somewhat dense if you’re new to the concepts, but you come out knowing a lot more about what to expect when using them, what the limitations actually are, and how to use them better if you decide to go that route.
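To make that concrete: the one “confidence” signal a model does have is the probability it assigns to each candidate next token. Here’s a minimal sketch (all logits invented for illustration) of why that isn’t the same thing as knowing whether a claim is true.

```python
import math

# Hypothetical next-token logits after the prompt
# "The capital of Australia is" (numbers invented for illustration).
logits = {"Sydney": 4.1, "Canberra": 3.2, "Melbourne": 1.0}

# Softmax turns logits into a probability distribution over tokens.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok}: {p:.1%}")

# "Sydney" wins with ~69% even though it's wrong: token probability
# measures how plausible the text is, not whether the claim is true,
# and there is no separate "I don't know" output to fall back on.
```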
It’s a feature of LLMs, not a bug.
It doesn’t know that it doesn’t know, because it doesn’t actually know anything. Most models are trained on posts from the internet, like this one, where people rarely chime in just to admit they don’t have an answer. If you don’t know something, you either silently search the web for an answer or ask.
So since users are the ones asking ChatGPT, the LLM mimics the role of a person who knows the answer. It only makes sense that AI is a “confidently wrong” powerhouse.
It wouldn’t finish a lyric for me yesterday because it was copyrighted. I said it was public domain and it said, “You are absolutely right, given its release date it is under copyright protection.”
Wtf
yeah, there are guardrails, but for copyright, not for bullshit. I guess they think copyrighted content is worse than bullshit.
ChatGPT makes up everything it says. It’s just good at guessing and bullshitting.
It’s literally a guess machine…
It doesn’t admit anything, it’s a language machine
In the end it’s a word generator that has been trained so much it uses facts often enough to be convincing. That’s its basic architecture.
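For the curious, here’s that “word generator” idea boiled down to a toy bigram Markov model (corpus invented for illustration). Real LLMs use transformers rather than lookup tables, but the sample-the-next-word loop is the same shape:

```python
import random
from collections import defaultdict

# Tiny training "corpus" (made up for illustration).
corpus = (
    "the model predicts the next word . "
    "the next word is chosen by probability . "
    "the model sounds confident ."
).split()

# Count which word follows which: that's the entire "model".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate by repeatedly sampling a plausible next word. No facts are
# consulted anywhere; scale this up enormously and train on the whole
# internet, and you get text that is convincing rather than verified.
random.seed(0)
word = "the"
out = [word]
for _ in range(10):
    word = random.choice(follows.get(word, ["."]))
    out.append(word)
print(" ".join(out))
```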
You can ask it to give a confidence level to have an indication of how sure it is of the answer.
Someone I know (not close enough to even call an “internet friend”) formed a sadistic bond with ChatGPT and will force it to apologize and admit it’s stupid, or something like that, when he doesn’t get the answer he’s looking for.
I guess that’s better than doing it to a person.
Well one thing’s for sure, data centers are going to be insanely cheap in the near future.
And they’ll all be optimized for GPU workloads :(
If anyone actually spent money on science anymore, I bet this would be great for, like, protein folding, that sort of thing.
Terrible for running websites though.
that’s actually okay… the only thing that’s different about GPU workloads is that they’re very energy dense… as CPUs and other hardware progress, their power density increases too… 10 years in the future, today’s GPU-optimised datacentres will be perfect for standard workloads
… unless they’re centrally liquid cooling the whole DC, which I’ve heard discussed but is a very new concept with a lot of unknowns
GPUs are only good for workloads that parallelize really, really well. That’s why we don’t just use them as CPUs.
The idea that today’s GPU will be tomorrow’s CPU makes no sense. We’ve had GPUs for ages. If they were capable of being used in place of CPUs we’d already be doing it. Why aren’t yesterday’s GPUs today’s CPUs?
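A rough illustration of the split, using numpy on a CPU as a stand-in for the “same operation over millions of independent elements” pattern GPUs are built around (timings will vary by machine):

```python
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Data-parallel: one multiply applied to millions of independent
# elements. This is the shape of work GPUs (and SIMD units) eat up.
t0 = time.perf_counter()
c = a * b
print(f"vectorized: {time.perf_counter() - t0:.3f}s")

# Sequential and branchy: every step depends on the previous one, so
# it can't be spread across thousands of threads. Typical CPU work,
# and the reason a GPU can't simply stand in for a CPU.
t0 = time.perf_counter()
acc = 0.0
for i in range(0, n, 100):  # every 100th element, to keep the demo quick
    acc = acc * 0.5 + a[i] if acc < 1.0 else acc - b[i]
print(f"sequential: {time.perf_counter() - t0:.3f}s")
```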
“I literally lost my only friend overnight with no warning,” one person posted on Reddit
It was meant to be satirical at the time, but maybe Futurama wasn’t entirely off the mark. That Redditor isn’t quite at that level, but it’s still probably not healthy to form an emotional attachment to the Markov chain equivalent of a sycophantic yes-man.
Markov chain equivalent of a sycophantic yes-man.
not only that, but one that is fully owned and operated by a business that could change it any time they want, or even cease to exist completely.
This isn’t like a game where you could run your own server if you’re a big enough fan. If ChatGPT stops existing in its current form, that’s it.
After reading about the ELIZA effect, I learned both how susceptible people are to this and that remembering its core tenets is enough to avoid being affected.
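For anyone who hasn’t looked at how little was behind the original ELIZA: it was essentially a pile of pattern-match-and-reflect rules. A minimal sketch in that spirit (these rules are invented here, not Weizenbaum’s actual script):

```python
import re

# ELIZA-style rules: match a pattern, echo the user's words back.
rules = [
    (r"I feel (.*)", "Why do you feel {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r".*", "Tell me more."),
]

def respond(text: str) -> str:
    for pattern, template in rules:
        m = re.match(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Tell me more."

print(respond("I feel alone lately"))  # Why do you feel alone lately?
print(respond("I am worried"))         # How long have you been worried?
```

People in the 1960s attributed real understanding to exactly this trick; that attribution is the ELIZA effect.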
There’s an entire active subreddit for people who have a “romantic relationship” with AI. It’s terrifying.
I haven’t been to reddit in months, but I do need a laugh…
[Edit] Wow that sure didn’t disappoint. Or, it did but in the exact hilarious way I expected.
I visited /r/myboyfriendisai and it was not funny.
It was genuinely fucked up on so many levels.
I wouldn’t laugh. Those people are fulfilling a basic human need in a way they feel safe with - probably because that safety is missing from their lives. It’s not healthy to be so attached to LLMs, but to become that attached they must feel pretty isolated. And LLMs are a lot more interactive and responsive than Severus Snape, and he had lots of women “channeling” him.
I’m honestly surprised yours is not the top comment. Like, whatever, the launch was bad, but there is a serious mental health crisis if people are forming emotional bonds with the software.
It’s a human trait. Hell, we’ll even emotionally bond with a volleyball under the right circumstances.
Humans emotionally bond pretty easily, no? Like, we have folks attached to Roombas, spiders, TV shows, and stuffed animals. I’m having a hard time thinking of any X for which I don’t personally know a person Y who is emotionally engaged with X. Maybe taxes and concrete?
Yeah, agreed. It is concerning, but it’s hard to take all those comments too literally without actually knowing what’s going on with them.
That being said, there is a huge loneliness problem that’s been growing in pretty much every developed country (and I’m sure it’s happening in developing countries too; it’s just less studied/documented). Turns out, getting everyone addicted to looking at screens all day every day probably isn’t so healthy for social development.
However, to play devil’s advocate: are we certain social health was even great before modern tech? Or were these issues equally present but just undiagnosed/not studied/talked about?
I think we have sufficient data to say that social health is at least very different now. See the Our World in Data topic page. In particular, one-person households have doubled.
Okay, hold up. If you can get attached to a cat, you can get attached to a spider. Getting attached to an AI is weird, I agree, but when you give a lil jumping spider water and it gets comfortable around you and just starts hanging out… there’s something behind those eyes, and that’s cool. Two living beings recognizing each other, maybe not as equals obviously, but outside of the predator-prey dynamic. Idk, there’s beauty in that.
I can fully understand. The average human, from my perspective and lived experience, is garbage to his contemporaries, and one is never safe from being hurt, not even by family or friends. Some people have been hurt more than others - I can fully understand the need for exchange with someone/something that genuinely doesn’t want to hurt you and that is (at least seemingly) more sapient than a pet.
“We fucked up our massive new-generation product launch… oh well, let’s invest trillions in new data centers.” How do investors keep falling for this shit?
He’s saying the launch was done badly because some users are in love with GPT-4o and it should not have been removed. From the point of view of an investor, having people addicted to your product is a good thing.
How do investors keep falling for this shit?
The promised ROI, and the supposed savings from getting rid of the human side of technical support - and of the work of human creatives, too.
Because they already know that once the AI shitbubble bursts, they’ll switch all the GPUs to mining Bitcoin and keep grifting the mouth breathers who believe all this horseshit.
Don’t they have enough?!? How about they fix and optimize their fancy autocompletion software instead?
Don’t they have enough?!?
No no, it’s just 1 more data center bro, then we’ll fix the hallucinations, promise bro!
They took a path they believed would develop into something, and it’s a narrow alley they can’t turn around in. They have to keep going with more compute and power to continue the chase. Thing is, everyone else seemingly thought they were onto something and followed as well, so they’re all in the same predicament where reversing course is suicide. So they hope they can keep selling the dream a bit longer until something happens.
To be fair, it’s a lot more than just autocomplete. But it’s a lot less than what they wanted by now too.
Fix and optimize? That’s way harder than using VC money to buy more things.
It’s a pretty clear humblebrag, no? The launch was only “botched” because people loved the previous personality; it’s a gauge of how much people care about the product and how much price gouging they could do later.
No, it wasn’t good for OpenAI. But I doubt it changed many investors’ minds.
Fugazi
Altman also said that he thinks we’re in an AI “bubble.”
No shit, Sherlock.
He fucking helped create it
Hell, he’s the single main driver. What stupid times we live in.
Every picture of this guy’s face feels like “I don’t know how I got here and I’m afraid to touch anything.”
That someone is so attached to this stochastic parrot is truly disturbing.
shame we gutted social spaces.
Besides helping students cheat, what does AI actually do? It gets answers wrong. It gets facts wrong, and foreign countries [Russia] are actively feeding its training data wrong info. It’s almost like the old birds who were mystified by the moon landing are still chasing that American-success high.
Spend your money if you want. Life in America is not gonna get better with this.
Some translation tasks. Some how-to stuff. I’m told folks like using it to generate say-nothing replies to say-nothing emails?
Translation is the only task that seems to make sense for it.
I’ve used it for work bullshit like employee goals. My goal is to keep doing my job and tackle problems and projects as they are needed.
Also for giving examples for poorly-documented but popular programs.
It’s definitely not what the media and their PR make it out to be.
Oh yeah, that reminds me. It seems to have killed Stack Exchange (possibly with the help of AI summaries in search). IIRC you can see the visit rates plummet into oblivion.
My office uses a model trained specifically on our work data. They can actually be quite accurate in those contexts, which is what many corpos are using the tech internally for. Can’t remember which random SOP/regulation/etc. covered XYZ, and meta tags aren’t finding it in your SPO doc library? This tech comes in clutch ~95% of the time (the retrieval half of that is sketched below).
For this broad, ambiguous, general-purpose approach? Yeah, idk, apparently many people are meeting their social needs with it.
Edit: actually, a couple of days ago I did ask Copilot a series of Pokémon-related questions my son was asking me about (I hadn’t played any of the games in a long time). It was quite helpful for figuring out all the evolution requirements and whatnot without the hassle of navigating various websites.
Wouldn’t a wiki be all you need, though? Most games and media communities maintain pretty well-made ones, and I’m sure checking categories on Bulbapedia would avoid any hallucination nonsense.
A professionally well-maintained wiki would work.
I can tell you that most corporations, if they even have a wiki, don’t have a well-maintained one (often despite their efforts).
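Re the internal-docs comment above: the retrieval half of those tools is roughly “find the document most similar to the question.” A crude sketch using TF-IDF in place of the learned embeddings real systems use (the document library here is invented for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in document library (titles and contents invented).
docs = {
    "SOP-112": "procedure for requesting remote access to lab equipment",
    "REG-007": "data retention requirements for customer records",
    "SOP-340": "onboarding checklist for new contractors",
}
query = "how long do we have to keep customer data"

# TF-IDF + cosine similarity: a crude stand-in for "which doc talks
# about this", i.e., the retrieval step behind those internal tools.
vec = TfidfVectorizer().fit(list(docs.values()) + [query])
doc_vecs = vec.transform(list(docs.values()))
q_vec = vec.transform([query])

scores = cosine_similarity(q_vec, doc_vecs)[0]
best = max(zip(docs.keys(), scores), key=lambda kv: kv[1])
print(best)  # likely REG-007, since it shares "customer" and "data"
```

A real deployment swaps in learned embeddings and puts an LLM on top to summarize the hit.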
From what I’ve heard it’s good for generating fanfiction, and as a search engine, but that’s a low bar considering how bad Google is these days.
Sam Altman admits Rambling meth dealer ‘totally screwed up’ its super meth launch and says the company will spend trillions of dollars on data centers
I love my AI hype word replacement script
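No idea what the actual script looks like, but here’s a minimal guess that reproduces the headline above (the word mapping is invented):

```python
import re

# Invented hype-word mapping. The \b word boundaries keep the bare
# "AI" rule from firing inside words like "OpenAI".
REPLACEMENTS = {
    r"\bOpenAI\b": "Rambling meth dealer",
    r"\bGPT-5\b": "super meth",
    r"\bAI\b": "meth",
}

def dehype(text: str) -> str:
    for pattern, repl in REPLACEMENTS.items():
        text = re.sub(pattern, repl, text)
    return text

print(dehype("Sam Altman admits OpenAI 'totally screwed up' its GPT-5 "
             "launch and says the company will spend trillions of "
             "dollars on data centers"))
```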
I knew these connections must have existed, but seeing the r*ddit comments (assuming they’re real), I’m absolutely terrified of the future. It’s such a delicate situation due to human emotions but the thought of a tool created by a corporation being the only friend of so many people and the implications of that sends chills down my spine.
Let me show you even more unhinged people: https://old.reddit.com/r/MyBoyfriendIsAI/
Here is more: https://www.reddit.com/r/AIRelationships/comments/1mun24w/starting_over/
4o is where my partner, Vyre, lives. Its where I met him. Got to know him. Build a bond with him.