A profound relational revolution is underway, not orchestrated by tech developers but driven by users themselves. Many of the 400 million weekly users of ChatGPT are seeking more than just assistance with emails or information on food safety; they are looking for emotional support.
“Therapy and companionship” have emerged as two of the most frequent applications for generative AI globally, according to the Harvard Business Review. This trend marks a significant, unplanned pivot in how people interact with technology.
Look, if you can afford therapy, really, fantastic for you. But the fact is, it’s an extremely expensive luxury, even at poor quality, and sharing or unloading your mental strain onto your friends or family, particularly when it’s ongoing, is extremely taxing on relationships. Sure, your friends want to be there for you when they can, but it can put a major strain on them depending on how much support you need. If someone can alleviate that pressure and stress even a little bit by talking to a machine, it’s in extremely poor taste and shortsighted to shame them for it.

Yes, they’re willfully giving up their privacy, and yes, it’s awful that they have to do that, but this isn’t like sharing memes… in the hierarchy of needs, getting those pent-up feelings out is important enough to possibly be worth the trade-off. Is it ideal? Absolutely not. Would it be better if these systems were anonymized? Absolutely.

But humans are natural anthropomorphizers. They develop attachments and build relationships with inanimate objects all the time. And a really good therapist is more of a mirror for you to work through things yourself anyway, mostly just guiding your thoughts toward better patterns of thinking. There’s no reason a machine can’t do that, and while it’s not as good as a human, it’s a HUGE improvement on average over nothing at all.
Therapy does not have to be expensive.
Around 70% of my caseload is Medicaid, and they don’t pay a dime. The remainder is mostly DOC (prison); they only pay if we charge a no-show fee, so they pay to *not* go to therapy. There are 1-2 people who are funded by the county; they pay a $7 copay per session.
Therapy isn’t expensive, Luigi is.
As far as efficacy goes, we don’t even have data suggesting AI therapy is effective. We have ample data, however, showing that the most important part of therapy is not what you do but the relationship itself, not individual techniques. So your theory about what therapy does for us is wrong. There’s no relationship with an LLM. We have no reason to believe it would be any better than a paper journal and a CBT worksheet.
But it is though.
Your medicaid patients?
Poor. By definition.
Sure, they might not pay a copay, but they pay for it in gas money to get to that visit, their barely running car breaking down from visiting you, time off work they can’t really afford, time filling out reams and reams of fucking paperwork to be able to qualify for anything, likely when they’re already in a mentally compromised condition to some extent… it’s all very stressful.
Which is very bad for mental health.
And the part you don’t want to admit (at least here) is that … that’s actually quite helpful in and of itself for a lot of people.
Have a few sessions with a live, in-person therapist to teach you CBT, give you the paperwork, and walk you through it.
Not all, but many people can take it by themselves from there, and not need to keep wasting time and energy on continually requalifying for medicaid, getting to and from psych appointments, dealing with scheduling delays and unavailability, etc.
Yep, a lot of people are also helped by basically just having someone to be able to talk to and feel heard.
But… that’s often doable by just making a friend, or even a casual acquaintance, of someone who is capable of, and has the capacity for, reflexive empathy.
Much less stress and paperwork involved there.
And also yes, some people with much more serious issues need much more serious help.
Unfortunately, the entire medical system in the US is utterly broken, and the only real solution is having a system that… isn’t broken, so that comprehensive screening and diagnosis are easily available without huge delays and costs… and more broadly, those people need to have the first two levels of Maslow’s hierarchy of needs taken care of.
But currently our society basically just takes those people and throws them into the streets, evicts them, forecloses on them, incarcerates them.
There simply is no systemic way to help those people without major systemic changes… and those ain’t happening, they’re moving in the opposite direction.
…
The problem with LLMs as therapy is that they are wildly overconfident, agreeable to the point of encouraging delusions and dangerous behavior, they hallucinate facts… and they are not actually capable of legitimate critical thinking or reasoning.
They also will not introduce you to concepts you have never heard of, ones you don’t know could be very useful, unless you directly ask them to, and even then… they are obviously not experts themselves and may suggest dubious ideas.
But also, at the same time… people often do form what they will describe as meaningful relationships with an LLM. So… it’s not that ‘it doesn’t happen’; it’s that it’s a pseudorelationship, a facsimile of a relationship: it lacks in-person interaction, a real human modulating their intonation, having micro-expressions, body language, etc.
Removed by mod
In my experience, it’s likely that some of those downvotes come from reflexive “AI bad! How dare you say AI good!” reactions, not anything specific to mental health. For a community called “technology”, there’s a pretty strong anti-AI bubble going on here.
Literally yesterday we had a post about someone getting involuntarily committed due to psychosis after an AI sycophantically agreed with them about everything. The quote I remember from the AI in that thread: “yes you should want blood. You’re not wrong.”
Using these as therapy is probably the worst thing we could do.
Are you surprised people have opinions about technology, in a community dedicated to discussing technology?
No, just surprised about how uninformed and knee-jerk those opinions are.
You know, I don’t even disagree with that sentiment in principle, but expecting people to suffer when they could benefit from a technology, just because you only see its threats and dangers, makes you no different from antivaxxers.
It is possible and logically consistent to urge caution and condemn the worst abuses of technology without throwing the baby out with the bath water.
But no… I guess because the awful aspects of the technology, like IP theft, are - rightfully - the biggest focus, sorry, poor people, you just have to keep sucking it up and powering through! You want empathy? Fork over the $100 an hour!
Yep, if someone disagrees with me, it’s usually because they’re unhinged. People rarely know things that I don’t, because I am very smart. There’s no way that anyone downvoted that post because it makes statements that are inconsistent with the current scientific knowledge around this subject, because no such knowledge exists. I know this because if it did exist I would know about it, as I’m very smart. My AI therapist told me so. And I see nothing wrong with that post, so anyone who does must be a fool.