But in the post warning users that the company will call the authorities if they seem like they’re going to hurt someone, OpenAI also acknowledged that it is “currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.”
Considering how the US handles those cases, that may actually be a broken-clock good thing. If they sent the cops to a suicidal person’s house, said cops would probably kill them themselves.
All of it will be justified with that guy who killed himself after talking to ChatGPT.
Nah, not for suicide:
Oh, so only for discussing topics the authorities consider verboten
Oh thank god I was afraid some more kids might not get talked into suicide by a fucking server
we must protect the kids we don’t care about at all costs. except when they’re israeli pedophiles
Or the friends of the US President, or the US president…