AI companies are making a choice when they design unsafe platforms.
The right choice.
Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.
That shit’s awfully condescending & paternalistic.
AI platforms are becoming a weapon for extremists and school shooters.
As for deficient plans: AI gets shit wrong so often that we should probably encourage idiots to concoct their “foolproof” plans on it.
Demand AI companies put people’s safety ahead of profit.
Nah: thought isn’t action. Liberty means respecting others’ freedom to have “unsafe” thoughts. Someone else could pose the same questions to audit security weaknesses & prepare safety plans.
Moreover, all of this was already possible with a search engine & notes. Information alarmists can get fucked.
Still unnecessary, & less effective than the less invasive alternatives that already exist & that the government could promote. To quote another comment: