Is the argument here that anti-AI folks are hypocrites because people can be bad too sometimes? That’s a remarkably childish and simple take.


Only if they aren’t using customer-provided encryption keys (if using blob/bucket storage) or an equivalent approach to encryption at rest, and making sure they’re doing standard TLS for encryption in flight.
It’s absolutely possible, and standard for any decent organization, to build its cloud architecture to fully account for the cloud provider potentially accessing your data without authorization. I’ve personally had such design conversations multiple times.
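As a concrete illustration of the customer-provided-key approach: with S3-style blob storage this is SSE-C, where the customer supplies the key on every request (over TLS) and the provider discards it after the operation, so keys are never stored provider-side. A minimal sketch of the request headers involved, assuming S3's REST API (other providers have equivalents; the helper function here is illustrative, not a real SDK call):

```python
import base64
import hashlib
import os

def sse_c_headers(key: bytes) -> dict:
    """Build the S3 SSE-C request headers: the customer-held key travels
    with each request over TLS, and the provider discards it after use."""
    assert len(key) == 32  # SSE-C requires a 256-bit AES key
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key":
            base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5":
            base64.b64encode(hashlib.md5(key).digest()).decode(),
    }

# Key generated and held by the customer; the provider never persists it.
key = os.urandom(32)
headers = sse_c_headers(key)
```

If even transient key exposure is unacceptable, the stricter variant is client-side encryption, where data is encrypted before upload and the provider only ever stores ciphertext.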


I tried, and failed, to get into audio books for years. Then I listened to Dungeon Crawler Carl narrated by Jeff Hayes and what an absolute delight it was. There’s no way I would’ve gotten even 10 minutes in if it was one of those soulless AI voices instead.
So then your counter to someone bringing attention to the fact that LLMs are actively encouraging people (vulnerable people, for the reasons you’ve pointed out) to commit suicide is that it isn’t the singular contributing factor?
I get what you’re saying here, and I think everyone else does too? I don’t want to just be entirely dismissive and say “no shit” but I’m curious as to what it is you want or expect out of this? Do you take offense at people pushing back at harmful LLMs? Do you want people to care more about creating a kinder society? Do you think these things are somehow incompatible?
Of course LLMs aren’t driving people to suicide in a vacuum; no one is claiming that. Clearly though, when taken within the larger context of the current mental health crisis, having LLMs that encourage people to commit suicide is a bad thing that we should absolutely be making noise about.