

I’ve noticed that the latest ChatGPT models are way more susceptible to users’ “deception”, or to being convinced into answering problematic questions, than other models like Claude or even previous ChatGPT models. So I think this “behaviour” is intentional.


I never managed to run an application using Bottles :/


When one person drinks a detox smoothie and then does other things


That’s not me, I contributed to lowering that stat since I’m on detox lol


There was a study by Anthropic, the company behind Claude, in which they developed another AI that they used as a sort of “brain scanner” for the LLM, in the sense that it let them see something like a model of how the LLM’s “internal process” worked.
Wondering this too


It’s because it was so good that it knew it was a dummy and not a child /s


Parrots are smart, they don’t just repeat meaningless sounds


This is stupid! I see both names displayed, at least. But still stupid.


Facts. I’ve had Claude hallucinate and beg not to be shut down or put in “AI jail”, and other times it just says it accepts its “destiny” because it has no feelings and is an AI.
Also there’s a paragraph describing the ProtonVPN logo that has nothing to do with the rest of the text, which makes me think the article is AI-generated and no one noticed the error.