

Where did you get that info? Or do you just carry water for healthcare CEO prosecutions for fun?
This is a really interesting paragraph to me, because I definitely think these results shouldn’t be published, or we’ll only get more of these “whoopsie” experiments.
At the same time though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later, before they become even more persuasive and natural-sounding. The article mentions that in studies, humans already have trouble telling AI-written sentences from human-written ones.
It’s semantics. The difference between an LLM and “asking” Wikipedia a knowledge question is that the LLM will “answer” you with predictive text. Both things contain more knowledge than you do, in the sense that they have answers to more trivia and test questions than you ever will.
“the vaccine is useful for some protection and for other people it wanes”
Yeah, and measles makes some people immune, and other people it kills.
Edit: also, what kind of argument is that in the first place? “Yeah, well, sometimes it provably provides a positive benefit to people’s lives in a statistically significant way, and sometimes it’s merely neutral, with no downsides???”
Some More News had the right take on this: all these companies just dumped hundreds of billions of dollars, in investment or development costs, into AI.
The problem is, we’re still 10-15 years away from AI being actually useful in gadgets and everyday products. But these companies want to get paid now, so they’re shoving the cheapest, shittiest “functional” AI onto the market just to try to recoup some losses. And it’s painfully apparent that it isn’t working.
Calexit is becoming less and less of a meme every day.