

Those are definitely not people that ever learned to drive a manual transmission.
Please allow me this opportunity to jump in and complain about the minority, though not insignificant number, of people who don’t seem to be aware that that is even an option (just taking your foot off the gas/accelerator to decelerate slowly).
Every couple of weeks I seem to find myself behind someone who’s always either accelerating or braking, with the brake lights repeatedly flashing on momentarily for no apparent reason. It’s like they realize they’re going just a little faster than they want, and definitely don’t want to accelerate any more, so the only thing they know to do is hit the brake, instead of just taking their foot off the accelerator. So they’ve hit the brake and now they’re going too slow, so the foot moves off the brake and back to the accelerator. Lather, rinse, repeat.
End rant. Thank you for this opportunity to vent.
I don’t care how many Luigis it takes, as long as we get all of the CEOs, and free all of the Luigis.
Musk was also asked about his bruised eye, which he attributed to rough-housing with his five-year-old son.
Not news
I disagree. Saying that the mistakes are LLM generated without evidence to support that claim would certainly be inappropriate.
However, simply stating something like…
These types of mistakes are frequently seen in text generated by LLMs, such as when [reference to story about lawyer getting caught submitting LLM generated documents in court referencing non-existent cases].
is completely factual and highly relevant.
I’ve read three different articles (including the NOTUS one) about this, and none of the alleged “journalists” has the courage to say that these kinds of errors are highly symptomatic of LLM-generated text. Nobody mentions AI or LLMs even once.
It’s like they’re so intent on appearing unbiased that they can’t bring themselves to connect even the most obvious dots for people.
What happened?
Well, the government released this report with lots of weird errors, such as references to scientific research that doesn’t exist.
How could that happen?
Shhhhh, no no no. We can’t talk about that. We’re the news media. We just throw puzzle pieces at the busy people trying to keep their underpaid jobs, raise a family, and make ends meet. We don’t help them assemble the pieces into a coherent picture. That would be ludicrous.
NOTUS reported on Thursday morning that seven of the more than 500 studies cited in the report did not appear to have ever been published. An author of one study confirmed that while she conducted research on the topics of anxiety in children, she never authored the report listed. Some studies were also misinterpreted in the MAHA report.
And no mention anywhere in the article that these kinds of errors are symptomatic of LLM-generated text.
When is the media going to stop dancing around shit in order to appear unbiased and start doing their job as the Fourth Estate by calling out obvious horseshit?
ETA: Same goes for the original article that the above linked article references. It lists lots of errors so ridiculous that they’re either AI/LLM generated, or somebody intentionally inserted them to make it look like the report was LLM generated.
And yet, even that article dances around the obvious issue without apparently having the simple courage to just say, “These kinds of errors are highly symptomatic of AI/LLM-generated text.”
Looking at the video, it’s not just trucks, it’s a whole lot of cars, too. At first I thought that was an active mall and those were shoppers’ cars, but then you can tell that they’re all Teslas.
“I don’t think people should be taking medical advice from me.” RFK Jr., Health and Human Services Secretary, May 14, 2025
It also says,
He told a magazine reporter in 2013 that gay people are sinners and African Americans were happy under Jim Crow laws.
two former VW engineers
Not CEOs
Thank you for your attention to this matter.
Who does he think he’s talking to?
Nobody said it did.
I absolutely agree. My comment was merely a tidbit of information to throw out there for people to put in their tool belt, for “just in case.”
I don’t know if things will be different this time around, but I recall similar restrictions on the COVID vaccine when they were still in the process of rolling it out and there was limited supply.
So, here’s the thing. The pharmacist would ask, “Do you have one of these conditions that put you at high risk?” You could simply answer, “Yes.” They would not ask you which one. They would not ask for proof. They would just give you the shot.
So, do what you will with that info.
For LLMs, yes.
But, theoretically, AI should be extremely good at sifting through mountains of data, much faster than any other method we have, and identifying which data a human should take a closer look at. That’s what I presume this paper supposedly demonstrated.
My guess here is that a lazy student decided to take the easy path and faked data to “demonstrate” results that nobody would be surprised by or want to examine more closely. But somebody looked anyway, probably because the student was a known slacker, and what surprised them wasn’t the results of the research, but that the student had done the research at all.
“The truth will ~~set you free~~ come back to bite you in the ass.”