

Generative AI has an average error rate of 9-13%. Nobody should trust what it spits out wholesale.
It has some excellent use cases. Vibe coding/sysadmin'ing/netadmin'ing is not one of them.


As someone who works in network engineering support and has seen Claude completely fuck up people’s networks with bad advice: LOL.
Literally had an idiot the other day just copying and pasting commands from Claude into their equipment, and they brought down a network of over 1,000 people.
It hallucinated entire executables that didn’t exist. It asked them to create init scripts for services that already had them. It told them to bypass the software UI, which already had the functionality they needed, and add routes directly to the kernel routing table instead.
Every LLM is the same bullshit guessing machine.


Which part of the industry?


I had no idea what the acronym was. Guess I’m just sheltered or something.


Your brain runs on ChatGPT now. Better start eating a diet of NVidia GPUs.


I find it hard to believe he was just casually carrying around bullets days after allegedly shooting someone…


This is highly dependent on where you live, as has been said before.


God dammit iFixit, WHY? I liked you.


It’s not that he just denied treatment. He ordered his company to deny treatment FOR COVERED ITEMS according to the insurance plan. This caused people to not get life-saving care, die, and no longer be a “burden” on the bottom line. That IS murder. Premeditated.
It’s like seeing someone hanging from a cliff ledge after a fall and, instead of helping, stomping on their fingers so they plunge to their death at the bottom.
The CEO was responsible for tens of thousands of times more deaths than his alleged killer.


I set mine up with HAProxy for TLS offloading and ACME for the server cert. Restrict access to just your country/region with GeoIP and you’re pretty good to go.
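In case it helps anyone, here’s roughly what the relevant haproxy.cfg bits look like. The cert path, allowlist file, and backend address are all placeholders, not copied from my actual config: the ACME client (acme.sh, certbot, whatever) runs separately and drops a combined cert+key PEM somewhere HAProxy can read it, and the GeoIP “allowlist” is just a text file of CIDRs for your country exported from a GeoIP database.

```
frontend https_in
    # TLS offloading: HAProxy terminates TLS using the ACME-issued cert
    # (combined cert+key PEM; path is a placeholder)
    bind :443 ssl crt /etc/haproxy/certs/mydomain.pem

    # GeoIP restriction: only allow source IPs in the country CIDR allowlist
    # (one CIDR per line; regenerate it when the GeoIP database updates)
    acl allowed_country src -f /etc/haproxy/geoip_allow.lst
    http-request deny unless allowed_country

    default_backend app

backend app
    # TLS is already terminated above, so the backend hop is plain HTTP
    server app1 127.0.0.1:8080
```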


If Gavin Newsom is shoved through as the Democrat choice this time, we’re fucking cooked. Which is why they’ll probably do it, because the DNC are a bunch of fucking paid off stooges.


Mint is based on Ubuntu LTS. The packages aren’t THAT out of date. Most people don’t give a shit if they’re running the bleeding edge of kernels or what version of mesa is installed. If it works with their hardware, they’re good.


Yeah, that’s the annoying thing. Generative AI is actually really useful… in SPECIFIC situations. Discovering new battery tech, new medicines, etc. are all good use cases, because it’s basically a parrot and a blender combined, and most of those discoveries are rehashes of existing technologies in new and novel ways.
It is not a fucking good solution as a search engine replacement for asking “Why do farts smell?”. It uses way too much energy for that, and it hallucinates bullshit.


India’s government is a corrupt shithole aiming to become a police state, similar to where the US is headed. The solution is not to privatize nuclear. It’s to hold the government to task and make them fix their fuck-ups.
Also, corporations do not “innovate”. They take government grant money to fund their pet research projects. If the government didn’t fund their research and development, we wouldn’t have advancement.


Congrats. You just burned down 4 trees in the rainforest for every article you had an LLM analyze.
LLMs can be incredibly useful, but everybody forgets how much of an environmental nightmare this shit is.


Opening nuclear energy to corporate interests that prioritize cost cutting and maximum profits…what could go wrong with that? /s
I’m curious… What problems are you referring to?
There was a study done several months ago. I’ll try to find the source again and link it here in a comment edit.
[EDIT]
Couldn’t find the study, but here is an infographic that has similar data: https://infogram.com/ai-hellucination-report-2025-1h9j6q7mz7q354g
Hallucination rates have improved, but the average across all models is still around 8-9%. The study I read before was likely from last year, which would explain the higher rate I cited.
Even the best model hallucinates around 1 time in 100, and OpenAI has publicly stated that hallucinations are mathematically impossible to eliminate entirely.