I've used KeePass for ages, and about two years ago I switched to a self-hosted Vaultwarden instance; I still think it was a great choice. So if you have some Docker experience and a little VM lying around, you could give Vaultwarden/Bitwarden a try.
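For reference, a minimal sketch of how such a setup could look (the image name is the official one on Docker Hub; the volume path and host port are placeholder choices, and you'd still want a reverse proxy with TLS in front, since the Bitwarden clients expect HTTPS):

```shell
# Minimal Vaultwarden sketch: persist data in a host directory
# and expose the web vault on a placeholder host port.
docker run -d --name vaultwarden \
  -v /vw-data:/data \
  -p 8080:80 \
  vaultwarden/server:latest
```

This is just the bare container; backups of `/vw-data` and an HTTPS-terminating proxy are up to you.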
It’s not called Meta data by accident 🤣
“Amazingly” fast for biochemistry, but insanely slow compared to electrical signals in chips and computers. But to be fair, the energy usage really is almost magic.
But by that definition, passing the Turing test might be the same as superhuman intelligence. There are things humans can do that computers can't, but there is nothing a computer can do at all yet still does more slowly than a human. That's because our biological brains are insanely slow compared to computers. So once a computer is as good as or as accurate as a human at a task, it is almost instantly superhuman at that task because of its speed. So if we have something that's as smart as humans (which is practically implied, because it's indistinguishable), we would have superhuman intelligence: something as smart as humans that (numbers made up) can do 10 days of cognitive human work in just 10 minutes.
AI isn't even trained to mimic human social behavior. Current models are all trained by example, so they produce output that would have scored high in their training process. We don't even know what their goals are (and they're likely not even expressible in language), but, anthropomorphized, they are probably something like: "answer in a way that the humans who designed and oversaw the training process would approve of."
To be fair, the Turing test is a moving goalpost, because once you know such systems exist, you probe them differently. I'm pretty sure even the first public GPT release would have fooled Alan Turing personally, so I think it's fair to say these systems have passed the test at least since that point.
We don't know how to train them to be "truthful" or make that part of their goal(s). Almost every AI we train is trained by example, so we often don't even know what the goal is, because it's implied in the training. In a way, AI "goals" are pretty fuzzy because of that complexity, a tiny bit like in real nervous systems, where you can't just state in language what the "goals" of a person or animal are.
JPEG does not support lossless compression in practice. There was an extension to the standard in 1993, but most en/decoders don't implement it, and it never took off. With JPEG XL you get more bang for your buck: the same visual quality gets you a smaller file, and there would be no more need for thumbnails thanks to improved progressive decoding.
It's not just JPEG with extra channels. It's technically far superior: it supports lossless compression, and the way decoding works would make thumbnails obsolete. It can even recompress already existing JPEGs into smaller files without additional generation loss. It's hard to describe what a major step this format would be without getting very technical. A lot of operating systems and software already support it, but the Google Chrome team is practically preventing widespread adoption because of company politics.
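To give a rough idea of the JPEG recompression part: with the reference libjxl tools, transcoding an existing JPEG is a one-liner (filenames here are placeholders; behavior depends on your installed `cjxl`/`djxl` version):

```shell
# Recompress an existing JPEG into JPEG XL without generation loss.
# For JPEG input, cjxl defaults to lossless transcoding of the
# original DCT coefficients rather than re-encoding the pixels.
cjxl photo.jpg photo.jxl

# The original JPEG file can be reconstructed from the .jxl again.
djxl photo.jxl photo_restored.jpg
```

The point is that the `.jxl` file is typically noticeably smaller, yet the round trip back to JPEG loses nothing.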
He is comparing himself to Caesar? Maybe someone should stab him to help him out.
Thank you for taking the time to read it, and for your feedback.
Your replies here come off as pretty condescending.
That was definitely never my intention, but a lot of people here have said something similar. I should probably work on my English (I'm not a native speaker) to phrase things more carefully.
You shouldn’t just say “did you read the article” and then “it’s in this section of the article”
It never crossed my mind that this could be interpreted negatively. I was trying to gauge whether someone had read it and still disagreed, or hadn't read it and disagreed, because those are two different situations, at least for me. The hint about the sections was also meant as a pointer, because I know most people won't read the entire thing but might have 5 minutes on their hands to read the relevant section.
It's pretty obvious that you didn't read the article. If you find the time, I'd like to encourage you to read it. I hope it clears up some misconceptions and makes it clearer why, even over those 60+ years, it was always intellectually dishonest to call 1024 bytes a kilobyte.
You should at least read “(Un)lucky coincidence”
Any backups of the repository itself (and not the GitHub rendering)?