Background in hard sciences, computing (FOSS), electronics, music, Zen.

  • 22 Posts
  • 67 Comments
Joined 2 years ago
Cake day: October 2nd, 2023



  • Manipulating the atoms in a crystal to store info is extremely high-precision, as is verifying the accuracy of the write. So is reading positions down to a few nanometers. But consumers wouldn’t need a $6000 reader to get, say, 10GB dumped to a hard drive … you’d carry your crystal and 16GB drive down to the corner store and use their reader to dump sector 37BJ to the drive. No need to trust them with your platter … but are you exposing all 360TB to potential damage from the machine?





  • To quote ChatGPT:

    “Large Language Models (LLMs) like ChatGPT cannot accurately cite sources because they do not have access to the internet and often generate fabricated references. This limitation is common across many LLMs, making them unreliable for tasks that require precise source citation.”

    It also mentions Claude. Without a cite, of course.

    Reliable information must be provided by a source with a reputation for accuracy … i.e. trustworthy. Otherwise it’s little more than a rumor. Of course, to reveal a source is to reveal having read that source … which might leave the provider open to a copyright lawsuit.


  • Finding inconsistencies is not so hard. Pointing them out might be a -little- useful. But resolving them based on trustworthy sources can be a -lot- harder. Most science papers require privileged access. Many news stories may have been grounded in old, mistaken histories … if not on outright guesses, distortions or even lies. (The older the history, the worse.)

    And, since LLMs are usually incapable of citing sources for their own (often batshit) claims anyway … where will ‘the right answers’ come from? I’ve seen LLMs, when questioned again, apologize that their previous answers were wrong.




  • I have to admit I’ve only ever used it to translate a paragraph or two at a time… where I was just looking for the gist of a text.

    Not too surprising, considering that for centuries many people well-versed in two languages have made a very good living as translators … and often having to get delicate nuances across (for poets as well as statesmen). It’s as much art as science.

    Overwriting articles written by humans with machine-generated translations … I really don’t understand that! But then, there are truckloads of worthwhile texts from throughout history that will never see translations otherwise … so that’s a worthy cause. Over time it may be improved, IF the algos are given feedback that allows them to learn from mistakes.





  • Sooo… you’re looking for volunteers to join your discord … no website to learn more … and get involved with … who, where, what, why completely unknown … to ‘collaborate remotely’ to ‘foster critical thinking’ … with little mention of what ‘volunteers’ will get in return … that’s all a very vague come-on inviting complete strangers to cooperate with you … a completely unknown organization, no mention of your qualifications, no mention of who’s paying for this (podcasting is not free) or why … that is SO NOT TEMPTING