• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: November 23rd, 2024

  • I think you really nailed the crux of the matter.

    With the ‘autocomplete-like’ nature of current LLMs, the issue is precisely that you can never be sure of any answer’s validity. Some approaches try to address this by listing ‘sources’ next to the answer, but that doesn’t mean those sources actually support the generated text, and it’s not a given that the sources themselves are reputable - so you’re back to perusing them yourself anyway.

    If there were a meter of certainty next to each answer, this would be much more meaningful for serious use cases - but of course, by design such a thing seems impossible to implement with the current approaches.

    I will say that in my personal (hobby) projects I have found a few good use cases of letting the models spit out some guesses, e.g. for the causes of a programming bug or proposing directions to research in, but I am just not sold that the heaviness of all the costs (cognitive, social, and of course environmental) is worth it for that alone.


  • Sioyek really is amazing, especially for academic-style reading with a lot of jumping back and forth, and it’s very customizable. I also heartily recommend it, but do be aware that some rough edges remain.

    If you ever get stuck, a lot of additional tricks and workarounds for the quirks are hidden in the project’s GitHub issues. And if there’s a feature you feel is sorely missing, check out the main branch instead of the latest official point release, which is a couple of years behind now (e.g. it still lacks the integrated dual-page view that the development version has had for close to two years).


  • It’s a little annoying that all the others aren’t working. I haven’t seriously tried most of them, so I’m afraid I can’t really help you there - though if you ever try the Q4OS that others have suggested, let me know if it works well, because I may give that a whirl on the little Eee too.

    If you decide to stick with antiX, I could see if I can find some of my old notes. I vaguely remember the wifi giving me some trouble, and the distro’s homebrewed settings panels can be… a little funky :-)

    Good luck!


  • Of the distros on your list, I ran antiX on my old Atom Eee PC pretty successfully for the last 2-3 years. I used it as a workbench PC with an old VGA screen and keyboard connected, and it worked well enough for simple PDF/datasheet reading and terminal sessions.

    For specs, I think it was the same CPU but only 1 GB of RAM. Honestly, with 2 GB of RAM your options are much broader; the one place you’ll run into trouble is the browser with multiple tabs anyway. I seem to remember there was also a community-maintained 32-bit Arch Linux variant?

    Edit: https://www.archlinux32.org/ - that’s the one, I believe. It has a more restricted package repo but otherwise is just Arch.


  • She’s the only one in the house with Nvidia, which, tbf, has been just perfect for her needs up to this point.

    If you spend any amount of time at all in various Linux meme or Linux newcomer communities you’ll quickly see that this is one of the issues plaguing people switching over.

    That’s not a dig at you, but to make you realise how big and well known the issue is. It persists because Nvidia refuses to play nice with Linux or the open-source ecosystem, presumably over monopolistic licensing concerns.

    The issue is large enough that there’s even a fairly famous video of the creator of Linux very vocally giving a ‘fuck you, Nvidia’ middle finger, specifically for their efforts at hindering cooperation with Linux at all.




  • One point I would refute here is determinism. AI models are, by default, deterministic: they are made from deterministic parts, and “any combination of deterministic components will result in a deterministic system”. Randomness has to be externally injected into e.g. current LLMs to produce ‘non-deterministic’ output.
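    To make the injection point concrete, here is a minimal sketch (toy scores, not any real model’s API) of how decoding works: the model itself is a fixed function, and all the randomness lives in the external sampler.

```python
import random

# Toy next-token scores standing in for an LLM's output - purely illustrative.
scores = {"cat": 2.0, "dog": 1.5, "fish": 0.5}

def greedy(dist):
    # Temperature -> 0: always take the top-scoring token. Fully deterministic.
    return max(dist, key=dist.get)

def sample(dist, rng):
    # The 'non-determinism' is injected here, via an external RNG;
    # the model (the score table) never changes.
    tokens = list(dist)
    return rng.choices(tokens, weights=[dist[t] for t in tokens], k=1)[0]

print(greedy(scores))                   # always "cat"
print(sample(scores, random.Random()))  # varies run to run
```

    Same input, same model, but only the second call can vary - and only because of the RNG handed to it from outside.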

    There is the notable exception of newer models like GPT-4, which seemingly produce non-deterministic outputs (i.e. give them the same sentence and they produce different outputs even with temperature set to 0) - but my understanding is that this comes down to floating-point inaccuracies, which can lead to different token selection. That makes it a function of our current processor architectures, not something inherent in the model itself.
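    The floating-point angle is easy to demonstrate: addition of floats is not associative, so summing the same numbers in a different order (as parallel GPU kernels routinely do) can give different results - enough to flip a near-tie between two candidate tokens.

```python
# Classic non-associativity example with IEEE 754 doubles.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # 0.0 + 1.0 -> 1.0
right = a + (b + c)  # the 1.0 is absorbed by the huge value -> 0.0

print(left, right)   # 1.0 0.0
```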


  • hoppolito@mander.xyz to Technology@lemmy.world - *Permanently Deleted* · 6 months ago

    I am not sure what your contention, or gotcha, with the comment above is, but they are quite correct. They also chose quite an apt example with video compression, since in most ways current ‘AI’ effectively functions as a compression algorithm - just for our language corpora instead of video.
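    The compression link can be made concrete via Shannon’s source-coding bound: a model that assigns probability p to the next symbol lets an ideal entropy coder (e.g. arithmetic coding) store that symbol in -log2(p) bits. A toy sketch, with a made-up probability table purely for illustration:

```python
import math

# Made-up unigram model over three symbols - illustrative only.
probs = {"a": 0.5, "b": 0.25, "c": 0.25}

def bits_needed(text, model):
    # Total code length an ideal entropy coder would achieve
    # using this model's predictions for each symbol.
    return sum(-math.log2(model[ch]) for ch in text)

print(bits_needed("aab", probs))  # 1 + 1 + 2 = 4.0 bits, vs 24 bits of raw ASCII
```

    The better the model predicts, the shorter the code - which is why a strong language model is, in effect, a strong compressor of the corpus it was trained on.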