• 0 Posts
  • 11 Comments
Joined 2 years ago
Cake day: September 11th, 2023


  • Hoimo@ani.social to Technology@lemmy.world · autofocus glasses
    5 days ago

    How do prescriptions for glasses even work on your side of the pond? I assumed it was just jargon of a sort, because round these parts I just go to a glasses seller and ask him for his strongest glasses. Then he says “no traveller, my strongest glasses are too strong for you, you can’t handle my strongest glasses” and does the eye test with me before making lenses at the proper strength.


  • I’m not sure I understand what you’re talking about exactly. With root I can access all files on my device (including /data/data, where apps keep their internal files), and I can give apps permission to access all files too, if they ask for it. Not that I’d want that, because it’s way safer to keep user data in /storage/emulated/0 and grant read permissions at the file or folder level (like /Pictures for a gallery app, or just the one picture I want to share for a social media app).

    If you want to share data between apps, the easiest way is to give them access to the same folder in user space. That isn’t the OS maker’s generosity; that’s basic security controls.
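    The folder-level sharing described above can be sketched with plain Unix permissions. This is a desktop-shell analogy, not real Android tooling: the paths, filename, and file contents are made up, and Android’s actual mechanism layers scoped-storage checks on top of this.

    ```shell
    # Analogy: two "apps" sharing one folder in user space.
    # A hypothetical shared Pictures folder with one fake image:
    mkdir -p shared/Pictures
    printf 'fake-image-bytes' > shared/Pictures/cat.jpg

    # Grant read+traverse on the folders, read on the file,
    # i.e. permission scoped to exactly this folder and file:
    chmod 755 shared shared/Pictures
    chmod 644 shared/Pictures/cat.jpg

    # Any other process ("app") can now read the shared file,
    # without being able to touch anything outside the folder:
    cat shared/Pictures/cat.jpg
    ```

    The point of the analogy is that access is granted per folder or per file, not device-wide; nothing here requires root.
    
    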





  • Hoimo@ani.social to Technology@lemmy.world · *deleted by creator*
    3 months ago

    An image classification model isn’t really “AI” the way it’s marketed right now. If Google used an image classification model to give you holiday recommendations or answer general questions, everyone would immediately recognize it was being used for the wrong job. But use a token prediction model for purposes totally unrelated to predicting the next token and people are like “ChatGPT is my friend who tells me what to put on pizza and there’s nothing strange about that”.





  • This is probably because of a lack of training data: the model is referencing only one example, and that example just had a mistake in it.

    The one example could be flawless, but the output of an LLM is influenced by all of its input. 99.999% of that input is irrelevant to your situation, so of course it degrades the output.

    What you (and everyone else) need is a good search engine to find the needle in the haystack of human knowledge; you don’t need that haystack ground down to dust to give you a needle-shaped piece of crap with slightly more iron than average.