• 0 Posts
  • 22 Comments
Joined 3 years ago
Cake day: June 13th, 2023

  • the machine does not judge. the operator might, but to many people that is not really a factor, just an abstract invasion. you can’t whisper your problems to the person sitting next to you without multiple companies trying to sell you a solution.

    to someone in need, an ai is an attractive option. that has no bearing on whether the machine is qualified to help. and while i get the urgency to move people over to any better option, shame is the quickest way to push someone to the end of their rope.

    do not blame those who struggle. any port in a storm







  • ok so, i have serious reservations about how LLMs are used. but when used correctly they can be helpful. so where, and how?

    if you use it as a tutor, the same way you would ask a friend what a segment of code does, it will break down the code and explain it. it will get as nitty-gritty or as elementary-school-level as you wish, without judgement and in whatever manner you prefer. it will recommend best practices, and it will tell you why your code may not work, with the understanding that it does not have knowledge of the project you are working on (it’s not going to know the name of the function you are trying to load, but it will recommend checking for that while troubleshooting).

    it can rtfm for you and pull out the parts you need for anything with available documentation, and it will link to the source so you can verify it, which you should do often, just like you were taught to do with wikipedia articles.

    if you ask it for code, prepare to go through each line like a worksheet from high school to point out all the problems. while that is good exercise, for a practical case, meaning the task you are actually working on, it would be far better to write it yourself, because you should know the particulars and scope.

    also, it will format your code and add informational comments if you can’t be bothered, though they will be generic (see the sketch at the end of this comment for the sort of thing you get).

    again, treat it according to its actual scope, not what it’s sold as by charlatans.
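
    to make the “generic comments” point concrete, here is a minimal, hypothetical sketch of the kind of line-by-line commenting an LLM typically produces when it has no project context. the function and names are invented for the illustration, not taken from any real project.

      # Hypothetical illustration: generic, line-by-line comments of the kind
      # an LLM tends to add when it knows nothing about your project.

      def apply_discount(price, rate):
          # Check that the rate is between 0 and 1
          if not 0 <= rate <= 1:
              # Raise an error if the rate is invalid
              raise ValueError("rate must be between 0 and 1")
          # Calculate the discounted price
          discounted = price * (1 - rate)
          # Return the discounted price
          return discounted

    the comments restate what each line already says; they tell you nothing about why the code exists or how it fits the wider project.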






  • WraithGear@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 6 months ago

    You mean when the training data becomes more complete. But that’s the thing: when this issue was being tested, the ‘AI’ would swear up and down that the normally filled wine glasses were full. When it was pointed out that they were not in fact full, the ‘AI’ would agree and then change some other aspect of the picture it didn’t fully understand. You got wine glasses where the wine would half phase out of the bounds of the cup, and yet the glass would still be just as empty. No amount of additional checks will help without an appropriate reference.

    I use ‘AI’ extensively; I have one running locally on my computer that I swap out from time to time. I don’t have anything against its use, with certain exceptions. But I cannot stand people personifying it beyond its scope.

    Here is a good example. I am working on an app, so every once in a while I will send it code to check. But I have to be very careful. The code it spits out will be unoptimized, like: variable1 = IF(variable2 IS true, true, false) (see the sketch below this comment).

    Some have issues with object permanence, or with considering time outside their training data. It’s like claiming a computer can generate a truly random number just by making the function that calculates it more convoluted.
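
    A minimal sketch of the redundant pattern mentioned above and the simpler form it collapses to. The variable names are placeholders, not code from the actual app.

      # Hypothetical illustration of the unoptimized pattern described above.
      # variable2 is already a boolean, so re-deriving True/False from it is redundant.

      variable2 = len("example") > 3  # stand-in for some boolean condition

      # What the LLM tends to spit out: mapping a boolean back onto True/False.
      variable1 = True if variable2 is True else False

      # What it should be: the condition is already the answer.
      variable1 = variable2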



  • WraithGear@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 6 months ago

    “it was unable to link the concepts until it was literally created for it to regurgitate it out”

    -WraithGear

    The problem was solved before their patch. But the article just said that the model is changed by running it through a post check, just like what DeepSeek does. It does not talk about the fundamental flaw in how it creates; they assert it does, like they always have.



  • WraithGear@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 6 months ago

    Yes, on the second part. Just rearranging or replacing words in a text is not transformative, which is a requirement. There is an argument that ‘AI’ is capable of doing transformative work, but the tokenizing and weighting process is not magic, and in my use of multiple LLMs they do not have an understanding of the material any more than a dictionary understands the material printed on its pages.

    An example was the wine glass problem. Art ‘AI’s were unable to display a wine glass filled to the top. No matter how they were prompted, or what style they aped, they would fail to do so and report back that the glass was full. But they could render a full glass of water. They didn’t understand what a full glass was, not even for the water. How was this possible? Well, there was very little art of a full wine glass, because society has an unspoken rule that a full wine glass is the epitome of gluttony, and wine is to be savored, not drunk. Whereas references of full glasses of water were abundant. It doesn’t know what full means, just that pictures of a full glass of water are tied to the phrases full, glass, and water.




  • WraithGear@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 6 months ago

    If what you are saying is true, why were these ‘AI’s incapable of rendering a full wine glass? It ‘knows’ the concept of a full glass of water, but because of humanity’s social pressures, a full wine glass being the epitome of gluttony, artwork did not depict a full wine glass. No matter how AI prompters demanded it, it was unable to link the concepts until it was literally created for it to regurgitate it out. It seems ‘AI’ doesn’t really learn, but regurgitates art in collages of taken assets, smoothed over at the seams.