• 0 Posts
  • 97 Comments
Joined 2 years ago
Cake day: June 23rd, 2023

• enkers@sh.itjust.works to Technology@lemmy.world: I am disappointed in the AI discourse
    10 days ago

    Appreciate the correction. Happen to know of any whitepapers or articles I could read on it?

    Here’s the thing: I went out of my way to say I don’t know shit from bananas in this context, and I could very well be wrong. But the article certainly doesn’t sufficiently demonstrate why it’s right.

    Most technical articles I click on go through step-by-step processes to show how they gained understanding of the subject material, and it’s laid out in a manner that less technical people can still follow. The payoff is you come out feeling that you understand a little bit more than when you went in.

    This article is just full-on “trust me bro”. I went in with a mediocre understanding and came out about the same, but with a nasty taste in my mouth. Nothing of value was learned.


• enkers@sh.itjust.works to Technology@lemmy.world: I am disappointed in the AI discourse
    10 days ago

    I’ll preface this by saying I’m not an expert, and I don’t like to speak authoritatively on things that I’m not an expert in, so it’s possible I’m mistaken. Also I’ve had a drink or two, so that’s not helping, but here we go anyways.

    In the article, the author quips about a tweet in a way that suggests they’re the one who fundamentally misunderstands how LLMs work:

    I tabbed over to another tab, and the top post on my Bluesky feed was something along these lines:

    ChatGPT is not a search engine. It does not scan the web for information. You cannot use it as a search engine. LLMs only generate statistically likely sentences.

    The thing is… ChatGPT was over there, in the other tab, searching the web. And the answer I got was pretty good.

    The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. It’s not what we would generally consider a true index-based search.

    Training LLMs is a costly and time-consuming process, so it’s fundamentally impossible to regenerate an LLM on anything like the timescale it takes to build or update a simple index.

    The author fails to address any of these issues, which suggests to me that they don’t know what they’re talking about.

    I suppose I could concede that an LLM can fill a role similar to the one a search engine traditionally has, but that’d be kinda like saying a toaster is an oven. They’re both confined boxes which heat food, but good luck trying to bake two pies at once in a toaster.
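
    To make that distinction concrete, here’s a toy sketch (my own illustration, nothing from the article) of what an index-based search actually is under the hood, and why folding in new pages is cheap compared to retraining a model:

    ```python
    # Minimal inverted index: term -> set of document ids.
    from collections import defaultdict

    class InvertedIndex:
        def __init__(self):
            self.postings = defaultdict(set)
            self.docs = {}

        def add(self, doc_id, text):
            # Adding a brand-new page is a cheap, incremental update.
            self.docs[doc_id] = text
            for term in text.lower().split():
                self.postings[term].add(doc_id)

        def search(self, query):
            # Documents are searchable the moment they're added.
            hits = [self.postings[term] for term in query.lower().split()]
            if not hits:
                return []
            ids = set.intersection(*hits)
            return [self.docs[i] for i in sorted(ids)]

    index = InvertedIndex()
    index.add(1, "LLMs generate statistically likely text")
    index.add(2, "search engines index the web continuously")
    print(index.search("index web"))  # ['search engines index the web continuously']

    # An LLM, by contrast, only "knows" what was baked into its weights at
    # training time; incorporating new pages means retraining or fine-tuning,
    # which is orders of magnitude more expensive than index.add().
    ```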

  • I have a hard time considering something that has an immutable state as sentient, but since there’s no real definition of sentience, that’s a personal decision.

    Technical challenges aside, there’s no explicit reason that LLMs can’t do self-reinforcement of their own models.

    I think animal brains are also “fairly” deterministic, but their behaviour also depends on the presence of various neurotransmitters, so there’s a temporal/contextual element to it: situationally, our emotions can affect our thoughts, and LLMs don’t really have anything like that either.

    I guess it’d be possible to forward feed an “emotional state” as part of the LLM’s context to emulate that sort of animal brain behaviour.
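
    Something like this toy sketch is what I have in mind (entirely hypothetical; call_llm is just a placeholder for whatever completion API you’d actually use):

    ```python
    # Crude stand-in for neurotransmitter levels: an "emotional state" carried
    # across turns, prepended to the prompt, and decayed over time.

    def call_llm(prompt: str) -> str:
        # Placeholder: swap in a real model call here.
        return f"(model reply, conditioned on {len(prompt)} chars of context)"

    emotional_state = {"frustration": 0.2, "curiosity": 0.7}

    def decay(state, rate=0.9):
        # Temporal element: moods fade between turns.
        return {k: round(v * rate, 3) for k, v in state.items()}

    def build_prompt(user_message, state):
        # Forward-feed the current emotional state as part of the context,
        # so it can colour the next response.
        mood = ", ".join(f"{k}={v}" for k, v in state.items())
        return f"[internal emotional state: {mood}]\nUser: {user_message}\nAssistant:"

    def respond(user_message):
        global emotional_state
        reply = call_llm(build_prompt(user_message, emotional_state))
        emotional_state = decay(emotional_state)
        return reply

    print(respond("Why won't my code compile?"))
    ```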

  • Just to be clear, they were fully transparent about it:

    “Hello, just to be clear for everyone seeing this, I am a version of Chris Pelkey recreated through AI that uses my picture and my voice profile,” the stilted avatar says. “I was able to be digitally regenerated to share with you today. Here is insight into who I actually was in real life.”

    However, I think the following is somewhat misleading:

    The video goes back to the AI avatar. “I would like to make my own impact statement,” the avatar says.

    I have mixed feelings about the whole thing. It seems that the motivation was genuine compassion from the victim’s family, and a desire to honestly represent the victim to the best of their ability. But ultimately, it’s still the victim’s sister’s impact statement, not his.

    Here’s what the judge had to say:

    “I loved that AI, and thank you for that. As angry as you are, and as justifiably angry as the family is, I heard the forgiveness, and I know Mr. Horcasitas could appreciate it, but so did I,” Lang said immediately before sentencing Horcasitas. “I love the beauty in what Christopher, and I call him Christopher—I always call people by their last names, it’s a formality of the court—but I feel like calling him Christopher as we’ve gotten to know him today. I feel that that was genuine, because obviously the forgiveness of Mr. Horcasitas reflects the character I heard about today. But it also says something about the family, because you told me how angry you were, and you demanded the maximum sentence. And even though that’s what you wanted, you allowed Chris to speak from his heart as you saw it. I didn’t hear him asking for the maximum sentence.”

    I am concerned that it could set a precedent for misuse, though. The whole thing seems very grey to me. I’d suggest everyone read the whole article before passing judgement.