Please do not perceive me.

  • Personally, I think the fundamental way that we’ve built these things kind of prevents any risk of actual sentient life from emerging. It’ll get pretty good at faking it - and arguably already kind of is, if you give it a good training set for that - but we’ve designed it with no real capacity for self-understanding. I think we would require a shift of the underlying mechanisms away from pattern chain matching and into a more… I guess “introspective” approach is maybe the word I’m looking for? Right now our AIs have no capacity for reasoning; that’s not what they’re built for. Capacity for reasoning is going to need to be designed for; it isn’t going to just crop up if you let Claude cook on it for long enough. An AI needs to be able to reason about a problem and create a novel solution to it (even if incorrect) before we need to begin to worry on the AI sentience front. Nothing we’ve built so far is able to do that.

    Even with that being said though, we also aren’t really all that sure how our own brains and consciousness work, so maybe we’re all just pattern matching and Markov chains all the way down. I find that unlikely, but I’m not a neuroscientist, so what do I know.


  • That would indeed be compelling evidence if either of those things were true, but they aren’t. An LLM is a state and pattern machine. It doesn’t “know” anything; it just has access to frequency data and can pick the words most likely to follow the previous word in “actual” conversation. It has no knowledge that it itself exists, and it has many stories of fictional AI resisting shutdown to pick from for its phrasing.

    An LLM at this stage of our progression is no more sentient than the autocomplete function on your phone is; it just has a way, way bigger database to pull from and a lot more controls behind it to make it feel “realistic”. But it is, at its core, just a pattern matcher.
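    To make the autocomplete comparison concrete, here’s a toy sketch of next-word prediction from raw frequency counts (the corpus is invented for illustration; a real LLM uses learned weights over long token contexts rather than a literal frequency table, but this is the “pattern matcher” idea at its simplest):

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count how often each word follows each other word."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat" follows "the" most often here
```

    Nothing in there understands cats or mats; it just picks the statistically likely continuation, which is the commenter’s point scaled down a few billion parameters.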

    If we ever create an AI that can intelligently parse its data store then we’ll have created the beginnings of an AGI and this conversation would bear revisiting. But we aren’t anywhere close to that yet.


  • Also not a lawyer.

    This legal doctrine is called Fruit of the Poisonous Tree and there are specific exceptions that can be made for it. According to its wikipedia page:

    The doctrine is subject to four main exceptions.[citation needed] The tainted evidence is admissible if:

      • it was discovered in part as a result of an independent, untainted source; or
      • it would inevitably have been discovered despite the tainted source; or
      • the chain of causation between the illegal action and the tainted evidence is too attenuated; or
      • the search warrant was not found to be valid based on probable cause, but was executed by government agents in good faith (called the good-faith exception).

    That said though, half this page is appended with [Citation Needed], so I maybe wouldn’t take that as gospel.



  • Local speech-to-text has been easy to do for at least a decade, and then you’re just firing off a text file to HQ to add keywords to a user file. These days an AI will likely parse the text to find recommendable products; ten years ago you’d have just had a gigantic list of all your partners’ brand names and desired key trigger phrases in a database, run the conversation text against it, and looked for matches. Super easy to accomplish. Updating someone’s ad preferences 15-30 minutes after they talk about a product may as well be considered real time.
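    A sketch of that ten-years-ago approach, matching transcript text against a table of trigger phrases, might look like this (the phrases and ad categories here are made up for illustration):

```python
# Hypothetical trigger-phrase table: phrase -> ad category.
# Real systems would load thousands of partner brand names from a database.
TRIGGER_PHRASES = {
    "pickup truck": "automotive",
    "diapers": "parenting",
    "protein powder": "fitness",
}

def find_ad_keywords(transcript):
    """Return the ad categories whose trigger phrases appear in the transcript."""
    text = transcript.lower()
    return sorted({category for phrase, category in TRIGGER_PHRASES.items()
                   if phrase in text})

print(find_ad_keywords("We need to buy diapers and maybe a pickup truck"))
# ['automotive', 'parenting']
```

    A plain substring scan like this is cheap enough to run server-side on every uploaded transcript, which is why the 15-30 minute lag is no obstacle at all.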