It’s actually kind of worrisome that they have to guess it was his code based on the function/method name. Do these people not use version control? I guess not, and they sure as hell don’t do code reviews if this guy managed to get this code into production.
Yeah I see what you mean. There’s a decent argument to be made that something like reasoning appears as an emergent property in this kind of system, I’ll admit. Still, the fact that fundamentally the code works as a prediction engine rules out any sort of real cognition, even if it makes an impressive simulacrum. There’s just no ability to invent, no true novelty, which – to my mind at least – is the hallmark of actual reasoning.
It’s real. https://en.wikipedia.org/wiki/XAI_(company)
an open source reasoning AI
It’s still an LLM, right? I’m going to have to take issue with your use of the word ‘reasoning’ here.
To be clear: it’s not just “the law,” it’s the goddamn Constitution.