US senator Bernie Sanders amplified his recent criticism of artificial intelligence on Sunday, explicitly linking the financial ambition of “the richest people in the world” to economic insecurity for millions of Americans – and calling for a potential moratorium on new datacenters.

Sanders, a Vermont independent who caucuses with the Democratic party, said on CNN’s State of the Union that he was “fearful of a lot” when it came to AI. And the senator called it “the most consequential technology in the history of humanity” that will “transform” the US and the world in ways that had not been fully discussed.

“If there are no jobs and humans won’t be needed for most things, how do people get an income to feed their families, to get healthcare or to pay the rent?” Sanders said. “There’s not been one serious word of discussion in the Congress about that reality.”

  • jj4211@lemmy.world · 1 day ago

    “It is a different substrate for reasoning, emergent, statistical, and language-based, and it can still yield coherent, goal-directed outcomes.”

    That’s some buzzword bingo there… A very long-winded way of saying it isn’t human-like reasoning but you want to call it that anyway.

    Even if you can accept that the reasoning often fails to show continuity, well, there’s also the lying.

    Take a reasoning chain I examined around generating code for an embedded control scenario. At one point it says the code may affect how a motor is controlled, and so it will test whether the motor operates.

    Now the truth of the matter is that the model has no way to actually perform such a test; the reasoning chain is just a fiction. So it described a result anyway, asserting that it performed the test and that it passed, or failed. Not based on any test, but on text prediction. Sometimes it says the test failed, then carries on as if it passed; sometimes it decides to redo some code to address the “error” but leaves it broken in real life. And of course it can claim the code works when it didn’t at all.

    It can show how “reasoning” helps, though. The code was generated based on one application, but when people applied it to a motor control scenario they had issues, and generating the extra text caused the model to zero in on some Stack Overflow thread where someone made a similar mistake.

    • auraithx@lemmy.dbzer0.com · 20 hours ago

      I didn’t call it human-like reasoning? Just that reasoning isn’t limited to human-like reasoning.

      I’ve already covered your other points in this comment.