• 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: September 30th, 2023


  • I’m not excluding hiring good teachers and TAs from the picture. I’m not excluding paying them a good enough wage to attract talent either. But that’s another conversation.

    In my university days, lectures were paired with seminars. Those had a maximum size of about 30 students, plus a TA who would explain the lecture material and help apply it. The lecturer would visit seminars on rotation to keep the TAs' quality in check. And the kicker? The whole gang would be there for the (free-form) exam, including the grading.

    In short: it can be done, because that's how it used to be done.

    And personally, I hate multiple-choice tests: there's no opportunity to see the student's thought process, or to spot and be lenient towards those who got the theory right but forgot to carry a 1 somewhere. They simplified the grading, sure; now you can have a machine do it, but that's about it.
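
    To make the grading point concrete, here's a toy sketch (exam data entirely made up) of what machine grading a multiple-choice test amounts to: a dictionary lookup per question. Which is exactly why it scales, and exactly why it can't see a thought process or give partial credit.

    ```python
    # Hypothetical example: grading a multiple-choice test by machine.
    answer_key = {1: "B", 2: "D", 3: "A"}   # made-up exam
    submission = {1: "B", 2: "C", 3: "A"}   # made-up student answers

    score = sum(1 for q, correct in answer_key.items()
                if submission.get(q) == correct)

    print(f"{score}/{len(answer_key)}")  # 2/3; no partial credit, no insight
    ```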




  • I think you nailed it. In the grand scheme of things, critical thinking is always required.

    The problem is that, when it comes to LLMs, people seem to use magical thinking instead. I'm not an artist, so I oohed and aahed at some of the AI art I got to see, especially in the early days, before we were flooded with all this AI slop. But when I saw the coding shit it spewed? Thanks, I'll pass.

    The only legit use of AI in my field that I know of is a unit test generator, where generated tests were measured for stability and code coverage increase before being submitted for dev approval. But actual non-trivial, production-grade code? Hell no.
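
    I can only guess at that tool's internals, but a minimal sketch of the gating idea, using pytest and coverage.py, might look like the following. The paths, the 5-run stability threshold, and the file layout are all assumptions for illustration, not the actual tool.

    ```python
    # Sketch: accept an LLM-generated test only if it is stable (passes
    # several consecutive runs) and increases measured coverage.
    # Assumes the existing suite is green; all paths are made up.
    import json
    import os
    import shutil
    import subprocess

    RUNS = 5  # assumed stability threshold

    def coverage_percent() -> float:
        """Run the whole suite under coverage.py, return total % covered."""
        subprocess.run(["coverage", "run", "-m", "pytest", "-q"], check=True)
        subprocess.run(["coverage", "json", "-o", "coverage.json"], check=True)
        with open("coverage.json") as f:
            return json.load(f)["totals"]["percent_covered"]

    def is_stable(test_path: str) -> bool:
        """Re-run only the candidate; any failure counts as a flake."""
        return all(
            subprocess.run(["pytest", "-q", test_path]).returncode == 0
            for _ in range(RUNS)
        )

    candidate = "generated/test_candidate_0001.py"  # hypothetical LLM output
    target = os.path.join("tests", os.path.basename(candidate))

    baseline = coverage_percent()    # coverage without the candidate
    shutil.copy(candidate, target)   # provisionally add it to the suite

    if is_stable(target) and coverage_percent() > baseline:
        print(f"{candidate}: queue for dev review")
    else:
        os.remove(target)            # reject: flaky or no coverage gain
        print(f"{candidate}: rejected")
    ```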


  • ZeroGravitas@lemm.ee to Technology@lemmy.world · *deleted by creator* · 3 months ago

    You know, I was happy to dig through 9-year-old StackOverflow posts and adapt the answers to my needs, because at least those examples did work for somebody. LLMs, for me, are just glorified autocorrect functions, and I treat them as such.

    A colleague of mine recently had Copilot hallucinate a few Python functions that looked legit, ran without issue and did fuck all. We caught it in testing, but boy was that a wake-up call (the colleague in question has what you might call an early-adopter mindset).
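
    For anyone who hasn't hit this failure mode: below is a made-up reconstruction (not my colleague's actual code) of what that class of hallucination looks like. It imports fine, runs without a single error, and it takes an assertion in a test to expose it as a no-op.

    ```python
    # Invented example of a plausible-looking function that does fuck all.
    def normalize_records(records: list[dict]) -> list[dict]:
        """Supposedly strips whitespace from all string fields."""
        for record in records:
            for key, value in record.items():
                if isinstance(value, str):
                    value.strip()  # no-op: strip() returns a new string,
                                   # and the result is never written back
        return records

    def test_normalize_records():
        cleaned = normalize_records([{"name": "  Ada  "}])
        assert cleaned[0]["name"] == "Ada"  # fails: value is still "  Ada  "
    ```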