• 3 Posts
  • 18 Comments
Joined 2 years ago
Cake day: June 26th, 2023



  • I do not agree with the idea that humans are being trained to act like robots. Any company with a customer service department is likely tracking the root causes of their customers’ issues. With enough data, they can identify the most common problems and their solutions. If the goal is to resolve a customer’s issue as quickly as possible (which seems like a reasonable assumption), it makes sense to guide the customer through the most common solutions first, as that will likely solve the problem.

    If someone works in customer service and repeats the same script daily, it’s understandable that they may come across as robotic due to sheer boredom. A skilled customer service representative can recognize when to use the script and when to deviate. However, if a company fails to hire the right people and does not offer a fair salary, those best suited for the role are unlikely to take the job.


  • It appears this was a victim impact statement.

    A victim impact statement is a written or oral statement made as part of the judicial legal process, which allows crime victims the opportunity to speak during the sentencing of the convicted person or at subsequent parole hearings.

    From the article (emphasis mine):

    But the use of AI for a victim impact statement appears novel, according to Maura Grossman, a professor at the University of Waterloo who has studied the applications of AI in criminal and civil cases. She added that she did not see any major legal or ethical issues in Pelkey’s case.

    "Because this is in front of a judge, not a jury, and because the video wasn’t submitted as evidence per se, its impact is more limited," she told NPR via email.











  • At that point you may as well have used the search engine in the first place.

    I was going to respond to this but I think you did so yourself:

    I have asked searchy type questions and got some interesting links back which I probably wouldn’t have found on a normal web search with the terms I was using

    I think they work as supplements, not replacements. Like any tool, they have their uses and (for me) can enhance my searching, but I would not replace search with only an LLM. (Although I have never had any great luck with ChatGPT and links - they never work, as in ChatGPT gives me an anchor element without any link. It’s better at providing me with search terms and concepts to look up for what I need.)



  • I find ChatGPT to sometimes be excellent at giving me a direction, if not outright solving the problem, when I paste in errors I’m too lazy to search for. I say sometimes because other times it is just dead wrong.

    All the code I ask ChatGPT to write is usually along the lines of “I have these values that I need to verify; write code that verifies that nothing is empty and saves an error message for each value that is”, and then I work with the code it gives me from there. I never take it at face value.
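    A minimal sketch of the kind of validation code I mean - all the names (`validate_fields`, the form fields) are made up for illustration:

    ```python
    def validate_fields(fields: dict[str, str]) -> list[str]:
        """Return an error message for every field whose value is empty."""
        errors = []
        for name, value in fields.items():
            # Treat missing values and whitespace-only strings as empty.
            if not value or not value.strip():
                errors.append(f"Field '{name}' must not be empty.")
        return errors

    # Example usage with a hypothetical form
    form = {"username": "alice", "email": "", "bio": "   "}
    print(validate_fields(form))
    # → ["Field 'email' must not be empty.", "Field 'bio' must not be empty."]
    ```

    The point is that the generated code is a starting scaffold: I still decide what counts as “empty” and how the errors get reported.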

    Have you actually found that to be the case in anything complex though?

    I think that using LLMs to create complex code is the wrong use of the tool. In my opinion, they are better at providing structure to work from than at writing the code itself (unless it is something as simple as the example above).

    If a company cannot invest even a day to go through their hiring process and AI proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.

    I agree with you on that.




  • I think that LLMs just made it easier for people who want to know without learning. Reading all those posts all over the internet required you to understand what you pasted together if you wanted it to work (not always, but the bar was higher). With ChatGPT, you can just throw errors at it until you have the code you want.

    While the requirements never changed, the tools sure did and they made it a lot easier to not understand.