Test subjects who consulted AI were overwhelmingly willing to accept its answers without scrutiny, whether correct or not.

  • hostileempathy@lemmy.zip · 3 days ago

    Not surprising. We always want to take the path of least resistance. Twenty years ago you could access the world’s information via the internet, but you had to know how to search for things. We slowly went from that to “I’ll just google it” and “Well, Google says.” Now that we have LLMs (which IMHO are mostly just fancier, faster Google search), we have people legitimately saying “let me ask ChatGPT” and “ChatGPT says.”

    I think there is a positive future for LLMs and true AI, but it’s not right now, and it definitely is not in the hands of capitalists.

  • hperrin@lemmy.ca · 4 days ago

    I mentioned that I don’t use AI today, and the person I was talking to was really surprised. They didn’t understand how I could not use AI for anything.

    • FlashMobOfOne@lemmy.world · 3 days ago

      Part of why I support age-gating some things is the scary stories I hear from schoolteachers I know: whole classrooms of kids who have trouble concentrating on anything for more than 60 seconds, and the question they hear every day: “If AI can do this, why do I have to learn it in the first place?”

      (And don’t come at me about age gating because I don’t care to argue about it.)

    • valkyre09@lemmy.world · 4 days ago

      I saw somebody at work upload a firewall config XML and start asking whether stuff was blocked. I actually thought it was a pretty clever use of it.

      I probably wouldn’t trust it to write a config and upload it back, but as an assistant for an untrained eye it was pretty solid.

      I’ve also used copilot for silly things like

      “Take these 10 lines of process steps, make them sound professional and format them for easy reading”.

      Stuff like that isn’t my job, but when it lands on my desk it’s a quick way to get it done and get back to what I’m supposed to be focusing on.

      This is a long way of saying, there are definitely use cases, but nobody’s being replaced.

      • floofloof@lemmy.ca · 1 day ago

        I wonder what made them imagine an LLM could answer questions that involve reasoning about firewall configurations. It’s just not the kind of thing LLMs are good at: they can’t parse something specialist like that, build a model of it, and then simulate applying the rules one by one to evaluate the end state. An LLM will improvise a plausible-sounding answer based on similar-looking questions and answers in its training data, and that’s not trustworthy at all.
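        For contrast, actually evaluating the rules one by one is a small, deterministic scripting job. A minimal sketch, using a made-up rule schema (real firewall formats are far messier, but the first-match-wins logic is typical):

```python
import xml.etree.ElementTree as ET

# Hypothetical config: rules evaluated top to bottom, first match
# wins, with a default policy if nothing matches.
CONFIG = """
<firewall default="deny">
  <rule action="allow" proto="tcp" port="443"/>
  <rule action="deny"  proto="tcp" port="23"/>
  <rule action="allow" proto="udp" port="53"/>
</firewall>
"""

def is_blocked(xml_text: str, proto: str, port: int) -> bool:
    """Apply the rules in order and report whether (proto, port)
    traffic is blocked -- no guessing, just the config itself."""
    root = ET.fromstring(xml_text)
    for rule in root.findall("rule"):
        if rule.get("proto") == proto and int(rule.get("port")) == port:
            return rule.get("action") == "deny"  # first match decides
    return root.get("default") == "deny"  # fall back to default policy

print(is_blocked(CONFIG, "tcp", 443))   # False: explicitly allowed
print(is_blocked(CONFIG, "tcp", 23))    # True: explicitly denied
print(is_blocked(CONFIG, "tcp", 8080))  # True: default deny
```

        The point is that the answer comes from actually walking the rules, so it is the same every time; an LLM’s answer to the same question is a plausibility guess.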

      • okamiueru@lemmy.world · edited · 3 days ago

        I saw somebody in work upload a firewall config xml and start querying if stuff was blocked. I actually thought it was a pretty clever use of it.

        I would put it somewhere between worrisome and you-should-lose-your-job, depending on how important that firewall is. This might seem exaggerated, but imagine your colleague had shown that config to a child and then asked them yes-or-no questions, a game in which the child happily participated. I would consider that exactly as reasonable, and exactly as responsible, as asking an LLM. Imagine someone doing this for an important firewall config… and taking the child’s answers at face value. It should be fair to think that this person is grossly unqualified and showing a dangerous lack of judgment.

        And those are just the issues with using a bullshit generator as a source of truth. If the firewall config could be considered sensitive information, uploading it to a third party would be grounds for dismissal for entirely separate reasons.

    • Tyrq@lemmy.dbzer0.com · 4 days ago

      Yeah, that’s kind of the problem: they couldn’t think for themselves and would rather trust a hallucinating autocomplete program.

      • Dumhuvud@programming.dev · 4 days ago

        would rather trust a hallucinating autocomplete program

        I mean, outsourcing your thinking would still negatively affect your cognitive capabilities even if you were to rely on something actually intelligent.

  • orioler25@lemmy.world · 3 days ago

    Yeah, what a new and scary thing that started with AI and nothing else ever. Jfc, AI has been the best thing for liberal moralistic arguments about social degeneracy since social media.