Test subjects who consulted AI were overwhelmingly willing to accept its answers without scrutiny, whether correct or not.

  • floofloof@lemmy.ca · 2 days ago

    I wonder what made them imagine an LLM could answer questions that involve reasoning about firewall configurations. That’s just not the kind of thing LLMs are good at: they can’t parse a specialist format like that, build a model of it, and then simulate applying the rules one by one to evaluate the end state. An LLM will improvise a plausible-sounding answer based on similar-looking questions and answers in its training data, and that’s not trustworthy at all.
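
    The “applying the rules one by one” part is exactly what ordinary deterministic code does trivially and an LLM cannot guarantee. A minimal sketch of first-match firewall evaluation (the rule set, field layout, and names here are invented for illustration, not any real firewall syntax):

    ```python
    # First-match firewall evaluation: walk the rules in order; the first
    # rule whose pattern matches the packet decides the verdict.
    from ipaddress import ip_address, ip_network

    RULES = [
        # (source network, destination port or None for any, action)
        ("10.0.0.0/8", 22,   "deny"),
        ("10.0.0.0/8", None, "allow"),
        ("0.0.0.0/0",  443,  "allow"),
    ]
    DEFAULT_ACTION = "deny"  # nothing matched: fall through to the default

    def evaluate(src: str, dport: int) -> str:
        """Apply rules one by one; return the action of the first match."""
        for net, port, action in RULES:
            if ip_address(src) in ip_network(net) and port in (None, dport):
                return action
        return DEFAULT_ACTION
    ```

    With these rules, `evaluate("10.1.2.3", 22)` hits the first rule and returns `"deny"`, while `evaluate("10.1.2.3", 80)` falls through to the second and returns `"allow"`. The end state is fully determined by rule order, which is precisely the kind of step-by-step state tracking that pattern-matching on training data doesn’t give you.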