• FearMeAndDecay@literature.cafe
    1 day ago

That’s because chatbots are sycophantic. So initially it gives the answer it’s trained to give, and then as you talk to it, it learns that you want it to say x instead, so it says x

    • ozymandias@sh.itjust.works
      1 day ago

      that’s generally true, but it oversimplifies things a lot. it wouldn’t accept any of my claims until i provided links.
      stuff like, “there are only unverified claims of attempts to deport people to CECOT. if that had happened, there would be a lot of outrage and news articles. it would be illegal for the US to deport people to foreign prisons”

      so it outright denied each of my claims that it all adds up to fascism, until i went through the entire checklist of what defines fascism, with multiple sources… it took about half an hour.

      that then becomes training data and sourcing for the overall LLM, and will influence future conversations… with chuds who believe ChatGPT is god and who can’t provide reasonable rebuttals to well-sourced claims.

      i had a similar back and forth with Claude, now it’s defining trump as authoritarian without any further evidence required.

      LLMs have gotten a lot more complicated in recent years… they’re now little stacks of multiple neural networks working together to fix each other’s mistakes and whatnot.

      i think we’re getting closer to Immanuel Kant’s model of consciousness.

      i find the AI Futures Project to have some pretty interesting ideas on where we’re headed
      (70% of all humans dead in 5 years)