Google and OpenAI staff, many of them AI researchers, have signed an open letter saying they share Anthropic’s red lines. Privately OpenAI bosses agree.

  • technocrit@lemmy.dbzer0.com · 1 month ago

    Props to Anthropic versus other grifters…

    But they’re not holding any line. “AI” is being researched and developed almost entirely for the sake of imperialism, prisons, hating “China”, etc.

  • takeda@lemmy.dbzer0.com · 1 month ago

    Looks like the Friday deadline was set because they needed Anthropic for the attack?

    Am I missing something? How would an LLM be useful for operations like this?

    • technocrit@lemmy.dbzer0.com · 1 month ago

      As everyone else says, to avoid accountability. That’s the real “killer” app here.

      Just look at the use of “AI” in the ongoing Gaza genocide, the zio terror attacks on Lebanon, etc.

      This is already normal.

  • thesdev@feddit.org · 1 month ago

    I found it kind of funny to see some people scrambling to cancel their ChatGPT subscriptions after OpenAI swooped in to take the contract Anthropic refused, as if this were the first moral problem they’d found with using ChatGPT. But what’s even more bizarre is seeing a post on this community celebrating an AI company.

    • Turret3857@infosec.pub · 1 month ago

      We can’t stop these companies from being shitty by ourselves, but we can be happy when one of them chooses to be 0.01% less shitty than the rest.