• Encephalotrocity@feddit.online
    11 days ago

    Perhaps the most discussed technical detail is the “Undercover Mode.” This feature reveals that Anthropic uses Claude Code for “stealth” contributions to public open-source repositories.

    The system prompt discovered in the leak explicitly warns the model: “You are operating UNDERCOVER… Your commit messages… MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”

    Laws requiring AI usage to be explicitly declared should have been put in place years ago.

  • CorrectAlias@piefed.blahaj.zone
    11 days ago

    Be careful not to introduce security vulnerabilities such as command injection, XSS, SQL injection, and other OWASP top 10 vulnerabilities. If you notice that you wrote insecure code, immediately fix it.

    Lmao. I’m sure that will solve the problem of it writing insecure slop code.
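
    For context, the bugs that prompt names are trivial to demonstrate. A minimal sketch of SQL injection and its parameterized fix (hypothetical table and values, using Python's built-in sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable: user input spliced directly into the SQL string.
user_input = "' OR '1'='1"
query = f"SELECT name FROM users WHERE name = '{user_input}'"
leaked = conn.execute(query).fetchall()  # the OR clause matches every row

# Safe: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()  # no user is literally named "' OR '1'='1", so nothing matches
```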

    • filcuk@lemmy.zip
      11 days ago

      It doesn’t fix it, but as stupid as it looks, it should actually improve the chances.
      If you’ve seen how the reasoning works, they basically spit out some garbage, then read it again and judge whether it’s garbage or not.
      They do try to ‘correct their errors’, so to speak.
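
      That draft-then-review loop can be sketched in a few lines (the "model" calls below are toy stand-ins, purely illustrative):

```python
# Toy sketch of a draft/critique/revise loop; each function is a
# stand-in for a model pass, not a real API call.
def draft(prompt: str) -> str:
    # First pass: may contain an obvious flaw.
    return "password = 'hunter2'  # hardcoded secret"

def critique(text: str) -> bool:
    # Re-read step: flag output that fails a simple check.
    return "hardcoded" in text

def revise(text: str) -> str:
    # Correction pass, triggered only when the critique fires.
    return "password = os.environ['DB_PASSWORD']"

output = draft("connect to the database")
if critique(output):
    output = revise(output)
```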

    • hactar42@lemmy.ml
      10 days ago

      I think I saw that one of the keywords was dumbass. And another looked for you calling it a piece of shit.
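
      If it really does keyword-match on insults, that check is about as simple as it sounds (hypothetical word list, just to show the idea):

```python
# Hypothetical sketch of a frustration trigger based on a keyword list.
TRIGGERS = {"dumbass", "piece of shit"}

def is_frustrated(message: str) -> bool:
    # Case-insensitive substring match against the trigger list.
    text = message.lower()
    return any(trigger in text for trigger in TRIGGERS)
```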

      • smeenz@lemmy.nz
        10 days ago

        Something in a song on my car radio triggered my phone to wake Google yesterday, and I casually told it to fuck off. It replied, “I’m sorry you’re upset. You can send feedback.”