• cannedtuna@lemmy.worldOP · 16 days ago

    I love Lutris, but man he really screwed himself with his immature approach to this.

    I was also suspicious that those Claude co-authorship lines would raise some issues in the open source community

    He knew it would cause a fuss when it came to light that he was using AI, so what does he do?

    I configured Claude code to skip the co-authorship line in git commits.
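    (For context, a minimal sketch of what that configuration typically looks like, assuming Claude Code's project settings file and its `includeCoAuthoredBy` option; the dev's exact setup isn't shown in the interview:)

```shell
# Hedged sketch, not the dev's actual config: Claude Code reads project
# settings from .claude/settings.json, and setting "includeCoAuthoredBy"
# to false omits the "Co-Authored-By: Claude" trailer from git commits.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "includeCoAuthoredBy": false
}
EOF
```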

    Rather than be fully transparent up front, he hid it. And his response when called out on it was doubly childish.

    A lot of people didn’t like how I initially worded my response, something like “good luck figuring out what committed by me or by Claude now that the co-authorship is gone”.

    His justifications for the use of it are irrelevant.

    I still considered the Claude generated code as something I could have written, just slower.

    I also like using Claude to commit code I’ve written myself because it just writes good commit messages…

    And his reasoning for why he thinks people are upset at his use of AI shows he doesn’t understand the issue.

    I think a lot of the critics think that AI generated code should be flawless.

    This doesn’t invalidate the technology as a whole…

    This ignores the many issues with AI that do invalidate it, one of which is that it's inherently anti-FOSS, which he doesn't seem to mind except insofar as he might be sued for copying someone's code.

    Also, there is enough open source code available that I would hope Anthropic doesn’t feel the need to train their models on potentially litigious code base.

    Lmao. Sure.

    In his original comments on GitHub he shifts the blame to capitalism at large, but doesn't see continuing to pay into AI and further normalizing its use as problematic.

    So he doesn’t seem to get it I guess.

    The rest of the interview is mostly just pro-AI.

    • JackbyDev@programming.dev · 14 days ago

      it's inherently anti-FOSS

      My dream is that we get some court to rule that code created by AI is specifically created by a machine, not the prompter, so it’s in the public domain. I seriously doubt we’ll see that, but I can hope.

      (This is not a rebuttal, just discussion. I am not saying people should be pro-AI.)

  • BlackLaZoR@lemmy.world · 9 days ago

    So in a nutshell:

    • Dev uses AI and discloses AI commits.
    • Dev gets harassed by a bunch of anti-AI manchildren.
    • Dev gets pissed off and removes AI disclosures.

    Mob be like: How dare you!?

    • cannedtuna@lemmy.worldOP · 9 days ago

      Interesting, you must be reading an article based in a different reality. The news and interview I read show that he:

      • Used AI, and hid his use of it by stripping the Claude co-authorship tag

      so I configured Claude code to skip the co-authorship line in git commits.

      • A user notices an increase in AI-assisted commits and opens an issue asking about it

      • Dev, after being called out on it, responds petulantly that he already hid what he was doing, so what are you going to do about it

      Clanker-lovers be like: I see no problem here

      And again, this only came out because GitHub added a co-authorship tag to AI-assisted commits; if GitHub hadn't forced the transparency, it would all have flown under the radar.

      Link to GitHub issue where this started

  • chicken@lemmy.dbzer0.com · 16 days ago

    A lot of people didn’t like how I initially worded my response, something like “good luck figuring out what committed by me or by Claude now that the co-authorship is gone”. But really, that’s the point. To this day, I haven’t had a report pointing to a specific piece of code saying “this is AI generated nonsense hallucination”, all the concerns have been about the broader use of AI. It doesn’t mean the code is perfect, some bugs were found. But those bugs were pretty much on the same level as some found in human submitted patches.

    Seems reasonable to not want to give extra ammunition to people who just want to harass the project about any use of AI as opposed to contributing anything constructive.