• Sunless Game Studios@lemmy.world · 29 days ago

    I know at least one writing major who won an award from his volunteer work at Wikipedia. He did it as a hobby. They don’t really need AI, they need people like him.

      • Sunless Game Studios@lemmy.world · 7 days ago

        If you don’t know, it’s not easy to explain completely. Basically, they offered bounties for poorly written articles, or awards if a rewrite was done particularly well and recognized as such.

  • infeeeee@lemmy.zip · 29 days ago

    Saved you a click:

    After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.

    First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing assistance tool. The policy says, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

    The second exemption for LLMs is with translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.

    • Rioting Pacifist@lemmy.world · 29 days ago

      AIbros: we’re creating God!!!

      AI users: it can do translation & reformatting pretty well, but you’ve got to check it’s not chatting shit

      • halcyoncmdr@piefed.social · 29 days ago

        The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they’re asking anyway. All output needs to be verified before being used or relied upon.

        The “AI” is just streamlining the process to save time.

        Relying on it otherwise is stupid and just proves instantly that you are incompetent.

        • Zagorath@quokk.au · 29 days ago

          the user needs to be smart enough to do whatever they’re asking anyway

          I’m gonna say that’s ideal but not strictly necessary. What’s needed is that the user is capable of properly verifying the output. Anyone who could do the task themselves definitely can verify it, but verification is a broader skill: it’s easier to verify a result than to produce it. Think of how film critics don’t necessarily need to be filmmakers, or the P=NP question in computer science.

          • Pyro@programming.dev · 29 days ago

            But if the output has issues, what’re you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI’s mistakes yourself.

            • 42firehawk@fedinsfw.app · 7 days ago

              In Wikipedia’s case, the worst outcome is that you fail to make an edit or new post. So people who can verify but not write can check whether the AI produced a usable, up-to-standard post, hopefully saving enough time and bulk that that group eventually learns to write properly themselves, while leaving the articles AI will fuck up to people who can do it right.