• ZDL@lazysoci.al · 2 days ago

    Industrial Revolution: New machines almost instantly made factory owners 1000 to 10000 times wealthier.

    LLMbeciles: New machines can’t count the ‘r’s in “strawberry”.

    • pivot_root@lemmy.world · 2 days ago

      New machines can’t count the ‘r’s in “strawberry”.

      No, they can! They just need a system prompt instructing them to generate and run a Python script to do it (see the sketch below).

      And yet, it’s us meatbags being called inefficient.
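
      The script itself is about as trivial as it gets. A minimal sketch of what such a generated script boils down to (nothing vendor-specific, just plain Python):

      word = "strawberry"
      letter = "r"
      # Deterministic character counting, which is exactly what the model itself can't do reliably.
      count = sum(1 for ch in word.lower() if ch == letter.lower())
      print(f"There are {count} '{letter}'s in \"{word}\".")  # prints: There are 3 'r's in "strawberry".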

    • vividspecter@aussie.zone · 2 days ago

      Also, the industrial revolution fucked over a lot of people during the transition period, so even if it were an accurate comparison, it would be rather callous to celebrate it.

      • ZDL@lazysoci.al · 1 day ago

        This is precisely why I am suspicious of any and all “disruptive” technologies.

        If this technology is so disruptive that it will generate unparalleled wealth for society, then that’s enough wealth that you can afford to keep paying the people who are about to be displaced and have their livelihoods destroyed. Don’t want to do this? Fuck your “disruption”.

        (And if it’s like LLMs, it won’t be positively disruptive in any light. It’s just a Ponzi scheme for the highest stakes ever.)

      • Tollana1234567@lemmy.today · 1 day ago

        It did have value in the end, but LLMs just fuck people over and pollute the planet even more aggressively.

    • sheetzoos@lemmy.world · 23 hours ago

      Your echo chamber told you something, and you failed to verify that info. You’re no better than the “LLMbeciles”.

      • jj4211@lemmy.world · 16 hours ago

        The thing is, these sorts of hiccups happen all the time, but every time one of them escalates to ‘meme’ status, the vendors can paper over it in pretty short order.

        When you use them routinely, you see these hiccups regularly on random things you weren’t expecting, but once one of those hiccups goes viral, it stops reproducing.

        I barely got in in time to see the seahorse emoji before the meme became self-defeating.

        The viral instances only work very briefly to illustrate a behavior, since very well-known specific examples get covered. In your case, at one point all the LLMs were suddenly really good at knowing the letters in “strawberry”, but if you asked about other words they would still fall over, because only that one specific case had been handled. By now, I suspect most have implemented a scheme to ensure a more appropriate mechanism handles counting letters in a word, to spare the embarrassment.
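
        What that kind of scheme probably amounts to is routing the known-embarrassing question shape to a deterministic helper before the model ever sees it. A purely hypothetical sketch (the regex and function names are mine, not anything a vendor has published):

        import re

        def count_letter(word: str, letter: str) -> int:
            # Deterministic counting; no model involved.
            return word.lower().count(letter.lower())

        def answer(query: str, llm) -> str:
            # Intercept the famous "how many r's in strawberry" shape of question.
            m = re.search(r"how many (\w)'?s? (?:are )?in [\"“]?(\w+)", query.lower())
            if m:
                letter, word = m.group(1), m.group(2)
                return f"There are {count_letter(word, letter)} {letter}'s in \"{word}\"."
            return llm(query)  # everything else still goes to the model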

        • sheetzoos@lemmy.world · edited · 14 hours ago

          I’m glad you’ve taken a nuanced approach to the issue. The technology is constantly changing and there are lots of genuine reasons to be concerned about AI. This just isn’t one of them anymore.

          • jj4211@lemmy.world · 2 hours ago

            I wouldn’t say the inability to count the ‘r’s in strawberry was ever a ‘concern’ so much as a demonstrator. It demonstrated two things.

            One, a quirk of how tokens work, which is a pretty benign limitation in and of itself, perhaps a bit amusing. We don’t really need GenAI’s help to do nitty-gritty stuff with letters.
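
            You can see the quirk for yourself. A minimal sketch, assuming the open-source tiktoken tokenizer library is installed (the exact split varies by encoding, but the point is that the model sees integer ids, not letters):

            import tiktoken

            enc = tiktoken.get_encoding("cl100k_base")
            ids = enc.encode("strawberry")
            pieces = [enc.decode([i]) for i in ids]
            # The model works on the integer token ids, never on individual
            # characters, so "count the r's" asks about something it can't see directly.
            print(ids, pieces)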

            The more troubling facet was that it would spit out something like “There is one r in strawberry” instead of “Due to limitations of the technology, that answer is unavailable”. The tendency to spew something that structurally resembles the desired result, with apparent confidence and certainty, despite there being no basis for it being true, is on display there, and it is absolutely still the case broadly. The challenge is that humans aren’t used to being bombarded with that baseless certainty and have a hard time gauging credibility when fact and fiction are presented with equal apparent confidence. Certainly some business leaders and politicians thrive on the confident but dumb answer, but generally we recognize those as bad scenarios, and LLMs firmly share that trait.

      • ZDL@lazysoci.al · 16 hours ago

        Seriously? The strawberry thing is well-documented (and personally tested). But of course the people who make the systems don’t ever change them to fix these humiliating errors.

        This is the best you can do to defend your slop machines? HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA!

        “LLM can too count the Rs in Strawberry (now)!” isn’t the flex you seem to think it is, Sparky.

        • sheetzoos@lemmy.world · 14 hours ago

          There are lots of great reasons to be critical of AI. Point to the power consumption, point to the safety issues, but avoid using out-of-date examples, as they weaken your stance.

          • ZDL@lazysoci.al · 11 hours ago

            Why don’t you point to the things you want to stress and let others point to what they want to make fun of, Sparky?

            Since, however, the point of the strawberry thing is escaping you, let me explain it.

            EARLY automation was a game changer, pretty much from the day it was introduced. The spinning jenny magnified worker productivity 8-fold in its very first model, and later models expanded on that rapidly. The production of yarn increased dramatically while the price of any individual unit of it plummeted. A chronic shortage of weft yarn that had been limiting the weaving industry at large suddenly vanished. Any company that put a spinning jenny into its production line saw instant, massive benefits.

            EARLY LLMs were amusing idiots recommending glue on pizza and eating rocks. Later LLMs were amusing idiots that couldn’t count the letters in strawberry. Current LLMs are amusing idiots that can’t find the obvious solution to a trivial problem that I literally just tested before posting this:

            At NO POINT have LLMs done anything anywhere near as impactful and beneficial as the Industrial Revolution did. They did, however, match and then exceed all the bad effects of the Industrial Revolution, so hey, at least they accomplished something!

            So, Sparky, though the strawberry thing may have been clumsily fixed, here’s another just-taken snapshot showing that even the latest LLMbeciles are still trivial to fool, hallucinating the dumbest fucking thing that a TODDLER could find the answer to:

            But hey, now they can count the Rs in “strawberry” finally! Good job! Two big thumbs up!

            • jj4211@lemmy.world · 2 hours ago

              Your first one is actually a pretty good example of something a lot of LLM fanatics won’t believe. They believe in “reasoning” models as actually reasoning, and aren’t receptive to the reality that while it looks like reasoning, and while spending more tokens obviously does produce better final results, it isn’t “reasoning”.

              It declares step 3 to be ‘botched’ and then does the exact same step 3 and declares it good. From an actual reasoning perspective that makes no sense, since it would have been step 4 that was about to be the mistake.

              So it arrives at the correct sequence, but it clearly didn’t get there by “logic-ing” through it; it just modeled that a mistake should be acknowledged around step three, because that’s where people generally flub it, and presented the rationale that’s always given to explain it. It doesn’t make actual sense, but it has the effect of reaching the correct answer, just not through actual abstract reasoning.