• ZDL@lazysoci.al · 10 hours ago

    Why don’t you point to the things you want to stress and let others point to what they want to make fun of, Sparky?

    Since, however, the point of the strawberry thing is escaping you, let me explain it.

    EARLY automation was a game changer, pretty much from the day it was introduced. The spinning jenny magnified worker productivity 8-fold in its very first model, and later models multiplied it far further. Yarn production increased dramatically while the price of any individual unit of it plummeted. A chronic shortage of weft yarn that had been throttling the weaving industry at large suddenly vanished. Any company that put a spinning jenny into its production line saw instant, massive benefits.

    EARLY LLMs were amusing idiots recommending glue on pizza and eating rocks. Later LLMs were amusing idiots that couldn’t count the letters in “strawberry”. Current LLMs are amusing idiots that can’t find the obvious solution to a trivial problem that I literally just tested before posting this:

    [screenshot of the test]

    At NO POINT have LLMs done anything anywhere near as impactful and beneficial as the Industrial Revolution did. They did, however, match and then exceed all the bad effects of the Industrial Revolution, so hey, at least they accomplished something!

    So, Sparky, though the strawberry thing may have been clumsily fixed, here’s another just-taken snapshot showing that even the latest LLMbeciles are still trivial to fool, hallucinating on the dumbest fucking thing that a TODDLER could find the answer to:

    [screenshot of the test]

    But hey, now they can count the Rs in “strawberry” finally! Good job! Two big thumbs up!
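    For contrast, the letter-counting task is a deterministic one-liner in ordinary code; a minimal Python sketch (the word and letter here just match the running joke):

        # Count the letter "r" in "strawberry" -- the task the early models flubbed.
        word = "strawberry"
        print(word.count("r"))  # prints 3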

    • jj4211@lemmy.world · 56 minutes ago

      Your first one is actually a pretty good example of something a lot of LLM fanatics won’t believe. They believe that “reasoning” models are actually reasoning, and they aren’t receptive to the reality that while it looks like reasoning, and spending more tokens does yield better final results, it isn’t reasoning.

      It declares step 3 to be ‘botched’, then performs the exact same step 3 and declares it good. From an actual reasoning perspective that makes no sense, since it was step 4 that was about to be the mistake.

      So it arrives at the correct sequence, but it clearly didn’t get there by “logic-ing” its way through. It just modeled that a mistake should be acknowledged around step three, because that’s where people generally flub it, and presented the stock rationale that usually accompanies the correction. The rationale doesn’t actually make sense, but it has the effect of reaching the correct answer, just not through abstract reasoning.