• realitista@lemmus.org
    7 days ago

    I mean, the proof is in the pudding. Go ask Google Gemini to do some research on a topic you know something about and check its work. The result will be somewhere below expert level but above what a large portion of the population could produce themselves. If you want proof of that, go ask random kids you didn’t really know that well in high school to do the same research. Many of them will fail. If you are relatively intelligent, I’m sure you know which ones it will be, too.

    The reason I know that these randos on the internet don’t know how modern AIs work is that the experts don’t know either (I still work with many of them). They know how they constructed the algorithms that build the neural networks, but they don’t really understand how the resulting networks themselves are composed or how they work (though progress is being made here).

    To take your gasoline analogy, it’s as if someone comes along and says “gasoline can never explode, it just kind of barely burns.” Meanwhile you, who work in the field and know a bit about stoichiometry, know how to mix it with air, compress it, and combust it. You cannot explain to him exactly what is happening at the molecular level, but you know he’s wrong because you’ve worked in the field long enough to use it to produce useful results, and you have worked with the experts who derived the stoichiometric equations that prove it.

    • Carnelian@lemmy.world
      7 days ago

      I’ve done several assessments of the output of popular LLMs in my field of expertise. I generally conclude that they are “worse than worthless,” because they actively try to persuade you of false information.

      Your whole thesis about people whose output is “lesser” than LLMs is totally misguided. Yes, there is a systemic research and comprehension issue. No, the AI doesn’t help people with it. What I’ve observed is that people don’t really ever defer to the AI when it happens to contradict their beliefs; they just coax it until it says whatever they want, then end up problematically overconfident because “the AI told them so.”

      I could keep replying in regards to the unmotivated school children and the inappropriately reformatted analogy, but what’s the point if you’re just going to be a broken record? We all understand that you think most people are morons, and that you and your buddies have deep talks about AI in which you’ve concluded that nobody can really “know” anything well enough to comment on its capabilities. Yet in spite of this, you personally are able not just to “know” what it is capable of, but even how it stacks up against different types of humans. The line of reasoning is totally absurd.

      • realitista@lemmus.org
        7 days ago

        These are compelling points, and you are swaying my belief with them. I’d like to do a similar study and see the results.

        I certainly do not believe anything I said applied to school children. Honestly, I think they should be kept entirely away from any form of conversational AI until they have a fully developed frontal cortex and have been taught how to conduct research and think critically for themselves.