Young people have grown increasingly skeptical of artificial intelligence, even those who use it daily, according to a new Gallup poll of more than 1,500 people aged 14 to 29.

AI use among Gen Zers has neither declined nor increased since the same poll was conducted in 2025. The latest poll found that AI use is plateauing among young users, accompanied by rising concern about the technology’s consequences.

The findings are significant because Gen Z is “the generation most likely to enter or grow within the workforce over the next decade,” the report notes, meaning that their adoption could determine the trajectory of broader societal AI adoption. Gen Z has already overtaken Boomers in the workforce. Right now, the AI world is preparing for a massive jump in expected demand, and the top tech and financial companies are investing billions upon billions of dollars into building out the supply. Experts have warned that if demand does not pan out exactly as expected in the short term, then it could have disastrous consequences for the economy.

      • Hakuso@scribe.disroot.org · 2 days ago

        An institution uses a local model to help organize massive amounts of data, not crawl the whole web stealing everything in sight while being anthropomorphized by corporations trying to sell you a friend or a waifu…

        Which you won’t be able to run because RAM is $500 now.

        There are valid uses for LLMs, but I think everyone who markets them as “AI” is definitely selling a scam.

          • leftzero@lemmy.dbzer0.com · 2 days ago

            I didn’t say OpenAI’s or Nvidia’s, nor their investors’ (though, to be fair, Nvidia will probably still end up profiting once the bubble pops, the bastards; after all, in a gold rush it’s the ones selling the mining equipment who end up making a profit).

            I specifically mentioned the scammers on top, who will grab the cash and run as soon as it starts popping.

            The economy will end up worse than in the 1929 crash, sure, but not for those bastards.

            So, yeah, it can, and it is, because it’s what the whole scam was designed for.

      • CanIFishHere@lemmy.ca · 2 days ago

        I have a buddy who uses AI to read through contracts and identify high-risk commitments that might cost the company money. There are thousands more uses.

          • CanIFishHere@lemmy.ca · 2 days ago

            I thought it would be clear because it’s a contract, but we are talking about financial risks, not health risks. He is using a corporately trained AI client, and when it finds an issue, he (the human) still reviews it. According to my buddy, his productivity has improved by over 25%.

            • hark@lemmy.world · 1 day ago

              If the AI has missed risks and he didn’t bother checking (since this is where the added productivity comes from) then the company gets to enjoy those risks.

        • quack@lemmy.zip · 2 days ago

          That’s horrifying. I really hope he’s triple-checking everything.

          • CanIFishHere@lemmy.ca · 1 day ago

            He reviews anything the AI flags. As I already mentioned, the AI client is looking for financial risks, e.g. a contract committing the company to something it doesn’t have the capability of delivering. I used to do something very similar. One obvious example would be a customer asking for unlimited liability; the company can’t commit to that because it could bankrupt the company.
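A minimal sketch of the flag-then-review workflow described in this subthread, with plain pattern matching standing in for the LLM classifier (the patterns, function names, and sample clauses are all invented for illustration):

```python
import re

# Hypothetical simplification: flag contract clauses matching high-risk
# phrases; everything flagged is queued for a human to review.
RISK_PATTERNS = [
    r"unlimited liability",
    r"uncapped damages",
    r"indemnify .* against all claims",
]

def flag_risky_clauses(clauses):
    """Return (index, clause, matched_pattern) tuples for human review."""
    flagged = []
    for i, clause in enumerate(clauses):
        for pattern in RISK_PATTERNS:
            if re.search(pattern, clause, re.IGNORECASE):
                flagged.append((i, clause, pattern))
                break  # one match is enough to send the clause to review
    return flagged

contract = [
    "Supplier shall deliver the goods within 30 days of the order date.",
    "Customer requires Supplier to accept unlimited liability for defects.",
]
for idx, clause, pattern in flag_risky_clauses(contract):
    print(f"REVIEW clause {idx}: matched '{pattern}'")
```

The `break` mirrors the step described above: a single hit routes the whole clause to a human, so the model never approves anything on its own.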

    • o_oli@lemmy.world · 3 days ago

      For sure, it’s amazing for some things. But it also appears to do more than it actually does until you become familiar with it. I think everyone new to using AI should quiz it on topics they are knowledgeable in, to realise how much shit it makes up.

      Also yeah I’m specifically talking about LLMs because I think that’s 95%+ of AI usage right now in volume.

      • Grandwolf319@sh.itjust.works · 3 days ago

        For sure, it’s amazing for some things.

        I’m still skeptical about this.

        Most of those things are usually due to the alternative being intentionally bad.

        Like Google becoming bad, or bad company documentation, or corporate-speak emails that could just be straight to the point.

        • o_oli@lemmy.world · 2 days ago

          Maybe? But to give an example of why I think it’s been pretty cool: summarising my Dungeons & Dragons session notes, being available to answer questions, and spinning up ideas on the fly. I can take horrible and inconsistent notes with holes in them, and an LLM straightens them all out into any format I need. If I need a small piece of world building and ran out of time, I can get it to spit a few ideas at me; often generic ideas and tropes are actually what I’m after. If I forgot something that happened 6 months ago, I can just… ask it. It can pull up stuff I noted offhand and totally forgot about, no problem. For this sort of use, where it’s like an admin assistant and inaccuracy is totally unimportant, it’s a good tool.

          Maybe that’s a really niche example but it’s one of the few cases where I can see long term use with zero downsides.

          Ultimately it’s powerful at consolidating large volumes of information and allowing the user to probe at that information. As long as the use case can tolerate inaccuracies and hallucinations then it’s fine.
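A stripped-down sketch of that “probe your own notes” pattern, using plain word overlap in place of a real LLM or embedding search (the function and sample notes are invented for illustration):

```python
import re

def best_note(question, notes):
    """Return the note sharing the most words with the question —
    a toy stand-in for real retrieval feeding an LLM."""
    q_words = set(re.findall(r"\w+", question.lower()))
    return max(
        notes,
        key=lambda note: len(q_words & set(re.findall(r"\w+", note.lower()))),
    )

notes = [
    "Session 12: party met the dragon cult in Ravenhold",
    "Session 13: bought a cursed lute from the goblin market",
]
print(best_note("what happened with the dragon cult?", notes))
# → Session 12: party met the dragon cult in Ravenhold
```

The same shape scales up: index the messy notes once, retrieve the relevant fragments per question, and let the model phrase the answer, which is exactly the tolerance-for-inaccuracy use case described above.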