I’ve noticed an uptick in the number of pro-AI posts on this platform.

Various posts with titles similar to “When will people stop being afraid of AI” or “Can we please acknowledge AI was very needed for X.”

Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

  • mirshafie@europe.pub
    16 hours ago

    The strawman-building is that you’re extrapolating really, really far based on a tiny comment, and so you’re making wild assumptions that aren’t relevant to the conversation. The accusation that I’m hoping to be able to use LLMs to find bugs for nefarious reasons is far out. In fact, ironically, your text reads like something a badly (or maliciously) configured LLM would produce.

    I never claimed that somehow, unprompted, an LLM went out and found a bug. But LLMs are increasingly used as important tools in finding all kinds of problems in code. Going forward, as we get better at how to use these models, more bugs will likely be found. And if we can train other ML models on other kinds of data but with similar size, I think we’d be right to expect a lot.

    I have no doubt that misuse of LLMs and other machine learning models is widespread. The parapsychology aside, I’m worried about how it’s being used in war and targeting, which will only get worse.

    However, I think it’s a bit disingenuous to portray LLMs as glorified search engines or autocorrect. It’s technically correct, but the utility goes way beyond find-and-replace. It’s a bit like calling humans glorified tapeworms: it doesn’t make for an interesting discussion.

    I also think you’re wrong in asserting that LLMs or other ML models can only be useful for researchers on the edge of their fields. I guess we’ll see.