I’m just one random nerdy trans girl. …Oh come on, you’ve been around fediverse, surely you’ve seen us around?

Mastodon: @umbraroze@tech.lgbt

  • 1 Post
  • 8 Comments
Joined 3 years ago
Cake day: September 18th, 2023




  • Rose@slrpnk.net to memes@lemmy.world 🥹 · 1 day ago

    The only problem I see with it is the usual pitfall of LLMs.

    “We’re not stupid”, the AI fans say. “Of course we look through the output carefully, and will meticulously make sure it makes sense.” And then they don’t actually do that.

    “It’s only one of the tools at our disposal, don’t worry!” …so what other tools did these giant AI fans use, again? Oh.

    I mean, if people genuinely actually use LLMs just as one tool among many, and actually genuinely find a way to use them to improve their own output, it’s great! (For example, I’m using AI for image captioning and it cuts down a lot of work.)

    I’m just saying I’ve learned to take claims that people are using these tools responsibly with a grain of salt.



  • Ohhhhhh, the newbies don’t remember EsounD (the Enlightened Sound Daemon, from the Enlightenment project). Basically, it was an attempt at doing PulseAudio-esque stuff way back in the OSS era. Which is to say, it supported software mixing of multiple audio sources, because OSS usually only allowed a single process to output audio at a time. EsounD was janky and didn’t work well, obviously. Probably the neatest thing about it was that it exposed the mixed output stream to any other app, which made visualisers much easier to write (edit: another thing newbies in this day and age don’t realise, but I cannot emphasise enough how crucial visualisers were to the late-1990s / early-2000s music experience). ALSA supported hardware mixing (if available) out of the box, so of course it immediately became my favourite.
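    (For anyone who never had to think about this: “software mixing” really is as simple as it sounds. A toy sketch, not anything from EsounD’s actual code — just summing two signed 16-bit PCM streams sample-by-sample and clamping so loud passages clip instead of wrapping around:)

    ```python
    def mix_pcm16(a, b):
        """Naive software mix of two signed 16-bit PCM streams:
        sum corresponding samples, clamp to the int16 range."""
        return [max(-32768, min(32767, x + y)) for x, y in zip(a, b)]

    # Second pair would sum to 40000, which overflows int16, so it clamps.
    print(mix_pcm16([1000, 30000], [2000, 10000]))  # [3000, 32767]
    ```

    A daemon like EsounD did essentially that in a loop over every connected client’s stream before writing the result to the one OSS device it held open.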