- 4 Posts
- 8 Comments
brianpeiris@lemmy.ca (OP) to Fuck AI@lemmy.world • Florida officials investigate ChatGPT, OpenAI over alleged role in FSU shooting (English)
5 points · 2 days ago
I understand you’re trying to consider both sides of this for the sake of argument, but the issue I have with it is that it justifies current real-world harm in the name of a hypothetical (and arguably unlikely) future benefit.
brianpeiris@lemmy.ca to Fuck AI@lemmy.world • Found this interesting reading about AI. Well informed and unminced words (English)
1 point · 2 days ago
Maybe voice-over, narration, stock photography, copywriting.
brianpeiris@lemmy.ca to Fuck AI@lemmy.world • "The Local Alternative" (Art by David Revoy) (English)
16 points · 3 days ago
I’m not so sure that power usage should be dismissed so easily just because it is distributed instead of centralized. The slop-per-watt rate may even be worse than at a datacenter. Fundamentally, we should care more about efficiency.
Imagine a panel of 20 standard LED light bulbs, at about 9 watts each. That’s 180 watts, roughly what a GPU draws while a local LLM is doing any work. With that picture in mind, you have to ask yourself whether the benefit you’re getting out of your local LLM is really worth the energy cost. Monetarily speaking, it’s not a ton of money, because electricity is cheap, but would you leave that panel switched on for the duration of the task you’re performing? What if you could use conventional non-LLM methods to do it instead? Would that be more efficient? And where is your electricity coming from? Is it a solar farm, or a coal plant?
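As a rough back-of-the-envelope sketch of that arithmetic (the wattage, task length, and electricity price are illustrative assumptions, not measurements):

```python
# Rough energy/cost estimate for one local-LLM task.
# All three inputs are assumptions; plug in your own numbers.
gpu_watts = 180        # ~20 LED bulbs' worth of GPU draw
task_minutes = 10      # how long the GPU runs for the task
price_per_kwh = 0.15   # USD per kWh; varies widely by region

energy_kwh = (gpu_watts / 1000) * (task_minutes / 60)
cost_usd = energy_kwh * price_per_kwh
print(f"{energy_kwh * 1000:.0f} Wh per task, ~${cost_usd:.4f}")
# -> "30 Wh per task, ~$0.0045": trivial in dollars, which is exactly
#    why the dollar cost alone is a poor measure of the externalities.
```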
How was your local LLM trained? Was there copyrighted material in its training data set? Were low-wage workers asked to sift through horrendous content to clean up the data?
We need to consider the externalities, even when using local LLMs. We moved so quickly from the initial release of ChatGPT to now that we never stopped to ask these questions. They remain unanswered until someone cares enough to stop and think about them.
brianpeiris@lemmy.ca to Fuck AI@lemmy.world • "The Local Alternative" (Art by David Revoy) (English)
134 points · 3 days ago
Local slop is still slop
brianpeiris@lemmy.ca to Technology@lemmy.world • Sam Altman May Control Our Future—Can He Be Trusted? (English)
62 points · 5 days ago
Why is this so downvoted?
brianpeiris@lemmy.ca to Fuck AI@lemmy.world • AI and the human mind: only one is a black box [Paper Request] (English)
3 points · 5 days ago
Here you go. It’s very short:
> In their recent Comment article, Eddy Keming Chen et al. argue that current large language models (LLMs) already display human-level intelligence, based on behavioural evidence (see Nature 650, 36–40; 2026). I suggest that this framing obscures a fundamental asymmetry.
>
> The authors treat human minds and LLMs as two comparable systems: effectively, two black boxes that are evaluated by their outputs. But this symmetry is fictitious. Human intelligence is a natural phenomenon, from which the very concept of intelligence is reconstructed. The generative mechanisms of the human mind are not yet fully understood. By contrast, LLMs are systems that are designed and built. Their operating principles — statistical optimization of token prediction — are known, even if internal complexity makes it difficult to retrace the steps that produce the outputs. LLMs are complex, but they are not inherently mysterious black boxes.
>
> When we attribute intelligence to humans, no alternative explanation for their cognitive behaviour is available, nor is it needed. But there is a sufficient explanation for the behaviour of LLMs, which does not infer understanding or intelligence: the known generative mechanism itself.
>
> This does not mean that artificial general intelligence is impossible in principle. But establishing it would require evidence that the cognitive behaviour of a system cannot be fully accounted for by its known generative mechanism alone.
Here’s the vid: https://www.youtube.com/shorts/GAMkRJdu9j4

I’m very surprised they’re even doing this.