I’ve noticed an uptick in the number of pro-AI posts on this platform.
Various posts with titles along the lines of “When will people stop being afraid of AI” or “Can we please acknowledge AI was very needed for X”.
Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.


You mean Slopware “Development”?
(I opted to keep the “Development”, putting it in quotes as a sarcastic nod to the fact it’s no longer actual development)
Sort of. A friend used it to generate some “tests” of questionable quality; a cousin is using it to help her learn and use a DSL (my term, not hers) for interactive tasks for her students; another friend was using it for source code generation, but I don’t recall the specific results.
I disagree that it is no longer development. I see LLMs as yet another tool for generating code, and we’ve had generated “source” code since before C was standardized. I think any code output by most LLMs is derivative of so many works under so many licenses that it is likely not possible to distribute it at all without violating some copyright, and it is certainly unacceptable for any Free Software project; I think this is ethically true even if courts find LLM outputs are not derivative works or not subject to copyright protection at all – at least as long as copyright protects Disney. But I know people who are working on a Free Software LLM, and “the Stack” provides enough information that you could provide all the necessary attributions for works derived from it.
While LLM hallucinations are a real concern, they can be less impactful in code generation because of automated static checks plus the culture of peer review. But then, I also tend to favor languages with static type systems.
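To make that concrete, here’s a minimal, hypothetical sketch (the function and names are mine, not from any real project): with type annotations in place, a type checker such as mypy flags plausible LLM mistakes before the code ever runs, and a plain unit test covers what the types can’t express.

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number from a string, rejecting out-of-range values."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# Plausible hallucinated calls an LLM might emit:
#     parse_port(8080)            # wrong argument type (int, not str)
#     parse_port("8080").upper()  # int has no .upper() method
# Both are rejected by a static type checker without running anything.

# A unit test then covers the runtime behaviour the types cannot express:
assert parse_port("8080") == 8080
```

None of this makes hallucinations impossible, but each automated layer (type checker, tests, then human review) narrows what slips through.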
Fair. There is a difference between using LLMs to generate boilerplate code customised to your context, or to provide a starting point when you’re stuck on a problem and struggling to find a different perspective for approaching it, and using them to get around having to do mental work.
My term is intended for the kind of vibe coding where there is little, if any, technical skill involved and people are just letting LLMs slop together code without meaningful code quality assurance. In those cases, I don’t think it warrants recognition as development. If it produces workable results, cool. Call it software generation.
Using it as a learning assistant would probably be the most justified use case in my opinion. I have my reservations about whether it is suitable for that purpose, but I don’t know enough about the specific way it is applied to comment on that. If it produces training code that isn’t directly published, you dodge the legal iffiness, and if it helps build skills, that solves the “relying on AI makes you unlearn skills” issue.