- cross-posted to:
- fuck_ai@lemmy.world
cross-posted from: https://lemmy.world/post/44699253
This is clearly a sign that the product failed to draw in enough customers and its viability was overhyped.
Hopefully, it is the start of the AI bubble bursting.
Finally, some good news
It’s so they can repurpose that capacity for developing robots. It’s not good at all.
OpenAI told the BBC on Wednesday that it has discontinued Sora so that it can focus on other developments, such as robotics “that will help people solve real-world, physical tasks”.
Robots aren’t like software: it’s immediately obvious when they don’t work the way they’re advertised, whereas chatbots can trick people into thinking they’re way more useful than they actually are. The “fake it till you make it,” “move fast and break things” ethos of tech doesn’t work when there’s actual, physical evidence that shit’s busted.
Unpopular Opinion Incoming
I was assigned at work to evaluate a few LLMs for potential adoption, so I spent a solid week doing so.
Most of the “AI is broken and doesn’t work” sentiment on here is solid echo chamber cope. It’s more competent than several of my coworkers, though it’s thankfully not ready to replace knowledge workers, since it requires a baseline of knowledge to direct it well and evaluate its answers.
I still advised against using it for multiple reasons, including ethics, but much of Lemmy is playing make believe about the actual capabilities of LLMs.
Cool anecdote. Every time we actually see real data, though, the numbers don’t reflect much in the way of productivity gains or increased efficiency or better output. People say that LLMs are useful because it feels useful, but we aren’t seeing actual usefulness. The most recent study out of Duke University observes “a productivity paradox, in which perceived productivity gains are larger than measured productivity gains, likely reflecting a delay in revenue realizations.”
A delay. Sure.