"Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving AGI." I can't believe they're still claiming that generative models can "understand" the world. They can't understand anything. They're just fancy statistical models that score outputs by how closely they match previously seen "good" examples. You don't get AGI from that. You can't get AGI from that. Irresponsible claims.
@CatherineFlick@mastodon.me.uk the “I” in openAI stands for “irresponsible”. Fuckers.
@CatherineFlick It's so frustrating to me that the whole field is behaving in such an irresponsible way. No amount of data is going to provide any kind of guarantee of truth or understanding. The machinery is not designed for that! But it's how it's being sold!
It would be different if folks were working on trying to ground LLMs with some kind of truth model, like a knowledge graph or something. Instead everyone is handwaving and promising AGI. It's dishearteningly unscientific.
@CatherineFlick The whole thing is really bumming me out. It hits too close to home, and it's going to damage ML efforts long-term.
@proprietous yes! It’s so frustrating. A noble field of computer science reduced to fairy tale and grift.
@CatherineFlick @proprietous One of our great computer scientists once said, "people think computing moves quickly, but it's just the wheel of reincarnation turning faster." So after the last AI winter, brought on by excessive claims, folks decided to do a new thing called ML, which was described to me as "well-founded statistical techniques to process high-dimensional data" and nothing to do with AI. I just wonder what they'll call the backlash this time.
@drdrmc sorry for the 'well actually', but ML came first, and was indeed based on statistics, such as early automatic number recognition on cheques (remember them?).
@ExcelAnalytics @drdrmc Good point! Seems that both phrases were first used in CS in the 1950s. But it does seem that after the AI winter of the 80s, ML became the more popular name/lens through which to view statistical learning, at least until now.
Honestly, I wish AI had stayed in the dustbin. It's got too much baggage.
Which is why I had my brainstorm today:
http://brander.ca/stackback#aworkers
...that we must drop the term "AI" entirely, because these systems exhibit nothing like intelligence, not even the intelligence of a mouse.
Call them "Artificial Workers" to emphasize their social role: taking over work from humans.
@RoyBrander I like the idea but I think “workers” anthropomorphises them too much. I’d have to think of a better alternative!
@CatherineFlick My LLM of choice is HeyPi, which is very good at accurately answering questions, even complicated ones, if the answer can be found somewhere on the internet.
But the wheels fall off if you ask it a question that isn't answered online. Maybe it'll say it couldn't find anything, but sometimes it just makes shit up.
Then if you ask it, "That doesn't sound right to me, do you have any sources?" it makes up sources.
Recent example: https://pi.ai/s/9uSNdo2o4RAexkrBGFyZp
This is not understanding.
@jik Yeah, they're deluding themselves (and others, and I'd suggest maliciously) if they think these models are capable of any sort of understanding of what they're saying, let alone of the world. It's pure fantasy. (And possibly fraud; IANAL.)
@CatherineFlick I certainly don't claim to be an expert, but to the extent that AGI is a goal and one we should be trying to achieve, my gut instinct is that LLMs are probably an AGI dead-end. I'm worried that they're sucking all the air out of the room and preventing research into other approaches, similar to what the amyloid theory has done to Alzheimer's research.
It's also having a huge negative environmental impact, worsening mis/disinfo by orders of magnitude, and enshittifying everything.
@jik Yeah, there are definitely places where machine learning models can be super useful and powerful but yeah, this search for a new tech god is fruitless and wastes a lot of energy that could be better used on these more suitable purposes!