They’re still lying. For fuck’s sake. It’s like they impaled you on a pike and just admitted “okay so we did prick you with that needle”.
ALL IT DOES IS HALLUCINATE. ALL IT DOES IS HALLUCINATE. ALL IT DOES IS HALLUCINATE. ALL IT DOES IS HALLUCINATE!
SOMETIMES the hallucinations happen to resemble reality. Just because a hallucination happens to look similar to reality does not make it real.
IT IS NOT PERCEIVING REALITY.
EVER!
EVER!
Ever.
It’s unfortunate that the word “hallucination” even got associated with LLMs in the first place… Hallucination refers to an erroneous perception. These chatbots don’t have the capacity to perceive at all, let alone perceive erroneously. They have no senses, no awareness, no intentions, nothing. It’s just an inanimate machine crunching through a complex formula. Any resemblance to reality is purely coincidental.
Great. With that mathematically settled, can we spend all that money, computing hardware and electricity on something more worthwhile now?
Best I can do is crypto mining.
No.
Well, fuck us, I guess.
The OpenAI research identified three mathematical factors that made hallucinations inevitable: epistemic uncertainty when information appeared rarely in training data, model limitations where tasks exceeded current architectures’ representational capacity, and computational intractability where even superintelligent systems could not solve cryptographically hard problems.
Perhaps it’s because I’m dumb, but does anyone else feel like there are lots of buzzwords here?
Like, does “cryptographically hard” refer to NP-hard or something??
This may be a mean thing to say, but could parts of this article be generated by an LLM?
I think it’s saying that LLMs won’t crack public-key cryptography no matter how many times you ask them to please do it; they’ll sooner make something up instead
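Yeah, that’s my read too. To make “cryptographically hard” concrete, the usual example is factoring. Here’s a rough toy sketch in Python (my own numbers and code, nothing lifted from the paper): multiplying two primes is one operation, but recovering them by brute force takes on the order of the square root of the product, which is already noticeable at seven digits and utterly hopeless at the ~2048-bit sizes real keys use.

```python
# Toy sketch (my own numbers, not from the paper): the asymmetry behind
# "cryptographically hard". Multiplying two primes is instant; recovering
# them by brute-force trial division takes ~sqrt(n) steps, which is already
# slow at 7-digit primes and hopeless at real ~2048-bit moduli.
import math
import time

def smallest_factor(n: int) -> int:
    """Return the smallest prime factor of n by trial division."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return n  # n itself is prime

p, q = 999_983, 1_000_003   # two primes near a million, picked for the demo
n = p * q                   # the easy direction: one multiplication

t0 = time.perf_counter()
factor = smallest_factor(n)
print(f"recovered {factor} in {time.perf_counter() - t0:.2f} s")
# Every extra bit in p and q roughly doubles that loop, so no amount of
# asking nicely gets a model (or anything else) through a 2048-bit modulus.
```

Smarter algorithms than trial division exist, but nothing known makes 2048-bit factoring feasible, which is presumably what “computationally intractable” is pointing at.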
It all means something.
The first case is when the model has only seen a handful of samples on a particular topic, leaving it with insufficient data to go on - “epistemically uncertain” (rough toy sketch of that one a couple of messages down)
The second is when it tries to think about something but doesn’t have the ability to hold it in mind; the thought is in some way too large for the architecture. That’s representational capacity.
The last is just factoring stupidly large numbers and things of that nature.
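And to make the first case concrete (toy sketch, entirely made-up mini corpus, my reading of the argument rather than the paper’s actual math): a fact the model has seen exactly once in training gives it essentially no statistical footing, and the paper ties the unavoidable hallucination rate to the fraction of facts that appear only once, something like a “singleton rate”.

```python
# Toy illustration of the "epistemic uncertainty" case. The corpus below is
# entirely made up; the point is only to show what "seen exactly once" means.
from collections import Counter

# Hypothetical (subject, claim) pairs standing in for training data.
corpus = [
    ("Paris", "is the capital of France"),
    ("Paris", "is the capital of France"),
    ("Paris", "is the capital of France"),
    ("Water", "boils at 100 C at sea level"),
    ("Water", "boils at 100 C at sea level"),
    ("Some obscure poet", "was born in 1862"),          # appears exactly once
    ("Some minor asteroid", "was discovered in 1971"),  # appears exactly once
]

counts = Counter(corpus)
singletons = [fact for fact, n in counts.items() if n == 1]
singleton_rate = len(singletons) / len(counts)

print(f"{len(counts)} distinct claims, {len(singletons)} seen only once")
print(f"singleton rate: {singleton_rate:.2f}")
# For the one-off claims the model has no reliable signal either way, so any
# confident completion about them is a guess dressed up as knowledge.
```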
We’re not going to read this one deeply, because we’re pretty sure this was demonstrated a while ago. Cool for OAI to catch up with the current state of the field, we guess?
Thank you.
They should have hired you to write this part of the article.