really stretching the meaning of the word "release" past breaking point if it's only going to be available to companies friendly with OpenAI
Orion has been teased by an OpenAI executive as potentially up to 100 times more powerful than GPT-4; it’s separate from the o1 reasoning model OpenAI released in September. The company’s goal is to combine its LLMs over time to create an even more capable model that could eventually be called artificial general intelligence, or AGI.
so I’m calling it now, this absolute horseshit’s only purpose is desperate critihype. as with previous rounds of this exact same thing, it’ll only exist to give AI influencers a way to feel superior in conversation and grift more research funds. oh of course Strawberry fucks up that prompt but look, my advance access to Orion does so well I’m sure you’ll agree with me it’s AGI! no you can’t prompt it yourself or know how many times I ran the prompt why would I let you do that
That timing lines up with a cryptic post on X by OpenAI CEO Sam Altman, in which he said he was “excited for the winter constellations to rise soon.” If you ask ChatGPT o1-preview what Altman’s post is hiding, it will tell you that he’s hinting at the word Orion, which is the winter constellation that’s most visible in the night sky from November to February (but it also hallucinates that you can rearrange the letters to spell “ORION”).
there’s something incredibly embarrassing about the fact that Sammy announced the name like a lazy ARG based on a GPT response, which GPT proceeded to absolutely fuck up when asked about. a lot like Strawberry really — there’s so much Binance energy in naming the new version of your product after the stupid shit the last version fucked up, especially if the new version doesn’t fix the problem
I'd love to get an interview with saltman and ask him to explain how they measure the "power" of those things. What's the methodology? Do you have charts? Or does it just somehow consume 100x more power, as in watts?
I'm already sick and tired of the "hallucinate" euphemism.
It isn't a cute widdle hallucination; it's the damn product being wrong. Dangerously, stupidly, obviously wrong.
In a world that hadn't already gone well to shit, this would be considered an unacceptable error and a demonstration that the product isn't ready.
Now I suddenly find myself living in this accelerated idiocracy where Wall Street has forced us - as a fucking society - to live with a Ready, Fire, Aim mentality in business, especially tech.
I think it's weird that "hallucination" would be considered a cute euphemism. Would you trust something that's perpetually tripping balls and confidently announcing whatever comes to it in a dream? To me that sounds worse than merely being wrong.
I think the problem is that it portrays them as weird exceptions, possibly even echoes from some kind of ghost in the machine, instead of as a statistical inevitability when you're asking for the next predicted token rather than meaningfully examining a model of reality.
"Hallucination" applies only to the times when the output is obviously bad, and hides the fact that it's doing exactly the same thing when it incidentally produces a true statement.
I get the gist, but also it's kinda hard to come up with a better alternative. A simple "being wrong" doesn't exactly communicate it either. I don't think "hallucination" is a perfect word for the phenomenon of "a statistically probable sequence of language tokens forming a factually incorrect claim" by any means, but in terms of the available options I find it pretty good.
I don't think the issue here is the word, it's just that a lot of people think the machines are smart when they're not. Not anthropomorphizing the machines is a battle that was lost no later than when computer storage was named "memory", so I don't think that's really the issue here either.
As a side note, I've seen cases of people (admittedly, mostly critics of AI in the first place) call anything produced by an LLM a hallucination regardless of truthfulness.