OpenAI’s big pitch for its new o1 LLM, a.k.a. “Strawberry,” is that it goes through an actual reasoning process to answer you. The computer is alive! The paperclip apocalypse is imminent…
I don't think it's alive, I think it's talking to itself. They're making a Chinese whisper machine, and it will remain one until it has embodiment, subjective and changing goals, and a will of its own.
That's part of intelligence, but it's still a reverse-engineering take on things.
In actuality, we have intelligence because our threat-detection and social protection/survival goals became abstract enough for self-awareness to occur.
I figured he was talking about Searle's Chinese room thought experiment. Searle sucks though, so that's probably also racist (in addition to being stupid).
That’s OpenAI admitting that o1’s “chain of thought” is faked after the fact. The “chain of thought” does not show any internal processes of the LLM — o1 just returns something that looks a bit like a logical chain of reasoning.
I think it's fake "reasoning," but I don't know if (all of) OpenAI thinks that. They probably think hiding this data prevents CoT training data from being extracted. I just don't know how deep the stupid runs.