
ChatGPT can get worse over time, Stanford study finds | Fortune

By June, “for reasons that are not clear,” ChatGPT stopped showing its step-by-step reasoning.

25 comments
  • Potentially hot take: LLMs are reaching a dead end before they could even become remotely useful. The very approach boils down to brute force: you force-feed the model more data until the problem goes away... and that works until it doesn't, and in this case it's actually breaking things.

    Based on the output of those models, it's blatantly obvious that they don't use the data well at all; the whole thing is a glorified e-parrot instead of machine learning. And yet, as the article shows, it's almost impossible to say why, because the whole thing is a black box.

    • Based on the output of those models, it’s blatantly obvious that they don’t use the data well at all; the whole thing is a glorified e-parrot instead of machine learning

      I’m curious to understand what you meant by this—specifically about not using the data well, and being ‘a glorified e-parrot instead of machine learning’. Would you not count the techniques being used in LLMs as machine learning?

      • A parrot is rather good at repeating human words. Some can even sing whole songs. But even when you compare exceptional parrots with young and typical human kids, it's clear that parrots have a really hard time associating words with concepts; or, in other words, learning instead of just memorising.

        And LLMs behave like especially dumb electronic parrots: they're good at repeating human utterances, even grabbing chunks of older utterances to combine into new ones, but they show signs that they do not associate words with concepts.

        Here's an example. If we asked a cooperative human "what's the difference in behaviour between an orange and a potato?", what would the person say? Here are some options:

        • "...what???"
        • "what the hell do you mean by 'behaviour'?"
        • "well, if we're going to interpret 'behaviour' as [insert weird definition], then..."

        Why is that? Because humans associate that word with specific concepts, and they know that those concepts don't apply to non-agent entities like oranges and potatoes, except maybe metaphorically. They learned that word.

        Here, however, is what Google Bert said when I asked the same question (originally in Portuguese; I'm translating it here, but feel free to redo it in any other language):

        Based on the above, what concepts does Bert associate with the words "behaviour", "roll", "slid", "active", and "passive"? None. It did not learn the meaning of those words, or any others; it doesn't associate concepts with words, it associates words with more words. That's what causes those "hallucinations" (IMO a really poor way of framing deeper issues as if they were just surface oddities).

        And that's just an example. OP is another example of that, with ChatGPT - now with maths, instead of just language. Can we really claim that it learned maths if further data makes it "unlearn" it?

  • This has already been disproven: the method the researchers used to test how well the model was doing was flawed to begin with. Here is a pretty good Twitter thread showing why: https://twitter.com/svpino/status/1682051132212781056

    TL;DR: They only gave it prime numbers and asked whether they were prime. They didn't intersperse prime and non-prime numbers to really test its ability to tell them apart. It turns out that if you do, both the early and current versions of GPT-4 are equally bad at determining primality, with effectively no change between the versions (a rough sketch of that balanced test is below).
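
    For what it's worth, here's a minimal sketch of the kind of balanced test the thread argues for: mix primes and composites of similar size, so a model that just answers "prime" every time can't look good by accident. This is plain Python; ask_model is a hypothetical callback standing in for whatever API call you'd actually make.

        import random

        def is_prime(n):
            # Trial division; fine for the five-digit numbers used here.
            if n < 2:
                return False
            if n % 2 == 0:
                return n == 2
            f = 3
            while f * f <= n:
                if n % f == 0:
                    return False
                f += 2
            return True

        def balanced_primality_cases(n_pairs=50, lo=10_000, hi=100_000, seed=0):
            # Equal numbers of primes and composites, shuffled together.
            rng = random.Random(seed)
            cases = []
            while len(cases) < 2 * n_pairs:
                n = rng.randrange(lo, hi)
                want_prime = (len(cases) % 2 == 0)  # alternate prime / composite
                if is_prime(n) == want_prime:
                    cases.append((n, want_prime))
            rng.shuffle(cases)
            return cases

        def accuracy(ask_model, cases):
            # ask_model(n) should return True when the model answers "prime".
            return sum(ask_model(n) == truth for n, truth in cases) / len(cases)

        # A model that always answers "yes, it's prime" looks perfect on an
        # all-prime test set, but lands near 50% on this balanced one.
        print(accuracy(lambda n: True, balanced_primality_cases()))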

  • I don't get it. I thought these models were "locked". Shouldn't the same input produce near-identical output? I know the algorithm has some fuzzing to help produce variation. But ultimately it shouldn't degrade, right?

    • The big pre-training run is pretty much fixed. The fine-tuning is continuously being tweaked and, as shown here, can have dramatic effects on the results.

      The model itself just does what it does. It is, in effect, an 'internet completer'. But if you don't want it to just happily complete what it found on the internet (homophobia, racism, and all), you have to put extra layers in to avoid that. And those layers are somewhat hand-crafted, sometimes conflicting, and therefore unlikely to give everyone what they consider to be excellent results. (A toy sketch of how the sampling "fuzzing" and weight tweaks interact is below.)
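
      Toy illustration of both points (nothing like the real implementation, just made-up scores for one prompt): the run-to-run "fuzzing" is temperature sampling, which can be turned off entirely, but even fully deterministic decoding changes the moment the weights behind the scores are tweaked.

          import math, random

          def pick_token(logits, temperature, rng):
              # Choose a token id from raw scores; temperature 0 means plain argmax.
              if temperature == 0:
                  return max(range(len(logits)), key=lambda i: logits[i])
              weights = [math.exp(score / temperature) for score in logits]
              return rng.choices(range(len(logits)), weights=weights)[0]

          rng = random.Random(42)
          march_scores = [2.0, 1.5, 0.1]  # made-up scores from the "March" weights
          june_scores = [1.4, 2.2, 0.1]   # same prompt after the weights were tweaked

          # Temperature 0: identical input gives identical output, run after run...
          print(pick_token(march_scores, 0, rng), pick_token(march_scores, 0, rng))
          # ...but the answer still changes once the deployed weights change:
          print(pick_token(june_scores, 0, rng))
          # Temperature above 0 is the run-to-run "fuzzing" mentioned above:
          print(pick_token(march_scores, 1.0, rng), pick_token(march_scores, 1.0, rng))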

      • Ok, but regardless, they can just turn back the clock to when it performed better, right? Use the parameters that were set two months ago? Or is it impossible to roll that back?
