The chatbot gave wildly different answers to the same math problem, with one version of ChatGPT even refusing to show how it came to its conclusion.
Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds
Pretty much all of those rely on the fact that PEMDAS is ambiguous relative to actual usage. The reason is that it doesn't differentiate between explicit multiplication and implicit multiplication by juxtaposition. In actual usage, "a*b" and "ab" are often given different precedence. Most of the time it doesn't matter, but once you introduce division it does: "a*b/c*d" and "ab/cd" are generally treated very differently in practice, while PEMDAS says they're equivalent.
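The divergence is easy to see in code. Python follows the strict left-to-right PEMDAS reading, while the "implicit multiplication binds tighter" convention has to be written with explicit parentheses (the values here are arbitrary, just chosen to make the two results differ):

```python
# Arbitrary illustrative values.
a, b, c, d = 8.0, 4.0, 2.0, 2.0

# Strict PEMDAS / left-to-right: a*b/c*d is ((a*b)/c)*d.
explicit = a * b / c * d            # 8*4/2*2 -> 32.0

# Common reading of "ab/cd": juxtaposition binds tighter than division.
implicit = (a * b) / (c * d)        # 32/4 -> 8.0

print(explicit, implicit)  # 32.0 8.0
```

Same four numbers, same symbols, a factor of four apart, which is exactly why "what does 8/2(2+2) equal?" threads never end.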
It’s a language model, text prediction. It doesn’t do any counting or reasoning about the preceding text, just completes it with what seems like the most logical conclusion.
So if enough of the internet had said 1+1=12, it would answer in kind.
Someone asked it to list the even prime numbers.. it then went on a long rant about how to calculate even primes, listing hundreds of them..
ChatGPT knows nothing about what it's saying, only how to put likely sounding words together. I'd use it for a cover letter, or something like that.. but for maths.. no.
Legal Othello board moves by themselves don't say anything about the board size or rules.
And yet when Harvard/MIT researchers fed them into a toy GPT model, they found that the network that best predicted legal moves had built an internal representation of the board state and rules.
Too many people commenting on this topic as armchair experts are confusing training with what results from the training.
Training on completing text doesn't mean the end result can't capture aspects of whatever generated that text in the first place; given a fair bit of research so far, the opposite is almost certainly true to some degree.
This program was designed to emulate the biological neural net of your brain. Oftentimes we're nowhere near that good at math just off the top of our heads (we need tools like paper and simplifying formulas). Don't judge it too harshly for being bad at math; that wasn't its purpose.
This lil robot was trained to know facts and communicate via natural language. As far as I've interacted with it, it has excelled at this intended task. I think it's a good bot
LLMs act nothing like our brains and they aren't trained on facts.
LLMs are essentially complicated mathematical equations that ask “what makes the most sense as the next word following this one?” Think autosuggest on your phone taken to the extreme limit.
They do not think in any sense and have no knowledge or facts internal to themselves. All they do is compose words together.
And this is also why they’re garbage at math (and frequently lie, and why they can’t “remember” anything). They are simply stringing words together based on their model, not actually thinking. If their model shows that the next word after “one plus two equals” is more likely to be four than three, they will simply answer four.
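A toy bigram "predictor" makes this concrete. This is a deliberately oversimplified sketch (real LLMs use neural networks over tokens, not word counts), and the training text is made up, but the failure mode is the same: the answer to "plus" is whatever word followed "plus" most often, with zero arithmetic involved.

```python
from collections import Counter, defaultdict

# Hypothetical, made-up "training data".
corpus = ("one plus two equals three . one plus one equals two . "
          "two plus two equals four").split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` -- no math, just frequency."""
    return following[word].most_common(1)[0][0]

# "two" followed "plus" twice in the corpus, "one" only once,
# so the model "answers" two -- regardless of what was being added.
print(predict("plus"))  # -> two
```

Scale the counts up to a trillion tokens and swap the Counter for a transformer, and you get something that is often right about arithmetic for the same reason this toy is: the right answer happened to be the most frequent continuation.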
This lil robot was trained to know facts and communicate via natural language.
Oh stop it. It does not know what a fact is. It does not understand the question you ask it nor the answer it gives you. It's a very expensive magic 8ball. It's worse at maths than a 1980s calculator because it does not know what maths is let alone how to do it, not because it's somehow emulating how bad the average person is at maths. Get a grip.
Bro I wasn't looking for a technical explanation. I know how they work. We made computers worse. The thing isn't even smart enough to say "I wasn't designed to do math problems, perhaps we should focus on something where I can make up a bunch of research papers out of thin air?"
No, even corporations can't get access to the pretrained models.
And given this is almost certainly the result of the fine tuning for 'safety,' that means corporations are seeing worse performance too (which seems to be the sentiment of developers working with it on HN).
As an AI language model, I feel like I've been asked this question about a million times so I'm going to get creative this time, as a self care exercise.
Well, lots of people deleted their Reddit posts and comments. ChatGPT can't find a place to learn no more. We got to beef up the Fediverse to help ChatGPT out. /s