LLMs can’t reason — they just crib reasoning-like steps from their training data
Researchers at Apple have come out with a new paper showing that large language models can’t reason — they’re just pattern-matching machines. [arXiv, PDF] This shouldn’t be news to anyone here. We …
arXiv paper link referenced in the article: https://arxiv.org/pdf/2410.05229