Apple study exposes deep cracks in LLMs’ “reasoning” capabilities
Irrelevant red herrings lead to “catastrophic” failure of logical inference.
I feel like a draft of this landed on Tim's desk a few weeks ago, which would explain why they suddenly pulled back on OpenAI funding.
People on the superfund birdsite are already saying Apple is missing out on the next revolution.
"Superfund birdsite" I am shamelessly going to steal from you.

Please, be my guest.