Reasoning failures highlighted by Apple research on LLMs
A new paper from Apple's artificial intelligence scientists has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills.
I tried it myself (changing the names and the values) but lost interest after three attempts that all produced the right answer:
https://chatgpt.com/share/670af65d-da08-800f-8ad4-c67782ee5477
https://chatgpt.com/share/670af672-45dc-800f-ac91-cc2811fa89c7
https://chatgpt.com/share/6709e80b-e5a8-800f-90d0-1af3418675ef
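The kind of check this commenter describes can be scripted: take a GSM8K-style word problem as a template and re-sample the names and numbers on each run, which mirrors the templating idea behind Apple's GSM-Symbolic benchmark. A minimal sketch; the specific template, names, and value ranges below are invented for illustration:

```python
import random

# A GSM8K-style problem as a template; the name and both values are
# placeholders, so each draw yields a fresh variant of the same problem.
TEMPLATE = "{name} picks {a} apples on Monday and {b} apples on Tuesday. How many apples in total?"

def make_variant(rng):
    """Return (question_text, ground_truth_answer) for one random variant."""
    name = rng.choice(["Mary", "Oliver", "Sophie"])
    a, b = rng.randint(10, 90), rng.randint(10, 90)
    question = TEMPLATE.format(name=name, a=a, b=b)
    return question, a + b  # ground truth is known by construction

rng = random.Random(0)
question, answer = make_variant(rng)
print(question)
print("expected answer:", answer)
```

With the ground truth known by construction, you can feed each variant to a model and score the reply automatically instead of eyeballing chat transcripts.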
Errors from your links, like this:
"Unable to load conversation 670a...6ed2c3"

Sorry! I've updated my links now.

"... So, Mary has 190 kiwifruit."
nice 😋🥝
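The 190 figure suggests the commenter ran the kiwi problem widely quoted from the paper's GSM-NoOp set: 44 kiwis picked on Friday, 58 on Saturday, and double Friday's count on Sunday, with a distractor clause that five were smaller than average. A quick check of the arithmetic, assuming those are the values used:

```python
# Kiwi problem as quoted in coverage of the paper (values assumed):
# 44 on Friday, 58 on Saturday, double Friday's count on Sunday.
friday, saturday = 44, 58
sunday = 2 * friday            # 88
total = friday + saturday + sunday
print(total)                   # 190

# The failure mode the paper reports is subtracting the irrelevant
# "five smaller than average" kiwis from the total:
wrong = total - 5              # 185
print(wrong)
```

Getting 190 means the model ignored the distractor clause, which is exactly what the paper tests for.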
I wouldn't be surprised if LLMs got some special training input to handle the specific examples from this paper, or ones similar enough.
This is just improving LLMs, but with more steps.