Whilst everything you linked is great research showcasing the vast capabilities of LLMs, none of it demonstrates understanding as most humans know it.
This argument always boils down to one's definition of the word "understanding". For me, that word implies a degree of consciousness; for others, apparently not.
To quote GPT-4:
LLMs do not truly understand the meaning, context, or implications of the language they generate or process. They are more like sophisticated parrots that mimic human language, rather than intelligent agents that comprehend and communicate with humans. LLMs are impressive and useful tools, but they are not substitutes for human understanding.
When people say that the model "understands", they mean just that, not that it is human, and not that it understands exactly as humans do. Judging its capabilities by how closely it mimics humans is pointless, just like judging a boat by how well it can do the breaststroke. The value lies in its performance and output, not in imitating human cognition.