Hallucinations (confabulations) in large language model systems can be tackled by measuring uncertainty about the meanings of generated responses, rather than the exact wording, to improve question-answering accuracy.
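The idea described above (often called semantic entropy) can be sketched roughly as: sample several answers to the same question, cluster the ones that share a meaning, and compute entropy over the cluster frequencies. This is a minimal sketch under assumptions of mine: the real approach uses a natural-language-inference model to judge equivalence, while `naive_same_meaning` below is only a toy stand-in.

```python
import math

def semantic_entropy(samples, same_meaning):
    """Cluster sampled answers by meaning, then return the entropy
    of the cluster frequencies (a sketch of semantic entropy)."""
    clusters = []  # each cluster is a list of mutually equivalent answers
    for s in samples:
        for c in clusters:
            if same_meaning(s, c[0]):
                c.append(s)
                break
        else:
            clusters.append([s])
    n = len(samples)
    probs = [len(c) / n for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Toy equivalence check standing in for an NLI model (my assumption,
# not the method from the paper): normalize and compare strings.
def naive_same_meaning(a, b):
    return a.strip().lower().rstrip(".") == b.strip().lower().rstrip(".")

low = semantic_entropy(["Paris", "paris.", "Paris"], naive_same_meaning)   # agreement -> low entropy
high = semantic_entropy(["Paris", "Rome", "Berlin"], naive_same_meaning)   # disagreement -> high entropy
```

High entropy over meanings then flags answers the model is likely confabulating, without requiring the model to know the truth.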
If an algorithm could determine truth simply by "measuring uncertainty," we could wrap up every unanswered question of life, the universe and everything.
There is no solution to the "hallucination" problem that doesn't involve human-level logical reasoning over a repository of established facts. LLMs do not have this.
LLMs were designed to generate coherent statements, not necessarily correct ones, and they cannot consistently spot logical fallacies in their own output. Humans can do this (some better than others), so computers should be capable of it too. The technology is not there yet, but I'm glad people are working on it.