So if the LLM doesn't feel like doing its research properly, then questions won't be accurately answered? And the inbuilt bias of the AI will never be challenged, because all of the references it chose to include will check out? Its blind spots becoming our blind spots? Its idea of a criminal becoming our idea of a criminal?