Thing is, ChatGPT can easily answer this question correctly. So it's not an LLM issue; it's that Google has managed to combine its horrible search results with an LLM to give us the worst of both worlds.
I'd bet it's wholly dependent on their shitty results. They're basically passing it a prompt like "parse these 10 $cached_webpage_results[] to answer this $question", and since your prompt tends to heavily prime the answer, it's going to pull from the shitty included search results rather than its own training.
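The pattern being described is basically retrieval-augmented prompting. A minimal sketch of what that assembly might look like — the function name, prompt wording, and placeholder names are my guesses for illustration, not Google's actual pipeline:

```python
# Hypothetical sketch of stuffing cached search results into an LLM prompt.
# Everything here (names, wording) is assumed, not Google's real code.

def build_prompt(question, cached_webpage_results, n=10):
    # Only the top-N cached pages make it into the context window.
    context = "\n\n".join(
        f"[Result {i + 1}] {page}"
        for i, page in enumerate(cached_webpage_results[:n])
    )
    # Telling the model to answer *from these results* is exactly what
    # primes it to repeat them instead of drawing on its own training.
    return (
        "Using only the search results below, answer the question.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "How do I keep cheese on pizza?",
    ["Joke forum post about adding glue", "Actual cooking advice page"],
)
print(prompt)
```

If the top results are garbage, the model is explicitly instructed to build its answer out of garbage, so the LLM's own knowledge never gets a look-in.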