As I am quite new to all this, maybe a very noob question. I prompted Bing, Bard and ChatGPT (3.5) with the same question. Bing straight up answered a different question, but at least delivered sources I could check. Bard and ChatGPT answered my question but invented all of their sources, just made-up author names and titles. Bard even delivered links to said scientific articles, but when you followed a link, the article behind it was something completely different.
How can I trust the delivered results when the sources are made up?
And also: why? Why didn't it just say, for example, that there are no meta-analyses?
LLMs like the AIs you mentioned are essentially just really good at predicting the next word. For example, given an input like "My dog likes", the model might add the word "treats" to the end. They are so good at predicting the next word that they can write paragraphs that sound entirely human.
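If you want to see what "predicting the next word" literally looks like, here's a minimal sketch using the Hugging Face transformers library and the small public GPT-2 model (just an illustration of the mechanism, not what Bing/Bard/ChatGPT actually run):

```python
# Minimal sketch: ask GPT-2 which words it thinks should come next.
# Assumes the `transformers` and `torch` packages are installed.
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "My dog likes"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # a score for every vocabulary token at every position

next_token_scores = logits[0, -1]     # scores for whatever token would come next
top = torch.topk(next_token_scores, 5)

# Print the five continuations the model finds most likely.
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```

Note that nowhere in there is a notion of "true": the model only scores how likely a continuation is. A plausible-looking but nonexistent citation can score just as well as a real one.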
So when they give you strange links and made-up names, it's probably because that stuff simply sounded plausible to the model.
You can only trust results if you verify them yourself.
The thing is, people tend to take the results they're given as facts. And if they have no means to check the sources (or don't bother or care), this whole AI thing might become a real disinformation circus.