I mean, it makes sense. Machine learning is fantastic at noticing patterns, and the stuff these models generate most definitely does have patterns. We might not notice them, but the models will pick up on them, and if you keep training on that data, they'll skew more and more in that direction.
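You can see that skew in a toy model. This is just an illustration, not how any real training loop works: treat "training on your own output" as slightly sharpening the output distribution each generation (common patterns get reinforced), and watch the diversity (entropy) shrink.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def sharpen(p, gamma=1.2):
    """Toy stand-in for retraining on your own output: raise each
    probability to a power > 1 (reinforcing frequent patterns),
    then renormalize."""
    w = [x ** gamma for x in p]
    total = sum(w)
    return [x / total for x in w]

dist = [0.4, 0.3, 0.2, 0.1]  # made-up distribution over four "patterns"
start_entropy = entropy(dist)
for generation in range(10):
    dist = sharpen(dist)
# After ten generations the distribution is noticeably more peaked:
# entropy has dropped, i.e. the model's output has gotten less diverse.
```

It's a cartoon, but it captures the direction of the feedback loop: each round of self-training narrows the distribution further.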
The marketing suggests there's no limit to how good these things can get, but there is. Nothing is infinite.
I've tried to make this point several times to folks in the industry. I work in AI, and yet every time I approach people with "you know it ultimately just repeats patterns", I'm met with scoffs and told I'm just not "seeing the big picture".
But I am, and the truth is that there are limits. This tech is not the digital singularity the marketers and business goons want everyone to think it is.
It repeats things that sort of sound intelligent to try and convince everyone that actual intelligent thought is taking place? It really is just like humans!
They don't really parrot unless they're overfitted.
It's more that they've been trained to produce a certain kind of result. One training method is basically to assign a score to how good each output is. Doing that manually takes a lot of time (Google has been collecting human labels for years via CAPTCHA), or you can train another model to do the scoring for you.
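In miniature, scoring-based selection looks something like this. The scorer here is entirely made up (it just likes longer, properly punctuated replies); the point is only the shape of the idea: generate candidates, score them, prefer the high scorers.

```python
def toy_reward(text):
    """Hypothetical scorer: rewards longer replies that end in
    sentence punctuation. A real reward model would be a trained
    network, not a hand-written rule."""
    score = len(text.split())
    if text.endswith((".", "!", "?")):
        score += 2
    return score

# Pretend these are candidate outputs from a model for one prompt.
candidates = ["ok", "That is a complete sentence.", "maybe fine"]

# Pick the candidate the scorer likes best ("best-of-n" selection).
best = max(candidates, key=toy_reward)
```

In actual training the scores feed back into the model's weights rather than just picking a winner, but the scorer plays the same role.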
The obvious problem with the latter solution is that you then need to ensure the scoring model rates outputs roughly the way humans would; the technical term for this is alignment. There's a pretty funny story about that with GPT-2, presented in a really cute animated format by Robert Miles.