AI, trained almost exclusively on gigantic databases of human-authored text such as Reddit and Quora, with little regard or filtering for quality given the sheer scope of the data, mirrors the biases within humanity? Shocking.
If you forced them to say the names of the doctors, I'm pretty sure "Dr. Patel" would be overrepresented (or represented just as often as in the training data).
LLMs are not replacements for human beings. If you don't want the most bland output, you either train your own model on a dataset that fits your needs, specifically tell the LLM what you want in the prompt (which it will sometimes just ignore), or pay a human to draw it.