Some of those DALL-E renders looked pretty different from the original.
You always have to keep in mind that the result is heavily biased by the dataset used to train the AI, so it may actually look nothing like the real person.
I see several people trying to generate images from historical pictures/Wikipedia articles, and while it is technically interesting, I keep thinking: what is the point? These pictures are informative: they give you hints about how these people were depicted and dressed at the time. Making them look "standard-good" kind of defeats the purpose...
I built this for myself because I kept feeding 18th-century etchings or ruined frescoes into ChatGPT to see what they looked like in real life. I've always been curious to see marble statues in the flesh, too.
My problem is that reconstructions by architects are produced differently, using more information, than Hollywood-style DALL-E outputs. The first is meant to be informative about the size and appearance of the buildings, the urban layout, etc. The second is just designed to look good.