Some data formats are easy for humans to read but difficult for computers to efficiently parse. Others, like packed binary data, are dead simple for computers to parse but borderline impossible for a human to read.
XML bucks this trend and bravely proves that data formats do not have to be one or the other by somehow managing to be bad at both.
Don't drink the JSON Kool-Aid. XML is fine. Better, in many cases, because XML files actually support comments.
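The comment point is easy to demonstrate with Python's standard library: an XML parser skips comments silently, while strict JSON (per RFC 8259) has no comment syntax at all, so the same annotation is a parse error. A minimal sketch:

```python
import json
import xml.etree.ElementTree as ET

# XML tolerates inline comments; the parser simply ignores them.
xml_doc = "<config><!-- retry count, tuned for flaky network --><retries>3</retries></config>"
print(ET.fromstring(xml_doc).find("retries").text)  # "3"

# Standard JSON has no comment syntax; the equivalent is a parse error.
json_doc = '{"retries": 3 /* tuned for flaky network */}'
try:
    json.loads(json_doc)
except json.JSONDecodeError as err:
    print("rejected:", err)
```

(Some JSON supersets like JSON5 or JSONC do allow comments, but then you're no longer writing plain JSON.)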
In the modern programming world, XML is just JSON before JSON was cool. There was a whole wave of XML hype for a few years, which is why older programming tools are full of XML.
It's funny but sad to see the JSON ecosystem scramble to invent all of the features that XML already had. Even ActivityPub runs on "external entities but stored as general purpose strings", and don't get me started on the incompatible, incomplete standards for describing a JSON schema.
It's not just XML either. Now there's Cap'n Proto and Protobuf and BSON, which are all just ASN.1 but "cool".
It’s not a waste of time… it’s a waste of space. But it does let you “enforce” some schema. Very few people actually use it that way, though, so as a data store plain JSON works better.
Or… we could go back to old school records where you store structs with certain defined lengths in a file.
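For anyone who never used them: fixed-length records mean every entry in the file occupies exactly the same number of bytes, so record N lives at offset N × record_size and you can seek straight to it with no parsing. A minimal sketch with Python's `struct` module, using a made-up layout (16-byte name, 32-bit age, 64-bit balance):

```python
import struct

# Hypothetical record layout: little-endian, 16-byte name field,
# unsigned 32-bit age, 64-bit float balance -> 28 bytes per record.
RECORD = struct.Struct("<16sId")

def pack_record(name, age, balance):
    # Names are truncated to 16 bytes and NUL-padded, old-school style.
    raw_name = name.encode("utf-8")[:16].ljust(16, b"\0")
    return RECORD.pack(raw_name, age, balance)

def unpack_record(raw):
    name, age, balance = RECORD.unpack(raw)
    return name.rstrip(b"\0").decode("utf-8"), age, balance

raw = pack_record("alice", 30, 12.5)
assert len(raw) == RECORD.size  # every record is exactly 28 bytes
print(unpack_record(raw))
```

Random access is trivial (`f.seek(n * RECORD.size)`), but you pay for it: no variable-length fields, and any change to the layout breaks every existing file.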
You know what? XML isn’t looking so bad now.
If you want to break the AI, ask it instead what regex you should use to parse HTML.
XML has its strengths as a markup format. My own formatted-text format, ETML, is based on XML, since I could recycle old HTML conventions (it still has stylesheets as an option), and I can store multiple text blocks in a single XML file. That's not something my main choice of human-readable format, SDL, excels at, and SDL has its own issues (I'm writing my own extensions/refinements for it under the name XDL, with hexadecimal numbers, ISO dates, etc.).