Do you believe anything coming out of OpenAI when it's abundantly clear that they'll say anything to protect their bottom line?
OpenAI is not the only company harvesting data and selling it to interested parties.
There is no legal requirement to adhere to the standard and I'd be shocked if any court in the USA could understand the issue, let alone enforce a voluntary standard.
The amount of automated data collection online is staggering. On my own services it accounts for 50% of the hits. Good luck with policing that.
I used to sit and monitor my server access logs. You can tell by the access patterns. Many of the well-behaved bots announce themselves in their user agents, so you can see when they're active. I could watch them crawl the main body of my website but never touch a subdomain that is clearly linked from the homepage yet disallowed in my robots.txt.
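For anyone who hasn't written one: the rule that produces that behaviour is just a `Disallow` line. This is a minimal sketch (the path is made up, and note each hostname/subdomain serves its own robots.txt):

```text
# robots.txt at the site root
User-agent: *
Disallow: /members/
```

A compliant crawler fetches this file first and skips anything under `/members/`, even if it's linked from every page; a non-compliant one just ignores it.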
On the other hand, spammy bots that are trying to attack you often have a different access pattern: they probe your website for common configurations of popular CMSes like WordPress. They don't tend to crawl.
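You can spot that probing pattern mechanically. A rough sketch in Python (the log lines and probe paths here are illustrative, not from my actual logs):

```python
import re

# Hypothetical access-log lines; real ones would come from
# something like /var/log/nginx/access.log
lines = [
    '203.0.113.5 - - [10/May/2024:12:00:01] "GET /wp-login.php HTTP/1.1" 404',
    '203.0.113.5 - - [10/May/2024:12:00:02] "GET /xmlrpc.php HTTP/1.1" 404',
    '198.51.100.7 - - [10/May/2024:12:00:03] "GET /about HTTP/1.1" 200',
]

# Paths that probing bots commonly request on WordPress-style sites
PROBE_PATHS = ("/wp-login.php", "/wp-admin", "/xmlrpc.php", "/administrator")

def looks_like_probe(line: str) -> bool:
    """True if the request path matches a known CMS probe target."""
    m = re.search(r'"(?:GET|POST) (\S+)', line)
    return bool(m) and m.group(1).startswith(PROBE_PATHS)

probes = [ln for ln in lines if looks_like_probe(ln)]
print(len(probes))  # -> 2
```

Crawlers walk your link graph; probes hammer a fixed list of paths regardless of whether they exist, so the 404s pile up fast.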
Google also provides a tool to test robots.txt, for example.
It's not about relying on it, it's about changing the behaviour of the web crawlers that respect it, which, as someone who has adminned a couple of scarily popular sites over the years, is a surprisingly high percentage of them.
If someone wants to get around it, they obviously can, but this is true of basically all protective measures ever. Doesn't make them pointless.