Robots.txt is, in theory, there so that web crawlers don't waste their time traversing a website inefficiently. It exists to help crawlers, not hinder them. There is a social contract being broken here, and in the long term it will have a negative impact on the web.
A social contract only holds when both sides stand to gain, so if one exists here, it's "you're going to crawl the site anyway, so if you do it in the optimized manner described in this file, we won't take steps to hinder you." It's not, and never has been, a way to block bots from crawling your site's content.
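To make concrete just how voluntary this is: compliance lives entirely in the crawler, which has to go out of its way to fetch and honor the file. A rough sketch using Python's stdlib robotparser (the site URL and user-agent name are placeholders):

    from urllib import robotparser

    # Compliance is entirely opt-in: the crawler fetches robots.txt
    # itself and decides whether to honor it; nothing on the server
    # side enforces any of this.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # placeholder site
    rp.read()  # fetch and parse the file

    # A polite crawler checks each URL before requesting it...
    if rp.can_fetch("MyCrawler", "https://example.com/search?q=x"):
        print("allowed by robots.txt")
    else:
        print("disallowed; a polite crawler skips this URL")

    # ...and respects any suggested crawl rate.
    delay = rp.crawl_delay("MyCrawler")  # None if no Crawl-delay line

Nothing stops a crawler from skipping that check entirely, which is exactly why robots.txt can't actually block anything.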
Yeah, I always found it surprising that everyone just agreed to let a text file on a website dictate how to behave. It's one of the most poorly thought-out yet consequential parts of the web, and it's been with us pretty much from the beginning.