I explicitly have my robots.txt set to block AI crawlers, but I don't know if anyone will actually observe the protocol. They should offer tooling where I can submit my sitemap.xml and find out whether my site has been parsed. Until they bother to address this, I can only assume their intent is hostile, and unless someone is serious about building a honeypot and releasing the tooling for the rest of us to deploy at large, my options are limited.
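For what it's worth, the opt-out itself is just a plain-text convention, and honoring it is entirely voluntary. A minimal sketch using Python's stdlib robotparser to sanity-check the rules (the user-agent tokens shown are the ones OpenAI, Common Crawl, and Anthropic publish for their crawlers; the URL is a placeholder):

```python
from urllib import robotparser

# A minimal robots.txt that disallows some widely published AI crawler
# user-agent tokens: GPTBot (OpenAI), CCBot (Common Crawl), ClaudeBot
# (Anthropic). Nothing enforces this -- compliance is voluntary.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# AI crawler tokens are blocked everywhere; everyone else is allowed.
print(rp.can_fetch("GPTBot", "https://example.com/post/1"))       # False
print(rp.can_fetch("Mozilla/5.0", "https://example.com/post/1"))  # True
```

The catch, of course, is that this only tells you what a *well-behaved* crawler would do; it gives you no evidence about what any crawler actually did.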
The funny (in a "wtf" sense, not a "haha" sense) thing is, individuals such as security researchers have been charged under digital-trespassing laws for things like accessing publicly available systems and changing a number in the URL to get at data they normally wouldn't have access to, even after doing responsible disclosure.
Meanwhile, companies completely ignore the standard mechanisms for saying "you are not allowed to scrape this data" and then use OUR content/data to build up THEIR datasets, AI training sets included.
That's not a "violation of a social contract" in my book; that's violating the site's terms of service and, essentially, copyright infringement.
Corporations are people, except when it comes to liability. Compare the consequences of stealing several thousand dollars from someone by fraud as an individual vs. stealing the same amount by fraud as an LLC.
Just thought of a nasty hack the browser makers (or hackers) could use to scrape unlisted sites: surreptitiously logging users' browsing history to build a crawl list.
individuals such as security researchers have been charged under digital-trespassing laws for things like accessing publicly available systems and changing a number in the URL to get at data they normally wouldn't have access to
I've scraped a couple of government sites, and it seems like they mostly don't care, aside from sometimes intentionally changing their API to make it harder (or easier; thanks, California!) to scrape.