Cloudflare, the publicly traded cloud service provider, has launched a new, free tool to prevent bots from scraping websites hosted on its platform for data to train AI models.
Some AI vendors, including Google, OpenAI and Apple, allow website owners to block the bots they use for data scraping and model training by amending their site’s robots.txt, the text file that tells bots which pages they can access on a website. But, as Cloudflare points out in a post announcing its bot-combating tool, not all AI scrapers respect this.
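As a concrete illustration, the opt-out described above is only a few lines of robots.txt. The user-agent strings below (GPTBot for OpenAI, Google-Extended for Google, Applebot-Extended for Apple) are the tokens those vendors have documented for their AI-training crawlers; as the article notes, compliance is voluntary:

```
# Disallow documented AI-training crawlers site-wide
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /
```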
I know we hate Cloudflare, but that's a good feature addition.
Went to turn it on for the domain covering some of my stuff, and they also pointed me to their Radar site, which shows which bots are making the most noise and in what volume. Not the least bit shockingly, it's AI bots all the way down.
If nothing breaks I'm totally leaving this on and Amazon, Google, and OpenAI can all go screw themselves.
Can you educate me on the negatives of Cloudflare?
My company is on Akamai, who has a pretty solid combined offering of WAF, DNS, and CDN, and yet I still feel like their platform is antiquated and well overdue for a refresh.
Thinking back to log4j, it was Cloudflare who had the automatic protections in place well ahead of Akamai, who we had to ask for custom filters. Cloudflare also puts out many articles on Internet events and increases adoption of emerging best practices, sometimes through heavy shaming.
Cloudflare's free CDN offering is a MiTM (you use their certificates ONLY to be able to go through their network). Adding to this, they control a lot of Internet infrastructure (comparable to Microsoft and Google). I hate all of these companies and specifically use Quad9 until I get my own DNS running. It probably doesn't matter to the end-user, but I'm happy to see a technical crowd on Lemmy that shares my ideals about big tech.
I'm not opposed to them, but a lot of people on Lemmy have pretty strong opinions, primarily around the centralization of traffic and the potential for MITMing data.
I don't think they're wrong, because the centralization has given Cloudflare a shocking amount of power over who sees what and how: they will, for example, put you in captcha hell if you're using certain browsers, connecting from certain networks, or using Tor. I don't ever run into those issues, but they clearly happen often enough that a quick search turns up story after story from people who've run into this mess, and it can be annoying and painful to dig out of when it happens.
And, due to how their service works and the way the certificates are handled, they are essentially MiTMing your traffic. Depending on how exactly you've configured it, Cloudflare terminates TLS from the client and opens a second TLS connection to your origin server, re-encrypting in the middle; in that setup, Cloudflare can see all your data in plaintext.
I've met their CEO and VP of Safety and worked extensively with them in a previous job, and I don't actually believe they're doing anything untoward, but the fact is that, if they so desired, they absolutely could.
I use their stuff on anything I set up for public access, either via an Argo tunnel or their more traditional CDN stuff, but I can understand why other people concerned about user blocking and privacy (which are less of a Venn diagram of users impacted, and more of a single circle: the privacy people are usually using browsers, addons, and VPN connections that are directly the cause of the block) wouldn't be Cloudflare fans.
Now taking bets on how long it will be before Cloudflare announces that they're selling AI training datasets based off of the content they're managing...
Would be rather short-sighted of them. They rely on the free tier of their services for upselling and word of mouth. People are already wary of the fact CF can snoop on what's supposed to be private connections, but so far they've only used that access for good.
I’m not much of a programmer and I don’t host any public sites, but how feasible would it be to build an equivalent of Nightshade but for LLMs that site operators could run?
I’m thinking strategies akin to embedding loads of unrendered links to pages full of junk text. Possibly have the junk text generated by LLMs and worsened via creative scripting.
It would certainly cost more bandwidth but might also reveal more bad actors. Are modern scrapers sophisticated enough to not be fooled into pulling in that sort of junk data? Are there any existing projects doing this sort of thing?
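The junk-text idea above doesn't need an LLM at all; even a first-order Markov chain over the site's own text produces statistically plausible garbage, and a deliberate corruption step "worsens" it further. This is a minimal sketch, not an implementation from any existing project; the function name and corruption rate are illustrative:

```python
import random

def markov_junk(source_text, n_words=40, seed=0):
    """Generate plausible-looking junk from source text with a
    first-order Markov chain, deliberately degraded by randomly
    swapping in unrelated words from the source vocabulary."""
    rng = random.Random(seed)
    words = source_text.split()
    # Build the chain: each word maps to the words observed after it.
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)

    word = rng.choice(words)
    out = [word]
    for _ in range(n_words - 1):
        successors = chain.get(word)
        # Follow the chain when possible; otherwise restart at random.
        word = rng.choice(successors) if successors else rng.choice(words)
        # "Worsening" step: occasionally corrupt with a random word.
        if rng.random() < 0.1:
            word = rng.choice(words)
        out.append(word)
    return " ".join(out)
```

Since everything is derived from the pages the scraper was going to fetch anyway, the junk is hard to distinguish statistically from real site content without actually reading it.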
To get more directly to the point, you could use those unrendered dummy links to ban whatever IPs follow them.
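The trap-link-plus-ban idea above can be sketched in a few lines. This is a toy in-memory version, not production code; the class name, trap path, and hiding technique are all illustrative assumptions:

```python
# Sketch of a honeypot: serve pages containing a link that no
# human-facing renderer shows, and ban any IP that fetches it.
TRAP_PATH = "/do-not-crawl"  # illustrative; any unlinked-for-humans path works

class Honeypot:
    def __init__(self, trap_path=TRAP_PATH):
        self.trap_path = trap_path
        self.banned = set()

    def page_html(self):
        # The trap link is present in the markup but hidden from
        # rendered view; a naive scraper following every href hits it.
        return (
            "<html><body><p>Normal content.</p>"
            f'<a href="{self.trap_path}" style="display:none">.</a>'
            "</body></html>"
        )

    def handle(self, ip, path):
        """Return (status, body) for a request, banning trap visitors."""
        if ip in self.banned:
            return 403, "banned"
        if path == self.trap_path:
            self.banned.add(ip)
            return 403, "banned"
        return 200, self.page_html()
```

In a real deployment the ban set would feed a firewall or WAF rule rather than live in application memory, and you'd want to exclude the trap path in robots.txt so well-behaved crawlers aren't caught.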
With the vast amounts of training data and how curated datasets are becoming (Llama and Claude are going that direction), it's infeasible to actually poison a large model to this degree.