Data Hoarder @selfhosted.forum stayjuicecom @alien.top
BOT

Could someone please give me a walk through on how to crawl an entire web domain and scrape the images only?

I'm on ZorinOS (Ubuntu-based). I've tried HTTrack, but it throws terminal errors on launch (something about slimjet). I've tried getting ChatGPT to write Python scripts for me. I've tried WFDownloaderApp, but its GUI glitches horribly.
I've tried "DownloadThemAll!", but it's just a browser extension: it only downloads from a single webpage, and I see no way to enable crawling or filters.

Please help, thanks.
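Since the GUI tools kept failing, a plain Python script may be the most robust route on Zorin/Ubuntu. Below is a minimal stdlib-only sketch (no pip packages needed): it does a breadth-first crawl of pages on one domain, collects `<img src>` URLs, and saves them to a folder. The names `crawl`, `out_dir`, and `max_pages` are made up for this example; a real run may also need a User-Agent header, rate limiting, and respect for robots.txt.

```python
import os
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen, urlretrieve

class LinkImageParser(HTMLParser):
    """Collect hyperlink hrefs and image srcs from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.images = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])
        elif tag == "img" and attrs.get("src"):
            self.images.append(attrs["src"])

def crawl(start_url, out_dir="images", max_pages=50):
    """Breadth-first crawl within one domain, saving every <img> found."""
    domain = urlparse(start_url).netloc
    os.makedirs(out_dir, exist_ok=True)
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip unreachable or non-text pages
        parser = LinkImageParser()
        parser.feed(html)
        for src in parser.images:
            img_url = urljoin(url, src)  # resolve relative src
            name = os.path.basename(urlparse(img_url).path)
            if not name:
                continue
            try:
                urlretrieve(img_url, os.path.join(out_dir, name))
            except Exception:
                pass  # ignore broken image links
        for href in parser.links:
            link = urljoin(url, href).split("#")[0]  # drop fragments
            if urlparse(link).netloc == domain:  # stay on one domain
                queue.append(link)

# Usage (hypothetical URL): crawl("https://example.com", out_dir="images")
```

For comparison, `wget -r -np -A jpg,jpeg,png,gif <url>` does roughly the same job from the terminal, which sidesteps the GUI glitches entirely.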
