When I'm writing web scrapers I mostly just pivot between selenium (because the website is too "fancy" and definitely needs a browser) and plain requests calls (both in conjunction with bs4).
But when reading about scrapers, scrapy is often the first Python package mentioned. What am I missing out on by not using it?
The huge feature of scrapy is its pipelining system: you scrape a page, then pass the extracted items through a filtering stage, then a deduplication stage, then into the DB, and so on.
That's hugely useful when you're scraping and extracting structured data; if you're only pulling down raw pages, I reckon it's less useful. Rough sketch of what a pipeline looks like below.
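Here's a minimal sketch of the idea, assuming quotes.toscrape.com as the target (that's the demo site scrapy's own tutorial scrapes); the spider and pipeline names are placeholders I made up:

```python
import scrapy
from scrapy.exceptions import DropItem


class QuoteSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Yield one item per quote; scrapy hands each yielded item
        # to the enabled item pipelines, in order of priority.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }


class DedupePipeline:
    # One stage of the pipeline: drop items we've already seen,
    # pass everything else on to the next stage.
    def __init__(self):
        self.seen = set()

    def process_item(self, item, spider):
        if item["text"] in self.seen:
            raise DropItem(f"duplicate quote: {item['text']!r}")
        self.seen.add(item["text"])
        return item  # whatever you return goes to the next stage
```

Pipelines get wired up in the project's settings.py; the module path here is a placeholder, and the number is the stage's priority (lower runs first):

```python
ITEM_PIPELINES = {"myproject.pipelines.DedupePipeline": 300}
```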