They're already ignoring robots.txt, so I'm not sure why anyone would think they won't just ignore this too. All they have to do is get a new IP and change their user agent.
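To illustrate how trivial that is, here's a minimal sketch in Python (the UA string, URL, and proxy address are placeholders, not anyone's real setup):

    import requests

    # Claim to be an ordinary desktop browser; the string is arbitrary.
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
    }

    # Route through a different exit IP (203.0.113.x is a documentation range).
    proxies = {
        "http": "http://203.0.113.10:8080",
        "https": "http://203.0.113.10:8080",
    }

    r = requests.get("https://example.com/some-page",
                     headers=headers, proxies=proxies, timeout=10)
    print(r.status_code)

Two lines of config and the scraper looks like a brand-new visitor to anything keyed on IP or user agent.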
As someone who uses invidious daily, I've always been of the belief that if you don't want something scraped, maybe don't upload it to a public web page/server.
Imagine a company that sells a lot of products online. Now imagine a scraping bot arriving at peak sales hours and hitting every product listing and every individual product page on that site, one request after another. Now realise that some genuine users will have a worse buying experience because of that load.
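The bot doesn't need to be sophisticated to cause that. Something like the following (hypothetical store and endpoints, purely illustrative) is enough:

    import requests

    BASE = "https://shop.example.com"  # hypothetical storefront

    # Walk every listing page, then fetch every product on it, with no delay
    # between requests -- at peak hours this competes directly with real
    # customers for server capacity.
    for page in range(1, 500):
        listing = requests.get(f"{BASE}/products?page={page}").json()
        for item in listing.get("items", []):
            requests.get(f"{BASE}/products/{item['id']}")

One scraper doing a full catalogue crawl like this can fire off tens of thousands of requests, each one hitting the same database the checkout flow depends on.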