Hi! I'm trying to archive papers as soon as they appear in a scientific journal, and I've attempted to search for PDF links on each page using some regular web scraping.
The problem is that most of these journals wrap the PDF in their own fancy viewer, so downloading the file is not as straightforward as it seems. However, the Zotero Connector works flawlessly when you trigger the extension manually. I therefore tried setting up a Selenium instance with the extension installed so it can download papers from a given link, but I struggle to actually get the extension to trigger. I tried sending a Ctrl+Shift+S keyboard command, but that doesn't seem to get picked up, and I can't figure out how to call the extension from the console either.
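For context, here is roughly what my attempt looks like (a minimal sketch; the extension path and article URL are placeholders, and as noted above, the synthetic key events don't seem to reach the extension):

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

# Placeholder paths/URLs -- adjust for your own setup.
EXTENSION_PATH = "/path/to/zotero_connector.crx"
PAPER_URL = "https://example.com/some-paper"

options = Options()
options.add_extension(EXTENSION_PATH)  # load the packed Connector extension into Chrome

driver = webdriver.Chrome(options=options)
driver.get(PAPER_URL)

# Attempt to fire the Connector's save shortcut (Ctrl+Shift+S).
# Synthetic key events are delivered to the page, not to the browser chrome,
# which is presumably why the extension command never triggers.
ActionChains(driver) \
    .key_down(Keys.CONTROL) \
    .key_down(Keys.SHIFT) \
    .send_keys("s") \
    .key_up(Keys.SHIFT) \
    .key_up(Keys.CONTROL) \
    .perform()

driver.quit()
```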
Has anyone else attempted a workflow like this before? Or am I overcomplicating things and there are better options available? Help a fellow sailor out. Thanks a lot in advance for your help!
It's been a while (2-3 years) since I looked into this, but last time I did, the answer was a big old NO. I found all kinds of discussions of people mulling over this idea without success.
It must be possible. All the pieces are there, and they are Libre. They must simply be put together. Yet many have asked after it without success. I do not know why it proves so difficult.
This sounds smart and helpful, but may I ask how much work you'll put into this workflow versus how much time it will actually save you? If I can't be bothered to search for the new paper myself, it typically isn't worth my time.
If you find the answer I’d also love to know! Zotero is awesome