Hi there,
I've recently started using Usenet and I'm amazed how much stuff is on there and how fast it can be accessed. Yet... Readarr has been giving me a headache lately, and I think it's due to some peculiarity of Usenet.
It recently started grabbing downloads whose filenames end in wild naming schemes like
(2019).zip.vol31+32.par2 yEnc
just to complain that it didn't find any files in the download. Now, I get that yEnc is some sort of cipher format, and since the files are usually under 10 MB, I figure these are probably single chapters or something. Searching Usenet by hand, I'll usually find many parts of the same audiobook with those numbers slapped onto them. Some don't even follow consecutive numbering and contain vol3+79 or similar.
So:
How am I supposed to download those and how am I supposed to teach Readarr how to handle them?
yEnc isn't a cipher but an encoding for mapping binary to text, similar to base64 (but far more efficient: roughly 1–2% overhead versus base64's 33%). The "yEnc" in the name just denotes that encoding.
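The core of yEnc is simple enough to sketch: add 42 to each byte mod 256, and escape the few resulting bytes that are unsafe in an NNTP article (NUL, LF, CR, and the escape character `=` itself). This is just the raw transform; real yEnc articles also carry `=ybegin`/`=yend` header lines and CRC32 checksums, which this sketch omits.

```python
UNSAFE = {0x00, 0x0A, 0x0D, 0x3D}  # NUL, LF, CR, '='

def yenc_encode(data: bytes) -> bytes:
    """yEnc core transform: (byte + 42) mod 256, escaping unsafe bytes."""
    out = bytearray()
    for b in data:
        c = (b + 42) % 256
        if c in UNSAFE:
            out.append(0x3D)          # escape marker '='
            c = (c + 64) % 256        # shift the escaped byte by 64
        out.append(c)
    return bytes(out)

def yenc_decode(data: bytes) -> bytes:
    """Inverse transform: undo the +64 escape shift, then subtract 42."""
    out = bytearray()
    escaped = False
    for c in data:
        if escaped:
            out.append((c - 64 - 42) % 256)
            escaped = False
        elif c == 0x3D:
            escaped = True
        else:
            out.append((c - 42) % 256)
    return bytes(out)
```

A byte like 214 wraps around to 0 after adding 42, so it gets emitted as the two-byte escape sequence `=@`; most bytes pass through as a single character, which is where the efficiency over base64 comes from.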
The files you're seeing are PAR2 files: parity files used to repair damaged or missing parts of a download. They're useless without the base files. The vol31+32 file in your example contains 32 recovery blocks, which means that if your base files have 32 or fewer damaged blocks, this parity file can repair them.
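If you're curious what the numbers mean, the volNN+MM convention encodes the index of the first recovery block and how many blocks the file holds, so you can pull them apart with a quick regex (using the filename from your example):

```python
import re

# PAR2 volume files are named ...volSS+CC.par2, where SS is the index
# of the first recovery block and CC is the number of blocks it contains.
name = "(2019).zip.vol31+32.par2"
m = re.search(r"\.vol(\d+)\+(\d+)\.par2$", name)
start, count = int(m.group(1)), int(m.group(2))
print(f"recovery blocks {start}..{start + count - 1} ({count} blocks)")
# prints: recovery blocks 31..62 (32 blocks)
```

That's also why the numbers look non-consecutive: a vol3+79 file simply starts at block 3 and carries 79 blocks, it's not part 3 of 79.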
Usually, you'd download all files belonging together in a single download and let your downloader do the rest. This is normally done by loading an NZB file that you either get from a Usenet search engine or an indexer.
You can create a custom format that matches the stuff you don't want and add it to your quality profile with a negative score. Readarr then won't pick those releases up anymore. If you don't know how to do that, I could explain it.
Hey, while I indeed didn't even know one could make a custom format with a negative score, I solved the issue by manually adding the releases those par2 files belonged to.