I recently bought an external 4 TB drive for backups and for keeping an image of another 2 TB drive (in case it fails). The drives are used for cold storage (backups). I would like a recommendation on which filesystem I should format it with. From the factory it comes with NTFS, and that is okay, but I wonder if it would be better with something like ext4. Being readable directly from Windows won't be necessary (although useful), since I could just temporarily turn on SSH on the Linux machine (or a local VM) and start copying.
Edit: the reason for this post is also to address an issue I had while backing up to an NTFS drive on Linux. I had filesystem corruption (thankfully fixed by chkdsk on a Windows machine) and I would like to avoid that in the future.
Edit 2: OK, I have decided I will go with ext4. Now I am making the image of the first 2 TB drive. Wish me luck!
If you store compressed tarballs, they won't be of any benefit.
If you copy the whole directory as-is, filesystem-level compression and the ability to deduplicate data (e.g. with duperemove) are likely to save A LOT of storage (I'd bet on a 3x reduction).
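For example, something roughly like this (assuming the external drive is already formatted as btrfs; the device name, mount point and backup folder are just placeholders):

```
# Mount the backup drive with transparent zstd compression
mount -o compress=zstd /dev/sdX1 /mnt/backup

# Copy the directory tree as-is, then deduplicate identical extents
cp -a /data/. /mnt/backup/backup-2024-06/
duperemove -dr /mnt/backup/
```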
ZFS is made for data integrity. I wouldn't use anything else for my backups. If a file is corrupted, it will tell you which file it is when it hits a checksum error while reading it.
If a redundant copy of the block exists, it will auto-recover and just report what happened. Redundancy can be set up with multiple disks, or on a single disk by setting the "copies" property to more than 1 so every block is written to multiple places.
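As a rough sketch (the device and pool names are just placeholders), a single-disk setup with self-healing copies looks something like:

```
# Create a single-disk pool on the external drive
zpool create backuppool /dev/sdX

# Keep two copies of every block so a bad block can be repaired from the other copy
zfs set copies=2 backuppool

# Periodically walk all data, verify checksums and repair what it can
zpool scrub backuppool
zpool status backuppool
```

Keep in mind copies=2 roughly halves the usable capacity and won't save you if the whole drive dies; it only protects against individual bad blocks.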
If your Linux distro is using btrfs, you can format it to btrfs and use btrfs send for backups. Otherwise the filesystem shouldn't be too big of a deal, unless you want to restore files from a Windows machine. If that is the case, use NTFS.
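If you go that route, the basic flow is something like this (paths and snapshot names are made up, and it assumes both the source and the backup drive are btrfs):

```
# Take a read-only snapshot of the subvolume you want to back up
btrfs subvolume snapshot -r /home /home/.snapshots/home-2024-06

# Send it to the external btrfs drive mounted at /mnt/backup
btrfs send /home/.snapshots/home-2024-06 | btrfs receive /mnt/backup/

# Later backups can be incremental against the previous snapshot
btrfs send -p /home/.snapshots/home-2024-06 /home/.snapshots/home-2024-07 \
    | btrfs receive /mnt/backup/
```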
I use Fedora 40 Kinoite, which uses btrfs, but I am not sure I trust it enough for this data. Also, I forgot to mention in the original post that I had some problems when overwriting files on NTFS, which caused corruption. Thankfully chkdsk on a Windows machine fixed that, but I wouldn't like that to happen again when backing up from a Linux machine.
Depending on your skill level, you may want to consider a filesystem that supports deduplication, like btrfs or ZFS. That way you can make copies of the source drive and deduplicate unchanged segments, making every copy after the first take up only a small percentage of the apparent disk size.
I've personally used duperemove to deduplicate old disk images and it works very well in my experience.
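For reference, the kind of invocation I mean (the paths are just examples, and it needs a filesystem with dedupe support such as btrfs or XFS):

```
# -d actually submits the dedupe requests, -r recurses into subdirectories,
# and the hashfile stores block hashes on disk so reruns are faster
duperemove -dr --hashfile=/var/tmp/backup.hash /mnt/backup/images/
```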
I wouldn't use NTFS with Linux. The driver is stable enough these days that it doesn't corrupt the filesystem anymore, but performance isn't as good as the alternatives.
I'd use ext4 for that, personally. You might also consider full-disk encryption (redhat example) if there's going to be any data on there you wouldn't want a burglar to have. Obviously it wouldn't do much good if you don't encrypt the other disk as well, but having a fresh drive to try it out on makes things easier.
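If you want to try it, a minimal LUKS-on-ext4 setup looks roughly like this (the device name, mapper name and mount point are placeholders, and luksFormat wipes whatever is on the partition):

```
# One-time setup: encrypt the partition, open it, put ext4 on top
cryptsetup luksFormat /dev/sdX1
cryptsetup open /dev/sdX1 backup_crypt
mkfs.ext4 /dev/mapper/backup_crypt

# Normal use: open, mount, back up, then unmount and close
mount /dev/mapper/backup_crypt /mnt/backup
umount /mnt/backup
cryptsetup close backup_crypt
```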