This is why you check your backups periodically and replace the bad copies with good ones. If you're asking how you know which is which: traditionally and fundamentally, even if many people here dismiss it, the storage itself already has checksums, and the chances of that sneaky bitrot where the drive gives you slightly altered data (instead of reporting an error) are so small that most people will never encounter it. Of course, serious data hoarders use checksumming file systems and keep extra checksums for any archived data; archiving and backup formats also have their own checksums, if you use one of those instead of just dropping files onto a regular file system.
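A minimal sketch of the extra-checksums approach, using Python's standard hashlib (the manifest name and format here are illustrative, not any particular tool's):

```python
# Sketch: build a SHA-256 manifest for a directory tree, then verify it later.
import hashlib
import os

def sha256_of(path, chunk=1 << 20):
    """Hash a file in chunks so huge archives don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def write_manifest(root, manifest="checksums.sha256"):
    """Record a digest for every file under root (skipping the manifest itself)."""
    with open(manifest, "w") as out:
        for dirpath, _, files in os.walk(root):
            for name in files:
                p = os.path.join(dirpath, name)
                if os.path.abspath(p) == os.path.abspath(manifest):
                    continue  # don't checksum the manifest we're writing
                out.write(f"{sha256_of(p)}  {p}\n")

def verify_manifest(manifest="checksums.sha256"):
    """Return the list of files whose current digest no longer matches."""
    bad = []
    with open(manifest) as f:
        for line in f:
            digest, path = line.rstrip("\n").split("  ", 1)
            if sha256_of(path) != digest:
                bad.append(path)
    return bad  # empty list means every file still matches
```

Run the verify step on each backup copy periodically; any path it reports is the copy you replace from a good one.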
Just click on the link and then look at your drive; you should be able to spot the difference immediately. Does it look like the drive above (helium) or the one below (air, with the characteristic venting hole and everything)?
Let me guess, it's an air drive (you can tell easily as they look totally different; these are regular Reds, now called "Plus" even though nothing changed, but you get the idea: the air drive is the one with the hole).
How is the disk formatted? If it's one of Apple's file systems, then yes, it won't be seen in Windows (the disk itself will be visible, but no "D:" or anything).
I answered in one of your many posts (removed, and understandably so; you still have 9 of them about this issue, and who knows how many removed ones?!) about the first program found by Google on GitHub that would convert these images. What does that one do?
You won't be able to blindly modify a corrupted block so it doesn't blow up some decompression algorithm in a program where you only have the binary. But with an open source program, first of all it might not crash, and even if it does you'd be able to see the offending line and edit it to not crash or give up when something isn't the expected value.
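As a sketch of what that looks like in practice (simulating bitrot by flipping a byte in a gzip file's stored CRC): the strict decompressor gives up, but because the format is open you can skip the header and inflate the raw deflate stream, ignoring the checksum entirely. This uses Python's gzip/zlib modules as a stand-in for editing the actual program:

```python
# Sketch: recover data from a gzip member whose CRC trailer got corrupted.
import gzip
import zlib

payload = b"important archived data\n" * 100
blob = bytearray(gzip.compress(payload))
blob[-5] ^= 0xFF  # flip bits in one of the 4 stored-CRC32 trailer bytes

# The strict tool refuses: it sees the CRC mismatch and throws everything away.
strict_failed = False
try:
    gzip.decompress(bytes(blob))
except Exception:
    strict_failed = True  # "CRC check failed"

# With knowledge of the format (or an edited source), skip the 10-byte gzip
# header and inflate the raw deflate stream -- the trailer is never checked.
raw = zlib.decompressobj(-15).decompress(bytes(blob[10:]))
assert strict_failed and raw == payload
```

Here the whole payload survives because only the checksum was hit; with real mid-stream corruption you'd still typically get everything up to the damaged block.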
There is no complete solution, not by a long shot.
You could save every file you can grab from everywhere, buy Google storage and enable absolutely every possible backup, use the manufacturer's dedicated transfer-to-a-new-phone tool (whether Google, Samsung, or anyone else), and you'd STILL have tons of apps with tons of data that have to move through their own specific workflow (the best example is WhatsApp, and it's a GOOD example in the sense that people really want that data and there actually is a process to move it). Repeat for anything similar, log in (separately, sometimes via a rather complex process) to everything else, and you'll STILL have tons of things that weren't carried over and need configuring by hand, and so on. You'll be fighting with this for weeks, if not months, if you have more than 2-3 apps.
Nextcloud server has 907 contributors, 2672 open issues, and 37970 closed. That's without the desktop client, the mobile apps (multiple), and everything around it (viewers, editors, voice/video/IM app, and so on). It's packaged in 101 different ways, enough of them being some kind of one-command/one-click install, and you can even buy it fully managed and hosted from multiple providers, including (but not limited to) Nextcloud themselves. I think it's extremely hard to compete with in any way, even with something much simpler.
It depends on which program you're using; even tar, for example, will preserve hard links. Or, at the other extreme, something like duplicacy will simply deduplicate everything that has the same content.
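A quick illustration of the tar behavior, using Python's tarfile module as a stand-in for tar itself (the file names are made up): the second hard link gets stored as a link entry pointing at the first file, not as a second copy of the data.

```python
# Sketch: tar records the second hard link as a link entry, not a duplicate.
import os
import tarfile
import tempfile

work = tempfile.mkdtemp()
a = os.path.join(work, "a.bin")
b = os.path.join(work, "b.bin")
with open(a, "wb") as f:
    f.write(b"x" * 4096)
os.link(a, b)  # b is a hard link to a (same inode, same data)

tar_path = os.path.join(work, "test.tar")
with tarfile.open(tar_path, "w") as t:
    t.add(a, arcname="a.bin")
    t.add(b, arcname="b.bin")  # detected as a hard link to a.bin

with tarfile.open(tar_path) as t:
    kinds = {m.name: m.islnk() for m in t.getmembers()}

# a.bin is a regular file entry; b.bin is a hard-link entry
assert kinds == {"a.bin": False, "b.bin": True}
```

On extraction, the link entry is recreated as a hard link again, so the structure (and the saved space) survives the round trip.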
exFAT will work just fine for pictures and videos; it's stuff like databases or maildir (mail storage where one email = one file) and similar workloads that are the problem. Also, the write cache will usually be disabled on it, so you can just yank the drive.
I'd keep NTFS on all "working drives", the ones used mostly like an internal drive even if they're external (a block device is a block device, despite some people here needing a fainting couch when they read the letters "USB"). Everything meant for backups or for moving between multiple people or computers I'd do as exFAT, mostly because it's MISSING the following (in order from annoying to dangerous to disastrous): permissions, junctions (links), and EFS.
Are you using a Thunderbolt cable? Most random USB-C cables you get with phones (or even basic external SSDs/enclosures) don't have the required connections.
It's SMR, so it'll be slow as hell and will sometimes rearrange itself for days if you don't let it sleep. But there's no choice for 2.5" drives, at least if you want a somewhat large size (and even for many worthless smaller sizes nowadays).
That would work. Actually, if it's a combination that supports TRIM (OS, file system, whatever it sees over the USB; see this to get an idea of how complex things can get), reading the saved backup might be equivalent to reading the whole SSD. Even if you used only 64 GB of 1 TB, if the rest is TRIMed, nothing (more) would "really" be read even if you do a full badblocks pass (or dd to /dev/null or any other full read test). Sure, it'll take a while to feed 900+ GB of zeroes (or whatever the TRIMed sectors return) over USB, but not much will really be read from the SSD.
Give it a full read, which you should be doing anyway to check your backups?
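A minimal full-read sketch in Python (the path is illustrative; on a real drive you'd point it at the mounted backup files or the raw device), hashing as it goes so every byte actually has to come back from the drive:

```python
# Sketch: sequentially read a file/device end to end, hashing every chunk.
# Any unreadable sector surfaces as an I/O error; the digest doubles as a
# fingerprint you can compare against a known-good copy.
import hashlib

def full_read(path, chunk=1 << 20):
    """Read path completely; return (bytes_read, sha256_hex)."""
    h = hashlib.sha256()
    total = 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
            total += len(block)
    return total, h.hexdigest()
```

Same idea as `dd if=... of=/dev/null` or badblocks in read-only mode, just with a checksum thrown in for free.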
The point is you could run out of space anywhere, and if you're suggesting that 50 GB is a lot and you'll never run out of space, I think you're preaching to the totally wrong choir.
That sounds like an application issue; you can run out of space locally as well (especially with Macs, which are shockingly stingy with storage in their base configurations and cost a ton for more, and whose SSDs were soldered even back when the machines were more like PCs than phones). Backups wouldn't help recover work that was never saved.
These drives have been on the market since early 2021; sure, they're still fairly new, but with a 5-year warranty (and they recently decided to actually offer the warranty to consumers in Europe too) they shouldn't be that much of a risk.
It's probably the cheapest 20TB Seagate. What would you replace it with, something more expensive (not that there would be much of a difference)?
Heh, one of the drives that started SMRgate. At least it should have 3 years of warranty, but you might have trouble getting that honored in your region.
NAS drives are the "shut up about your disk being slow, your gigabit is even slower" category (back when they were introduced, most NASes couldn't even saturate gigabit, if they had it at all). That's the answer if anyone asks how they differ from the "DAS" and "Server" categories. That the marketing was somehow so successful that they're now considered superior to the others is another story.
As usual, the answer is of course rclone. It even has a --drive-skip-gdocs switch. Just: rclone move --drive-skip-gdocs gdrive_remote:whatever_directory_if_any_at_all some_local_place