I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It's about a 20-minute read, so thank you very much in advance if you find the time to read it.
The underlying chips certainly come in exact powers of two, but the drive size you get as a consumer is practically never an exact power of two, which is why it doesn't really make sense to divide by 1024.
The size you provided would be 500107862016 / 1024 / 1024 / 1024 = 465.76174163818359375 GiB. Divided by 1000³ it would be 500.107862016 GB, so neither number is "pretty" and both would have to be rounded. That's why there is no benefit in using 1024 for storage devices, even SSDs.
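If you want to double-check the arithmetic, here's a minimal Python sketch using the 500107862016-byte drive size from above (`Decimal` is only there so the printed values stay exact):

```python
from decimal import Decimal

size_bytes = 500_107_862_016           # the drive size from the example above

gb  = Decimal(size_bytes) / 1000**3    # decimal gigabytes (GB)
gib = Decimal(size_bytes) / 1024**3    # binary gibibytes (GiB)

print(gb)   # 500.107862016
print(gib)  # 465.76174163818359375 -- neither value is "pretty"
```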
So you could say 17.179869184 GB or 16 GiB. Note that those 16 GiB are not rounded; that is the exact number of bytes for that RAM module. So for memory like caches, RAM, etc. it definitely makes sense to use binary prefixes with the 1024 conversion, but for storage devices it wouldn't make a difference because you'd have to round anyway.
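Same comparison as a quick sketch for a 16 GiB module (the byte count is a power of two, so only the binary prefix comes out exact):

```python
ram_bytes = 16 * 1024**3        # 17179869184 bytes, a 16 GiB module

print(ram_bytes / 1024**3)      # 16.0           -- exact, nothing to round
print(ram_bytes / 1000**3)      # ~17.179869184  -- the decimal prefix needs all the digits
```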
There is a benefit in using 1000 because it's consistent with all the other 1000 conversions, from kg to gram, km to meter, etc. And you can do it in your head because we use a base-10 number system.
36826639 bytes are 36.826639 MB. But how many MiB? I don't know, I couldn't tell you without a calculator.
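Same thing in Python, if you want to see why one conversion is mental math and the other isn't (the 36826639 figure is just the arbitrary example from above):

```python
n = 36_826_639                  # bytes

print(n / 1000**2)              # 36.826639  -- just shift the decimal point
print(n / 1024**2)              # ~35.1206   -- try doing that in your head
```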
You don't have to know. It does not matter because your 8 GB stick can't fit sixteen 512 MB files anyway. Funnily enough, it might fit 500 MB files if it is FAT32.
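For what it's worth, a back-of-the-envelope check of that claim in Python, assuming the stick holds exactly 8 × 1000³ bytes, reading "512 MB" the Windows way (i.e. 512 MiB), and ignoring filesystem overhead, which is small on FAT32:

```python
stick = 8 * 1000**3                # an "8 GB" stick, taken at its marketed decimal size
files_512mib = 16 * 512 * 1024**2  # sixteen 512 MiB files = 8589934592 bytes
files_500mb  = 16 * 500 * 1000**2  # sixteen 500 MB files  = 8000000000 bytes

print(files_512mib <= stick)       # False -- doesn't fit
print(files_500mb <= stick)        # True  -- fits, minus whatever the filesystem itself uses
```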
Being consistent with base-10 systems does not matter in real-world usage. Literally nobody cared before the asshats changed it.
Edit: I also understand SI, down to its history. I don't live in an inch country. Computing is different from physical measurements. In computing, 1024 is more "correct".