IEEE 754
![](https://cdn.fosstodon.org/media_attachments/files/113/731/563/185/873/212/original/23c614dd48e2a3ce.png?format=webp)
cross-posted from: https://lemmy.ml/post/24332731
Stolen from here: https://fosstodon.org/@foo/113731569632505985
negative zero is real
Well, duh. All fractions are real.
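Both readings check out in IEEE 754: -0.0 is a distinct bit pattern that compares equal to 0.0 but keeps its sign through some operations. A minimal C sketch, for the curious:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    double pz = 0.0, nz = -0.0;
    uint64_t pb, nb;
    memcpy(&pb, &pz, sizeof pb);  /* well-defined way to read the bits */
    memcpy(&nb, &nz, sizeof nb);

    printf("0.0 == -0.0  : %d\n", pz == nz);       /* 1: they compare equal */
    printf("bits of  0.0 : %016" PRIx64 "\n", pb); /* 0000000000000000 */
    printf("bits of -0.0 : %016" PRIx64 "\n", nb); /* 8000000000000000: sign bit set */
    printf("1.0 / -0.0   : %g\n", 1.0 / nz);       /* -inf: the sign still matters */
    return 0;
}
```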
LOL! Man, I learned that in college and never used it again. I never came across a scenario in my professional career as a software engineer where knowing this was useful outside of labs/homework.
Anyone got an example where this knowledge actually came in useful?
It’s useful to know that floats don’t have unlimited precision, and why adding a very large number to a very small number isn’t going to work well.
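A small C sketch of that large-plus-small failure mode (the magnitudes are arbitrary examples):

```c
#include <stdio.h>

int main(void) {
    /* In float (24-bit significand), adjacent values near 1e8 are 8 apart,
       so adding 1 rounds straight back to the same number. */
    float big_f = 1e8f;
    printf("float : 1e8  + 1 == 1e8  ? %d\n", big_f + 1.0f == big_f); /* 1: absorbed */

    /* double (53-bit significand) still has room at 1e8... */
    double big_d = 1e8;
    printf("double: 1e8  + 1 == 1e8  ? %d\n", big_d + 1.0 == big_d);  /* 0: exact here */

    /* ...but hits the same wall at 1e16 */
    double huge = 1e16;
    printf("double: 1e16 + 1 == 1e16 ? %d\n", huge + 1.0 == huge);    /* 1: absorbed again */
    return 0;
}
```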
And this is why f64 exists!
If you’re doing any work with accounting, or writing test cases with floating point values.
No. I don't have to remember that.
I just have to remember the limits and how you can break the system. You don't have to think about the representation.
Accumulation of floating point precision errors is so common, we have an entire page about why unit tests need to account for it.
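A sketch of the usual workaround in tests: compare with a tolerance instead of `==`. The helper name and tolerance value here are illustrative choices, not a standard API:

```c
#include <stdio.h>
#include <math.h>

/* Compare with a relative tolerance instead of ==. Pick a tolerance
   that fits your domain; 1e-9 below is just an example. */
static int approx_eq(double a, double b, double rel_tol) {
    return fabs(a - b) <= rel_tol * fmax(fabs(a), fabs(b));
}

int main(void) {
    /* Summing 0.1 ten times drifts away from exactly 1.0. */
    double sum = 0.0;
    for (int i = 0; i < 10; i++) sum += 0.1;

    printf("sum == 1.0         : %d\n", sum == 1.0);                /* 0 */
    printf("approx_eq(sum, 1.0): %d\n", approx_eq(sum, 1.0, 1e-9)); /* 1 */
    return 0;
}
```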
In game dev it’s pretty common. Lots of stuff is built on floating point while balancing quality against performance, and we can’t just switch to double when things start getting janky because we can’t afford the cost. Instead we actually have to think and work out the limits of 32-bit floats.
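One concrete way to see those limits: the gap between adjacent 32-bit floats (the ULP) grows with magnitude, so world-space precision decays far from the origin. A C sketch, with arbitrary example coordinates:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Distance to the next representable float at various magnitudes. */
    float coords[] = {1.0f, 1024.0f, 100000.0f, 10000000.0f};
    for (int i = 0; i < (int)(sizeof coords / sizeof coords[0]); i++) {
        float x = coords[i];
        printf("at %12.1f the next float is %.9g away\n",
               x, nextafterf(x, INFINITY) - x);
    }
    /* At 1e7, adjacent floats are a full 1.0 apart: positions snap
       to whole units that far from the origin. */
    return 0;
}
```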
How long has this career been? What languages? And in what industries? Knowing how floats are represented at the bit level is important for all sorts of things including serialization and math (that isn't accounting).
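For instance, the three fields of a single-precision float can be pulled out with a few shifts; a minimal C sketch:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* An IEEE 754 single is 1 sign bit, 8 exponent bits (biased by 127),
       and 23 fraction bits. */
    float f = -6.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits); /* well-defined way to get the bits */

    uint32_t sign =  bits >> 31;
    uint32_t exp  = (bits >> 23) & 0xFF;
    uint32_t frac =  bits & 0x7FFFFF;

    printf("%g = sign %u, exponent %u (unbiased %d), fraction 0x%06X\n",
           f, (unsigned)sign, (unsigned)exp, (int)exp - 127, (unsigned)frac);
    /* -6.25 = -1.5625 * 2^2, so: sign 1, exponent 129, fraction 0x480000 */
    return 0;
}
```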
More than a surface-level understanding is not necessary. The level of detail in the meme is sufficient for 99.9% of jobs.
No, that's not just accounting; it's pretty much everyone who isn't working on very low-level libraries.
What is important for all sorts of things, in turn, is knowing how irrelevant most details are in most cases. The bit level is not important if you're operating 20 layers above it, just as business-logic details are not important if you're optimizing a general math library.
Since 2008.
I've worked as a build engineer, application developer, web developer, product support, DevOps, etc.
I only ever had to worry about the limits, but never how it works in the background.
https://h14s.p5r.org/2012/09/0x5f3759df.html comes to mind
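That's the Quake III fast inverse square root, which only works because of the bit-level layout of floats: reinterpreted as an integer, a float's bits approximate a scaled log2 of its value. A C version of the well-known routine (written with memcpy rather than the original's pointer cast, to avoid undefined behavior):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static float q_rsqrt(float number) {
    float x2 = number * 0.5f, y = number;
    uint32_t i;
    memcpy(&i, &y, sizeof i);    /* read the float's bits as an integer */
    i = 0x5f3759df - (i >> 1);   /* magic-constant initial guess        */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - x2 * y * y); /* one Newton-Raphson refinement step  */
    return y;
}

int main(void) {
    printf("q_rsqrt(4.0)  = %f (exact: 0.5)\n", q_rsqrt(4.0f));
    printf("q_rsqrt(10.0) = %f (exact: ~0.316228)\n", q_rsqrt(10.0f));
    return 0;
}
```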
To not be surprised when 0.1 + 0.2 != 0.3
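A one-file C demonstration of why: neither 0.1 nor 0.2 has an exact binary representation, and the two rounding errors don't cancel.

```c
#include <stdio.h>

int main(void) {
    printf("0.1 + 0.2 == 0.3 ? %d\n", 0.1 + 0.2 == 0.3); /* 0 */
    /* Printing with full precision shows the nearest doubles involved: */
    printf("0.1 + 0.2 = %.17g\n", 0.1 + 0.2);            /* 0.30000000000000004 */
    printf("0.3       = %.17g\n", 0.3);                  /* 0.29999999999999999 */
    return 0;
}
```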
Writing floating point emulation code?
I'd pretty much avoided learning about floating point until we decided to refactor the softfloat code in QEMU to support additional formats.
The vast majority of IT professionals don't work on emulation or even system kernels. Most of us are building simple applications, or supporting those applications, or handling their deployment and maintenance.
finally a clever user of the meme
It's not mimicry.