
Share specific examples where software built from source works better for you than packages or pre-built binaries

I looked specifically for examples of this and didn't find answers; they're buried in general discussions about why compiling may be better than pre-built binaries. The reasons I found were control of flags and features, and optimizations for specific chips (like Intel AVX or ARM NEON), but to what degree do those apply today?

The only software I can tell benefits greatly from building from source is ffmpeg, since there are many non-free encoders, decoders, and upscalers that can be bundled, and performance varies a lot between devices depending on which of them the CPU or GPU supports. For instance, Nvidia's hardware encoders typically produce higher-quality video at similar file sizes than Intel's, AMD's, or Apple's. Software encoders like x265 have optimizations for AVX and NEON (SIMD extensions for CPUs).
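
For concreteness, a minimal sketch of such a build; this assumes the x264, x265, and fdk-aac development headers are already installed, and the flag set is illustrative rather than exhaustive (see ./configure --help for the full list):

    # Enable GPL and non-free components that prebuilt binaries often omit.
    ./configure --enable-gpl --enable-nonfree \
        --enable-libx264 --enable-libx265 --enable-libfdk-aac
    make -j"$(nproc)"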

22 comments
  • For me the biggest benefit is the ease of applying patches. For example, in Nix I can easily take a patch that is either unreleased or that I wrote myself and apply it to my systems immediately. I don't need to wait for it to be released upstream and then packaged in my distro. This lets me fix problems and get new features quickly without messing with my system in any other way (no packages in other directories that need to be cleaned up, no extra steps to remember after updates, no cases where some packages use different versions, and no breakage due to library ABI breaks).

    Another benefit, which you're pointing at, is changing build flags. Oftentimes I want to enable an optional feature that my distro doesn't enable by default.

    Lastly, building packages with different micro-architecture optimizations can be beneficial. I don't do this often, but occasionally, when I want to run some compute-heavy workload, it's nice to get a small performance boost.
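
    In plain shell terms (this is roughly what a Nix overlay automates; the project name, URLs, and patch below are placeholders), the patch-plus-optimization workflow looks like:

        # Fetch the source, apply a not-yet-released patch, and build
        # with optimizations for the local CPU. All URLs are hypothetical.
        curl -LO https://example.org/foo-1.2.tar.gz
        tar xf foo-1.2.tar.gz && cd foo-1.2
        curl -L https://example.org/pending-fix.patch | patch -p1
        ./configure CFLAGS="-O2 -march=native"
        make -j"$(nproc)"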

    • I build software that I changed or patched
    • When the bat version in the repos was broken, I just installed it with cargo, which compiles the latest version (see the sketch after this list)
    • You can get a compiled version with a '-git' package from the AUR if you need the latest features not yet in a stable release
    • Some pieces of software I use I made myself so they are compiled by me
    • Maybe you want to install some software that is not available precompiled
    • The XZ backdoor didn't work if you compiled it yourself (the malicious build script only injected the payload into deb/rpm package builds made from the release tarballs): https://www.openwall.com/lists/oss-security/2024/03/29/4
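
    For the cargo and AUR routes above, a sketch (bat is the real package name; --locked builds with the dependency versions the release was tested against, and the bat-git AUR package is assumed to exist):

        # Build the latest bat release from source via cargo:
        cargo install bat --locked

        # Or build the development version from the AUR (Arch Linux):
        git clone https://aur.archlinux.org/bat-git.git
        cd bat-git && makepkg -si
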
  • "didn’t find answers [:] they’re buried in general discussions about why compiling may be better than pre-built. The reasons I found were control of flags and features, and optimizations for specific chips (like Intel AVX or ARM Neon), but to what degree do those apply today?"

    You won't build and install directly from source in any proper enterprise environment, simply because validation breaks, (provable) consistency goes with it, and that takes out reliability.

    Even accounting for the gains when you're tuning stuff, or when it's a home build, or when it's a kernel build and you're removing or adding drivers or tunable defaults, ultimately you will be building a package: a portable artefact to be submitted for testing or pulled out of backups for an easy re-install. Kernel builds especially take a long time, and even when you're driving much of it with makefiles, you're STILL going to build a package, if only so the process is encoded and repeatable and you don't have to re-make everything once it all works (more of an issue back when building a kernel package took 25 hours, but you get the idea).
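
    As a sketch of "build a package, not a loose install" for the kernel case (Debian-family hosts; bindeb-pkg is a standard kernel Makefile target, and the config step assumes you're reusing an existing .config):

        # In the kernel source tree: refresh the config, then build
        # installable .deb packages instead of installing from the tree.
        make olddefconfig
        make -j"$(nproc)" bindeb-pkg
        # The resulting ../linux-image-*.deb is the portable artefact:
        # archive it, submit it for testing, reinstall it from backups.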

    So. In short, if someone's telling you to compile into production from source, it's still a security risk and it's also inefficient past the N=1 stage. Irresponsible for TWO reasons, then.

    Edit: I coordinated with Support while I was doing Security work in ~2005. You wanna know how to piss off your support worker and fast-track a ticket to 'no repro' death? "I compiled it on the machine from source ..." And that goes for paid support and GitLab project volunteer support alike.

  • The performance boost provided by compiling for your specific CPU is real but not terribly large (<10% in the last tests I saw some years ago). Probably not noticeable on common arches unless you're running CPU-intensive software frequently.

    Feature selection has some knock-on effects. Tailored features mean you don't have to install masses of libraries for features you don't want, libraries which come with their own bugs and security issues. The number of vulnerabilities added and the amount of disk space chewed up usually isn't large for any one library, but once you're talking about a hundred or more, it adds up.

    Occasionally, feature selection prevents mutually contradictory features from fighting each other. For instance, a custom kernel that doesn't include the nouveau driver isn't going to have nouveau fighting the proprietary nvidia driver for control of the system's video card, as happened to an acquaintance of mine who was running Ubuntu (I ended up coaching her through blacklisting nouveau). These cases are very rare, however.
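
    For reference, the standard modprobe blacklist that settles that particular fight on Debian/Ubuntu-family systems:

        # Tell modprobe never to load nouveau, then rebuild the initramfs
        # so the blacklist also applies during early boot.
        printf 'blacklist nouveau\noptions nouveau modeset=0\n' \
            | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
        sudo update-initramfs -u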

    Disabling features may allow software to run on rare or very new architectures where some libraries aren't available, or aren't available yet. This is more interesting for up-and-coming arches like riscv than dying ones like mips, but it holds for both.
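
    Generically, with an autotools-style project (the flag names below are hypothetical; real projects list theirs under ./configure --help):

        ./configure --help | less                    # discover the optional features
        ./configure --disable-gui --without-libfoo   # hypothetical flags: drop deps missing on this arch
        make -j"$(nproc)"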

    One specific pro-compile case I can think of that involves neither features nor optimization is that of aseprite, a pixel graphics program. The last time I checked, it had a rather strange licensing setup that made compiling it yourself the best choice for obtaining it legally.

    (Gentoo user, so I build everything myself. Except rust. That one isn't worth the effort.)
