Hi all. I was curious about the pros and cons of using Proxmox in a home lab setup. It seems like overkill in most home lab setups, but I feel like there may be something I'm missing. Let's say I run my home lab on two or three different SBCs. The main server is an x86 i5 machine with 16 GB of memory and the others are ARM devices with 8 GB. Ample storage on all of them. Wouldn't Proxmox be overkill here and eat up more system resources than just running base Ubuntu, Debian, or another server distro on each of them and running the services I need either from binaries or in Docker? It seems like the extra memory needed for the Proxmox layer plus the containers would just eat into available memory and CPU. Am I wrong in thinking that Proxmox is better suited to a machine with 32 GB or more of memory and a reasonably powerful CPU?
I used Proxmox for a couple of years and it's good if you run a lot of VMs or LXCs, but I found that I'm not really the target audience. I ended up only running one Debian VM for my Docker containers. It was fine, but I eventually felt that Proxmox added no value for me, and the end result was sacrificing some memory and performance to virtualization overhead (virtio devices and the like) for CPU/GPU/RAM/filesystems. If your machines only have 8-16 GB of RAM I don't think it would be a good idea; the rule of thumb I've seen is to dedicate 2 GB to Proxmox itself, on top of whatever each guest OS requires. Meanwhile, a plain Debian install on a VPS of mine uses about 450 MB of RAM.
For me, pros:
Native ZFS support - invaluable, ZFS is terrific. MergerFS+SnapRAID is a decent replacement, but the dodgy tooling and laundry list of footguns make me nervous to use it on important data. ZFS is idiot-proof, as long as you know what you're doing during the initial setup. RAIDZ expansion is coming this year, and you can already use mixed-size disks in a RAIDZ as long as you accept that every disk counts as the smallest one, so I personally feel ZFS is acceptable for grab-bag disk usage now (quick sketch after the pros).
Separation of bare metal and server environment, which means you can spin up another server VM from scratch without impacting the previous one, then switch with zero downtime. In the end, I replaced Proxmox with Debian on a ZFS root (via ZFSBootMenu) and wrote a few hundred lines of bash to automate the installation (roughly sketched below), so when I switched it only took about 30 minutes of downtime start to finish.
Isolation of different environments. If my VM gets hacked, it has a harder time reaching my Proxmox host, etc. I run all services in isolated Docker environments anyway, so this isn't that big of a perk for my threat model.
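To illustrate the mixed-size point above, here's a minimal sketch; the disk names and sizes are hypothetical placeholders, not my actual pool:

    # Three hypothetical disks: 4 TB, 4 TB, and 8 TB. In a RAIDZ1 vdev every disk is
    # treated as the size of the smallest member, so usable capacity is roughly
    # 2 x 4 TB until the smaller disks are eventually replaced with larger ones.
    zpool create -o ashift=12 -O compression=lz4 tank \
        raidz1 /dev/disk/by-id/ata-disk-4tb-a /dev/disk/by-id/ata-disk-4tb-b /dev/disk/by-id/ata-disk-8tb-c
    zpool list -v tank   # shows the vdev capacity clamped to the smallest disk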
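And since I mentioned the install script: a heavily abbreviated sketch of the core steps it automates. Pool/dataset names and the disk path are placeholders, and the real script also handles partitioning, chroot configuration, networking, users, and so on:

    # Create the root pool and a boot-environment dataset, mounted under /mnt for install.
    zpool create -f -o ashift=12 -O compression=lz4 -O mountpoint=none -R /mnt \
        zroot /dev/disk/by-id/ata-EXAMPLE-part3
    zfs create -o mountpoint=none zroot/ROOT
    zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT/debian
    zpool set bootfs=zroot/ROOT/debian zroot
    zfs mount zroot/ROOT/debian

    # Bootstrap Debian into the new root and set the kernel command line ZFSBootMenu reads.
    debootstrap bookworm /mnt http://deb.debian.org/debian
    zfs set org.zfsbootmenu:commandline="quiet" zroot/ROOT/debian
    # ...then chroot in, install the ZFS packages, copy the ZFSBootMenu EFI image to the
    # ESP, add the EFI boot entry, and configure networking and users.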
Cons:
Partitioning RAM between the ZFS ARC, Proxmox itself, and the VMs leads to inherent inefficiencies at the margins.
I usually give my VM n-1 CPU cores (leaving one for the host), which is still less compute than if I had just used the CPU natively.
GPU passthrough to a VM can be less efficient, depending on the GPU and how it handles it. My iGPU is less performant when using its ~SR-IOV feature.
Learning requirement - not a huge learning curve but it's a lot of knowledge that I will not use now that I've stopped using Proxmox
Hosting your data pool on the Proxmox host or a dedicated data VM means that your server VM needs to use NFS to access its data, which lacks a handful of features (e.g. inotify) and is a pain (example mount after the cons).
Need to maintain two systems for updates, downtimes, etc
More points of failure
Extra startup time
Run by a company that thinks it's okay to show WinRAR-style nag popups every time you load the console, and that requires you to manually dig through the source to disable them (the usual workaround is sketched below). I understand it's their business model, but that doesn't change how it affects me, the end user, who doesn't have $120/year to spend on disabling a popup.
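For the NFS con, this is the kind of mount I mean; the host IP, export path, and mount point are hypothetical:

    # /etc/fstab inside the server VM, mounting a dataset exported from the Proxmox host
    # or a dedicated storage VM. It works, but inotify events for changes made on the
    # other side never arrive, so anything watching the filesystem falls back to polling.
    192.168.1.10:/tank/media  /mnt/media  nfs  defaults,_netdev,noatime  0  0

After editing, `mount -a` (or a reboot) picks it up.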
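And on the nag popup: the workaround people pass around is a sed patch against the web UI toolkit plus a proxy restart. The exact string to replace has changed between Proxmox releases, so treat the pattern below as illustrative only and check a current community script before running anything:

    # Back up the toolkit file, neuter the subscription check (pattern varies by version!),
    # then restart the web proxy so the change takes effect.
    cp /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js{,.bak}
    sed -i "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
    systemctl restart pveproxy.service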
I went exactly the same route. Years of Proxmox before realizing it is not KISS in any way for my use cases. Switched to NixOS on ZFS root (so no bash installation scripts ;) ).
However, Docker doesn't offer the same level of isolation and security as VMs. I am currently looking into gVisor for that.
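In case it helps, the Docker side of gVisor is pretty small. Roughly this, assuming runsc is already installed at /usr/local/bin/runsc (paths and the test image are just examples):

    # Register gVisor's runsc as an additional Docker runtime. Note this overwrites any
    # existing /etc/docker/daemon.json, so merge by hand if you already have one.
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "runtimes": {
        "runsc": { "path": "/usr/local/bin/runsc" }
      }
    }
    EOF
    sudo systemctl restart docker

    # Smoke test: dmesg inside the container prints gVisor's synthetic kernel log
    # instead of the host's, confirming the container runs under the sandbox.
    docker run --rm --runtime=runsc alpine dmesg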
I remember evaluating the price a long time ago and thinking it was too much just to disable a pop-up. While writing my post I went back to their site, saw the Standard subscription, and assumed that was what I had looked at a few years back: https://shop.proxmox.com/index.php?rp=/store/proxmox-ve-standard
Ah yes, I did the same thing as you when I checked again, but from my PC.
I just paid the 110 myself after using it for 2 years. It's not for the popup, since I was using a script to remove it; it's more to get the production-ready updates. My server hosts too many important things now, and I don't want less-tested changes. That, and to support the devs.