How will I know how many services I can run on my self-hosted server?

Hi y'all. I've got an Intel NUC 10 here. I want to run a few apps on it, like BitWarden, PiHole, NextCloud, Wireguard, and maybe more, just for my own use, inside my home.

Is there a way to gauge whether the hardware is up to the task in advance? Like, I'd love to be able to plan this by saying, "this container will use x MB of RAM and 5% of the CPU" and so on?

I want to run everything on this one PC since that's all I have right now.

EDITED TO ADD: Thank you all! Great info. :thumbsup:

23 comments
  • RAM is really the limiting factor for most servers.

    If you're going to have fewer than 5 users on the services, they're probably not all going to be used at the same time, so CPU usage will depend on which ones are being hit at the moment.

    None of the services you've listed are particularly heavy, so you'll be good for those and a bunch more, no problem.

  • I should add more RAM soon because I'm running 30 services on 8GB atm, and it looks like I'm about to hit the wall. Services I run are pihole, nextcloud, wireguard server, arr stack, jellyfin, homeassistant and more.

  • This is tangential to your question, but I've been playing with Kubernetes and its ability to ration resources like CPU and RAM. I'm guessing that Docker has a similar facility. Doing this, I hope, will let me have Plex transcode videos in the background without affecting the responsiveness of a web app I'm using, and will kill and restart that one app I wrote that has a memory leak I can't find.
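
    For reference, Docker does have equivalent flags on docker run; something like this is the general idea (the image names and limit values here are just illustrative, not recommendations):

        # Cap Plex at 1.5 CPU cores and 2GB of RAM so a transcode can't starve everything else
        docker run -d --name plex --cpus="1.5" --memory=2g plexinc/pms-docker

        # Cap the leaky app at 256MB; --restart=on-failure brings it back after the kernel OOM-kills it
        docker run -d --name leaky-app --memory=256m --restart=on-failure leaky-app:latest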

  • BitWarden+PiHole+NextCloud+Wireguard combined will add up to maybe 100MB of RAM or so.

    Where it gets tricky, especially with something like NextCloud, is that the performance you see will depend tremendously on what kind of hard drives you have and how much of the data can be cached by the OS. If you have 4GB of RAM, then roughly 3.5GB of that can be used as cache for NextCloud (and whatever else you have that uses considerable storage). If your NextCloud storage is tiny (like 3.5GB or less), the OS can keep the entire storage in cache and you'll see lightning-fast performance. If you have larger storage (and are actually accessing a lot of different files), then NextCloud will actually have to touch disk, and if that's a mechanical (spinning rust) hard drive, you will definitely see the occasional 1-second lag when that happens.

    And then if you have something like Immich on top of that....

    And then if you have transmission on top of that....

    Anything that is using considerable filesystem space will be fighting over your OS's filesystem cache. So it's impossible to say how much RAM would be enough: 512MB could be more than enough, and 1TB might not be enough. It depends on how you're using it and how tolerant you are of cache misses.
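
    If you're curious how much of your RAM is currently going to that cache, free shows it in the buff/cache column (cache is reclaimable, so it doesn't count against "available"):

        # overall memory picture; buff/cache is what the OS is using for filesystem caching
        free -h

        # refresh every 5 seconds while you exercise NextCloud, to watch the cache fill up
        watch -n 5 free -h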

    Mostly you won't have to think about CPU. Most things (like NextCloud) would be using like <0.1% CPU. But there are some exceptions.

    Notably, Wireguard (or anything that requires encryption, like an HTTPS server) will have CPU usage that depends on your throughput. Wireguard, in particular, has historically been a heavy CPU user once you get up to like 1Gbit/s. I don't have any recent benchmarks, but if you're expecting to use Wireguard beyond 1Gbit/s, you may need to look at your CPU.
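
    If you want to see where your own box tops out, a quick check is to run iperf3 across the tunnel and watch CPU usage with top or htop while it runs (the address below is just a placeholder for the far end's Wireguard IP):

        # on the machine at the far end of the tunnel
        iperf3 -s

        # on the NUC, pointed at the other end's Wireguard address
        iperf3 -c 10.0.0.1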

  • It's very hard to say anything definitive, because many of those can generate very different load depending on how much traffic/activity they get (and how it correlates with other service usage at the same time). It could be anything from a minimal load (all services for personal use, so a single user and low traffic) to a very busy system (a family-and-friends instance with high traffic), and the hardware requirement estimates would change accordingly.

    As you already have a machine, just put them all on it and monitor resource utilization. If it fits, it fits; if it doesn't, you'll need to replace the NUC (if you're CPU-bound; I believe CPUs are not upgradeable on those?) or upgrade it (if you're RAM-bound). Either way, you won't have to set the services up twice.
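
    A couple of quick ways to watch it, assuming everything ends up in Docker (package names may differ by distro):

        # per-container CPU and RAM snapshot
        docker stats --no-stream

        # per-device disk utilization, refreshed every 5 seconds (iostat comes with the sysstat package)
        iostat -x 5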

    • This is the only real answer - it is not possible to do proper capacity planning without trying the same workload on similar hardware [1].

      Some projects give an estimate of resource usage based on a number of factors (simultaneous/total users...), but most don't, and even those estimates may be far from actual usage during peak load, with many concurrent services, etc.

      The only real answer is close monitoring of resource usage and response times (possibly with alerting; a rough sketch is at the end of this comment), and then adding resources or cutting down on resource-hungry features/programs if resource usage goes over a certain threshold (~80% is when you should start paying attention) and/or performance starts to degrade.

      My general advice is to max out the installed RAM from the start, virtualize your hosts (which makes it easier to add/remove resources or migrate a hungry VM to more powerful hardware later), and watch out for disk I/O on certain workloads (databases... having the DB engines running off SSDs helps greatly).
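
      As a rough illustration of the alerting part, a tiny cron job along these lines is often enough (the 80% threshold, the address, and the assumption of a working local mail setup are all just examples):

          #!/bin/sh
          # Hypothetical cron job: complain when memory usage crosses ~80%
          USED=$(free | awk '/^Mem:/ {printf "%d", $3/$2*100}')
          if [ "$USED" -gt 80 ]; then
              echo "Memory at ${USED}% on $(hostname)" | mail -s "RAM warning" admin@example.com
          fi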
