So as I look to build my first dedicated media server, I’m curious about what OS options I have which will check all the boxes. I’m interested in Unraid, and if there’s a Linux distro that works especially well I’d be willing to check that out as well. I just want to make sure that whatever I pick, I can use qbittorrent, Proton, and get the Arr suite working
Are there any resources available for how to do this? I feel like I more or less understand how Docker works conceptually, but every time I try to actually use it, I feel in over my head very quickly
look for docker-compose + whatyouwant specifically, it's way more straightforward. once you have one set up, it gets easier adding on different software.
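To illustrate the "docker-compose + whatyouwant" pattern, here's a minimal hypothetical stack with qBittorrent and Sonarr. The images are the linuxserver.io ones; the paths, PUID/PGID, and ports are placeholders you'd adapt to your own setup:

```yaml
# Minimal sketch — adjust volumes, user IDs, and timezone for your system.
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - WEBUI_PORT=8080
    volumes:
      - ./config/qbittorrent:/config        # app config lives here
      - /mnt/media/downloads:/downloads     # placeholder download path
    ports:
      - "8080:8080"
    restart: unless-stopped

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config/sonarr:/config
      - /mnt/media:/media                   # shared media library path
    ports:
      - "8989:8989"
    restart: unless-stopped
```

Bring it up with `docker compose up -d`, and adding another arr later is just another service block in the same file.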
I just recently discovered Proxmox and am slowly moving my docker containers off my NAS. Picked up a used Intel NUC, i5-8259, 32gb ram, 512gb HDD. It's been great so far; very happy with how it performs paired with Proxmox.
I use Unraid and I'm loving it. Super stable, easy to manage and set up dockers, lets me pool my hard drives and set up parity. Highly recommend. Only thing that I've had a hard time with is finding a stable flash drive - you'd be surprised how many start to fail when used 24/7
The thumb drive isn’t used all the time. I’ve been using a cheap USB drive that cost me like $12 several years ago, and haven’t had any issues yet. It’s been running constantly for the last year or two.
I had an issue recently where my USB drive was "disconnecting", which triggered Unraid to throw read errors and then panic. I had checked, though, and it wasn't being regularly read or written to, but it still caused my whole server to crash. Changing the USB drive has fixed it since, for now 😄
Unraid would be a very good choice for someone who is reaching out and asking this question. Debian can do the same, but I suspect it'll be easier to set up and manage on Unraid.
Came here to suggest Unraid as well. There are probably better options, but for a first timer, I can't imagine a better solution. The ability to just add a hard drive to the array with virtually no configuration, as well as adding up to two parity disks, is great. Caching is super easy too.
I've been running my stack on FreeBSD for a while now. I cannot recommend it enough; solid as a rock, no surprises. The BSD license is different from GPL though, so some software cannot be migrated under the same name, but there are drop-in replacements that are usually better anyway.
I did a quick search, looks like Proton has a WireGuard implementation, which is what I use. I use transmission for torrenting, and jellyfin for streaming
I'm currently playing with setting up a home server on an old PC, using Proxmox as the main OS and using LXC and VMs for the services, not fully set up yet (still working on figuring out reverse proxy to make my services available on the internet)
It's neat tho, and there's some helpful scripts for installing various containers and things online.
I would need that because I’m basically starting from zero with learning all this stuff lol. Using Tautulli remotely is a challenge for me right now if that gives any indication of my level of knowledge here
I've seen you mention this a few times, and as mentioned elsewhere in here: set yourself a Tailnet up.
It's fugging brilliant, the docs are written by some very clever people (note, I am best described as a copy / pasta person) and are thorough, and you can use a GitHub or even a Google account for authentication.
Even grabbing a cheapo Raspberry Pi 4 gives you a 1Gb Ethernet port (the Pi 3 only has a 100Mbps RJ-45 port, but would still suffice for lesser needs) for your own WireGuard VPN back home, which is P2P encrypted and can be used as an exit node / subnet router.
ie: if you're on someone else's internet/cellular you can simply hit up your exit node to break out of any nanny filters, stop anyone else nosing at your traffic (obv bar your ISP seeing outgoing requests, unless you have another VPN on your router), and also view and/or manage any devices on your home network/Tailnet by IP address.
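For what it's worth, the exit node / subnet router setup described above boils down to a couple of commands on the Pi. This is a sketch assuming Tailscale is already installed and logged in; the `192.168.1.0/24` subnet is a placeholder for your actual home LAN:

```shell
# Enable IP forwarding so the Pi can route traffic for other devices
# (assumption: a systemd-based distro like Raspberry Pi OS).
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise this machine as an exit node and as a subnet router
# for the home LAN (replace 192.168.1.0/24 with your network).
sudo tailscale up --advertise-exit-node --advertise-routes=192.168.1.0/24
```

After that you still have to approve the exit node and the advertised route in the Tailscale admin console before other devices can use them.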
Hell, I dumped a rpi down at a family members house that is part of the "stack" so I can help out remotely but it seems someone has knocked the aerial out of the HAT again :/
I use Unraid on my NAS. I like it for storage, I don't like it for running services.
It's still running my media stack, but only until I get that moved to a Debian server.
Depending on how involved you want to be and what you want to learn, Unraid might be a good fit for you. It's easy and mostly just works.
Now that Truenas Scale supports just plain Docker (and it's running on Debian) I think it's a great option for an all-in-one media box. I've had my complaints with Truenas over the years, but it's done a really great job at preventing me from shooting myself in the foot when it comes to my data.
I believe raidz expansion is also now in stable (though still better to do a bit of planning for your pool before pulling the trigger).
The raidz stuff, as I understand it, seems pretty compelling. A setup where I can lose any given drive and replace it with no data loss would be very ideal. So I would just run TrueNAS Scale, which would manage my drives, and then install everything else in docker containers or something?
Yes, what you're saying is the idea, and why I went with this setup.
I am running raidz2 on all my arrays, so I can pull any 2 disks from an array and my data is still there.
Currently I have 3 arrays of 8 disks each, organized into a single pool.
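For anyone curious what that layout looks like at the ZFS level (TrueNAS does this through the UI, so this is purely illustrative): a single pool built from multiple raidz2 vdevs, each of 8 disks. Pool and device names here are placeholders — in practice you'd use stable `/dev/disk/by-id` paths:

```shell
# Sketch: one pool ("tank") with two 8-disk raidz2 vdevs.
# Each vdev tolerates the loss of any 2 of its disks.
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13 da14 da15

# Verify the layout and health
zpool status tank
```

Note the trade-off: data is striped across vdevs, so losing an entire vdev (3+ disks in the same raidz2 group) loses the pool.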
You can set up something similar with any RAID system, but so far Truenas has been rock solid and intuitive for me. My gripes are mostly around the (long) journey to "just Docker" for services. The parts of the UI / system that deal with storage seem to have a high focus on reliability / durability.
Latest version of Truenas supports Docker as "apps" where you can input all config through the UI. I prefer editing the config as yaml, so the only "app" I installed is Dockge. It lets me add Docker compose stacks, so I edit the compose files and run everything through Dockge. Useful as most arrs have example Docker compose files.
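For reference, Dockge itself installs as a single compose stack. This is roughly the compose file from its docs (the `/opt/stacks` directory is just the conventional place it looks for your other stacks — treat specifics as something to verify against the current Dockge README):

```yaml
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - "5001:5001"                 # Dockge web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Dockge manage Docker
      - ./data:/app/data
      - /opt/stacks:/opt/stacks     # where your compose stacks live
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
```

Once it's up, you paste or edit compose files in the web UI and Dockge runs them for you, which pairs nicely with the example compose files most arrs publish.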
For hardware I went with just an off-the-shelf desktop motherboard, and a case with 8 hot swap bays. I also have an HBA expansion card connected via PCIe, with two additional 8 bay enclosures on the backplane. You can start with what you need now (just the single case/drive bays), and expand later (raidz expansion makes this easier, since it's now possible to add disks to an existing array).
If I was going to start over, I might consider a proper rack with a disk tray enclosure.
You do want a good amount of RAM for zfs.
For boot, I recommend a mirror of at least two of the cheapest SSDs you can find, each in an enclosure connected via USB. Boot doesn't need to be that fast. Do not use thumb drives unless you're fine with replacing them every few months.
For docker services, I recommend a mirror of two reasonable size SSDs. Jellyfin/Plex in particular benefit from an SSD for loading metadata. And back up the entire services partition (dataset) to your pool regularly. If you don't splurge for a mirror, at least do the backups. (Can you tell who previously had the single SSD running all of his services fail on him?)
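The "back up the services dataset to your pool" step above can be sketched with ZFS snapshots and send/receive. The dataset names (`ssd/services`, `tank/backups/services`) are placeholders for whatever your setup uses, and TrueNAS can schedule the equivalent through its replication tasks UI:

```shell
# Sketch: snapshot the services dataset and copy it into the big pool.
# Run this from cron/systemd on whatever schedule suits you.
SNAP="ssd/services@backup-$(date +%F)"

# Take a point-in-time snapshot (cheap, copy-on-write)
zfs snapshot "$SNAP"

# Send the snapshot into a backup dataset on the pool.
# -F lets the receive side roll back to match the incoming stream.
zfs send "$SNAP" | zfs receive -F tank/backups/services
```

A real setup would use incremental sends (`zfs send -i previous@snap current@snap`) and prune old snapshots, but this is the basic shape of it.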
For torrents I am considering a cache SSD that will simply exist for incoming, incomplete torrents. They will get moved to the pool upon completion. This reduces fragmentation in the pool, since ZFS cannot defragment. Currently I'm using the services mirror SSDs for that purpose. This is really a long-term concern. I've run my pool for almost 10 years now, and most of the time wrote incomplete torrents directly to the pool. Performance still seems fine.
Depends on your experience, hardware, and other stuff.
You could easily use Debian or Ubuntu server and install Docker if all you want is those listed services installed on unRAIDed drives.
You could try something like DietPi (which is what I've used since I started self-hosting), which simplifies a few things and gives some helpful scripts on top of a basic Debian installation. It's a simple setup but still just plain ol' Debian, so easy to set up however you like.
You could use something like CasaOS or ZimaOS which offer Web interfaces and integrate with docker for those with a "no tech" background up to technical users.
Proxmox is an option, but takes a lot of learning of Proxmox-specific stuff and IMO might be a bit overkill for your first server.
Personally, I'd go for whatever suits your tastes, because everything nowadays has some kind of "easy setup" path for Plex/Jelly + Arr. Once it's set up, use it! Then once you need a big change for better hardware or more bespoke software setups, start digging into fancier setups.
I actually want to prioritise the data protection of some sort of RAID setup, and support for torrenting and whatnot would be secondary to that. Really what I’m trying to avoid is installing and setting up my system only to find out that the OS I’ve picked is terrible for torrenting afterwards.
I have a workable setup on consumer Windows 11 right now, so I see the next step as having a dedicated Media Server box which can give me plenty of storage, data protection (right now a drive failure would wipe out half my server), and room for future expansion. Once that’s sorted, then I’ll look into the Arr suite and more advanced torrenting stuff. I want to pick something good for that stuff now, though, so I don’t have a ton of headache down the road
I think there's some deffo better OSes than my suggestions for RAID setups and stuff, bar ProxMox. Maybe it is worth you looking into those options!
That being said, any OS can torrent shit just fine. If it can run Docker or other containers (so 99% of suggestions here) you're set.
Maybe if you can spare the hardware try setting up a RAID on a couple of different ISOs to test em. That'll be the harder, or more permanent, aspect of the setup I think.
I wouldn't use Arch on a server. Everything you install will probably be in a Docker container anyway, so fast updates for system packages aren't important compared to stability. Good choices would be Debian or Fedora Server. I personally use Fedora, but the reason is just that I use Fedora on desktop too, so I know they have really good defaults (they're really fast in adopting new stuff like Wayland, Pipewire, BTRFS with encryption and so on), and it's nice that Cockpit is preinstalled, so I can do a lot of stuff using a WebUI. Debian is probably more stable though; with Fedora there is a chance that something could break (even though it's still pretty small), but Debian really does just always work. The downside is of course very outdated packages, but, as I said, on a server that doesn't matter because Docker containers update independently from the system.
I use Alma because RHEL is designed for enterprise stability. Debian is also a good option.
Just don't use Ubuntu. They do too much invisible fuckery with the system that hinders use on a server. For basic desktop use it's fine, but never for a server.
Edit: but you should be doing most stuff in Docker anyway, so the actual OS isn't going to matter too much. If you're already comfortable with one base (Debian, RHEL) just use that one or a derivative.
I wouldn't use Mint or other desktop-focused OS for a server. Ubuntu's advantage of newer packages gets largely negated by how long Mint takes to release a new major release, so I'd rather use Debian.
I do think Ubuntu is fine for servers too, like almost any other point release distro.
I assume any Linux or *BSD distro will work, especially one with Docker (which is most/all of them?), so you don't have to worry about things being packaged for your distro so long as there's a Docker image. My server is Alpine Linux.
Like others in here, I also set mine up with Debian and docker compose. Since it's an always on server I wanted maximum stability. I don't use unRAID, so not sure about compatibility for that.
Unfortunately not in my setup, but that's just because I don't have the money to upgrade it at the moment and nearly everything I have is stuff I can easily redownload.
Once I can save up for it I will up my storage and get some back ups set up.
I'm sure any server oriented Linux distro will do fine. I use Debian.
I will note, I don't know if you're planning on having remote access (e.g. through Tailscale or a reverse proxy), but if you are, I found it quite a challenge to get Proton to play nice with them.
For newcomers I'd recommend docker and images like gluetun for setting up the VPN. It makes it easy to forward ports (for remote access) while keeping the torrent client behind the VPN.
For a while I split tunneled tailscale through an openvpn .conf file, but recently switched to using qbittorrent in docker with gluetun. Qbittorrent is realistically the only service that needs to be behind a vpn so it works out well
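The gluetun + qBittorrent setup mentioned above is usually done by attaching qBittorrent to gluetun's network namespace, so the torrent client literally has no network path except through the VPN. A hedged sketch, assuming ProtonVPN over WireGuard — the private key and paths are placeholders:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun:latest
    cap_add:
      - NET_ADMIN                      # required for gluetun to manage the tunnel
    environment:
      - VPN_SERVICE_PROVIDER=protonvpn
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=your_key_here   # placeholder — from your Proton account
    ports:
      - "8080:8080"                    # qBittorrent web UI is published via gluetun
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"    # all traffic routes through the VPN container
    environment:
      - WEBUI_PORT=8080
    volumes:
      - ./config/qbittorrent:/config
      - /mnt/media/downloads:/downloads
    restart: unless-stopped
```

The nice side effect: if gluetun's tunnel drops, qBittorrent has no connectivity at all, which acts as a built-in kill switch.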
Just wanted to add that WireGuard is better than OpenVPN in every way, and you should use that except when you want to use it for torrenting. I don't remember the reason, but that's the one time when you should be using OpenVPN. I think it had something to do with OpenVPN supporting TCP and WireGuard being UDP-only, or something like that.
WireGuard uses UDP, which results in better latency and power usage (e.g. on mobile). That doesn't mean WireGuard can't tunnel TCP packets, just like OpenVPN can also tunnel UDP.
interesting. proton has example openvpn configs on their site which was hugely helpful to me. dunno if they have wireguard equivalents, or if those are needed.