Does "Selfhosted" mean you actually have a server at home?
I'm trying to better understand hosting a Lemmy instance. Lurking discussions, it seems like some people are hosting on the cloud or a VPS. My understanding is that it's better to future-proof by running your own home server so that you have the data and the most control over hardware, software, etc., and that by hosting an instance on a cloud provider or VPS you're offloading the data/information to a third party.
Are people actually running self-hosted servers from home? Do you have any recommended guides on running a Lemmy instance?
Well technically a "server" is a machine dedicated to "serving" something, like a service or website or whatever. A regular desktop can be a server, it's just not built as well as a "real" server.
Yeah I'd stay away from Mac too... but seriously most modern laptops can disable any sleep/hibernation on lid close
My go-to lately is the Lenovo Tiny. You can pick them up super cheap with 6-12 month warranties, throw in some extra RAM and a new drive, and I haven't had one fail on me yet.
Proxmox is like ESXi: it lets you set up virtual machines. So you can fire up a virtual Linux machine and allocate it, say, 2GB of RAM and limit it to 2 cores of the CPU, or give it the whole lot, depending on what you need to do.
Having them in a cluster lets you move virtual machines between the physical hosts and keep complete copies, so if one host goes down the next can just start the VM up.
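If you ever want to script that instead of clicking through the web UI, Proxmox exposes an HTTP API. Here's a minimal sketch using the third-party proxmoxer Python library; the hostname, credentials, node name and VM ID below are placeholders, and the exact parameters may need tweaking for your own cluster:

```python
# Rough sketch: create a small VM (2 cores, 2GB RAM) via the Proxmox API.
# Requires "pip install proxmoxer requests". All names/IDs here are made up.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI(
    "pve1.home.lan",        # hypothetical Proxmox host
    user="root@pam",
    password="changeme",
    verify_ssl=False,
)

proxmox.nodes("pve1").qemu.post(
    vmid=105,               # any free VM ID on the cluster
    name="test-linux-vm",
    cores=2,                # limit to 2 CPU cores
    memory=2048,            # 2GB of RAM, in MB
)
```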
It is a little overkill, I'm probably only using about 20% of its resources but it's all for a good cause. I'm currently unable to work due to kidney failure but I'm working towards a transplant. If I do get a transplant and can return to work, being able to say "well this is my home setup and the various things I know how to do" looks a lot better than "I sat on my ass for the last 4 years so I'm very rusty"
This whole setup cost me about $1000 AUD and uses 65-70W on average.
Docker/Kubernetes and VMs are similar in that they're all virtualisation, but the similarity kinda ends there. Love them or hate them, each has its own important role in IT infrastructure.
First off, Docker itself needs a host operating system to run. Secondly, Docker runs containers. Each image is built on a cut-down version of an operating system, generally to perform one specific task or run one specific application. The environment is preconfigured to work exactly as intended, so generally speaking you don't get the whole "but it works on my machine" problem.
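To make the container idea concrete, here's a tiny sketch with the Docker SDK for Python (a plain `docker run` on the command line does the same thing); the image and container name are just examples:

```python
# Minimal example: pull and run a preconfigured image with the Docker SDK
# for Python ("pip install docker"). The image ships with everything nginx
# needs, so there's no dependency setup on the host beyond Docker itself.
import docker

client = docker.from_env()   # connect to the local Docker daemon

container = client.containers.run("nginx:alpine", detach=True, name="demo-nginx")
print(container.short_id, container.status)

# Tear it down again when you're done playing.
container.stop()
container.remove()
```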
Kubernetes I'm not the most qualified to speak to, but pretty much someone said "OK, Docker is great, but we want redundancy, scalability, etc." and made Kubernetes.
A VM is a full virtual machine. You can give it virtual hard disks, virtual network cards, etc. You then install a full operating system on it; that could be Windows or Linux or whatever you need.
From there you can install Docker if that's what you want, or install specific apps directly. That's the first difference: if you install an app yourself rather than running it as a Docker container, you need to make sure all the prerequisites are met, compatibility is correct, etc. It's up to you to make sure your system is right for the software.
Another major difference is that Docker containers all appear on the network as coming from whatever the host machine's IP is.
Whereas the network sees each VM as its own device, giving each its own IP (if using DHCP) and allowing things like VLANs.
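A quick sketch of that networking difference, again with the Docker SDK for Python: the container below is only reachable through the host's IP, on whatever port you choose to publish (image, name and port numbers are just examples):

```python
# Publish container port 80 on host port 8080 -- clients on the LAN see the
# service at http://<host-ip>:8080, not at a separate IP for the container.
import docker

client = docker.from_env()
client.containers.run(
    "nginx:alpine",
    detach=True,
    name="web-on-host-ip",
    ports={"80/tcp": 8080},
)
```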
As for my setup, I have 3 VMs running Docker, each with 20-30 containers; 3 VMs running AdGuard DNS; 1 VM acting as a Tailscale entry point; then a few application-specific VMs. It's handy just being able to fire up a blank Ubuntu instance to play with some software, and if anything goes wrong, just delete the whole machine and start fresh.
Then for storage behind it all, I have a QNAP TS-453D with 4x 8TB drives.
Then outside my home, I have two Oracle-hosted VMs: one hosting about 22 websites and all the stuff they need, and one acting as a tunnel into my home services since I'm behind CGNAT. And then there's another physical server located in the local data centre running email for a few small businesses and myself.
Only if you've got it cranking all day. I've got a couple of these Tiny systems (mine are the Micro equivalent, which is the same thing) that are silent when idle and nearly silent when running at less than a load average of 5. It's only if I spin up a heavy, CPU-bound process that their single fan spins fast enough to be noticeable.
So don't use one as a mining rig, but if you want something that runs x64 workloads at 9-20 watts continuously, they're pretty good.
The Right Way (tm) is to deploy the application with high availability: every component should have more than one server serving it. Then you can take servers offline for a reboot sequentially, so that there's always a live one serving users.
This is taken to an extreme in cloud best practice, where we don't even update servers. We update the package versions we want in some source code file. From that we build a new OS image that contains the updated packages along with the application the server will run, ready to boot. Then, in some sequence, we kill the server VMs running the old image and create new ones running the new image. Finally, the old VMs are deleted.
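In pseudocode-ish Python, the replacement loop is something like the toy sketch below; the VM "API" is entirely made up (it just prints) and stands in for whatever your cloud provider's SDK actually offers:

```python
# Toy, runnable sketch of a rolling "replace, don't patch" update.
# boot_vm / wait_until_healthy / drain / delete_vm are fake stand-ins.

def boot_vm(image: str) -> str:
    print(f"booting new VM from image {image}")
    return f"vm({image})"

def wait_until_healthy(vm: str) -> None:
    print(f"waiting for {vm} to pass health checks")

def drain(vm: str) -> None:
    print(f"draining traffic away from {vm}")

def delete_vm(vm: str) -> None:
    print(f"deleting {vm}")

def rolling_replace(old_vms: list[str], new_image: str) -> None:
    # Replace one VM at a time so at least one copy is always serving users.
    for old in old_vms:
        new = boot_vm(new_image)
        wait_until_healthy(new)   # only shift load once the new VM is ready
        drain(old)
        delete_vm(old)

rolling_replace(["vm-a", "vm-b"], new_image="app-image-v2")
```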
Actually, I am lazy with updates on the "bare metal" Debian/Proxmox. It does nothing other than host several VMs. Even the hard disks belong to a VM that provides all the file shares.
First, you need a use-case. It's worthless to have a server just for the sake of it.
For example, you may want to replace Google Photos with a local copy of your photos.
Or you may want to share your movies across the home network. Or be able to access important documents from any device at home, without hosting them on any kind of cloud storage.
Or run a bunch of automation at home.
TL;DR: choose a service you use and would like to replace with something more private.
Proxmox absolutely changed the game for me learning Linux. Spinning up LXC containers in seconds to test out applications or simply to learn different Linux OSs without worrying about the install process each time has probably saved me days of my life at this point. Plus being able to use and learn ZFS natively is really cool.
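For what it's worth, those LXC containers can be created through the same Proxmox API as VMs; here's a rough proxmoxer sketch, with the hostname, storage and template names all as placeholders (check what's actually available on your node):

```python
# Rough sketch: create an LXC container via the Proxmox API with proxmoxer.
# Every name, ID and template string below is a placeholder.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve1.home.lan", user="root@pam",
                     password="changeme", verify_ssl=False)

proxmox.nodes("pve1").lxc.post(
    vmid=120,
    hostname="testbox",
    ostemplate="local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst",
    storage="local-lvm",    # where the container's root disk lives
    memory=1024,
    cores=1,
)
```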
I've been using ESXi (the free copy) for years. Same situation. Being able to spin up virtual machines or take a snapshot before a major change has been priceless. I started off with smaller NUC computers and have upgraded to full-fledged desktops.
Well, there are specific hardware configurations that are designed to be servers. They probably don't have graphics cards but do have multiple CPUs, and are often configured to run many active processes at the same time.
But for the most part, "server" is more related to the OS configuration. No GUI, strip out all the software you don't need, like browsers, and leave just the software you need to do the job that the server is going to do.
As for updates, this also becomes much simpler, since you don't have a lot of the crap that has vulnerabilities. I helped manage a computer department with about 30 servers, many of which were running Windows (gag!). One of the jobs was to go through the huge list of Microsoft patches every few months. The vast majority of them "require a user to browse to a certain website" in order to be exploited. Since we simply didn't have anyone using browsers on the servers, we could ignore those patches until we did a big "catch up" patch once a year or so.
Our Unix servers, HP-UX or AIX, simply didn't have the same kind of patches coming out. Some of them ran for years without a reboot.