I'm thinking about starting a blog about privacy guides, security, self-hosting, and other shenanigans, just for my own pleasure. I have my own server running Unraid and have been looking at self-hosting Ghost as the blog platform. However, I'm wondering how "safe" it is to use one's own homelab for this. If you have any experience with this topic, I'd appreciate some tips.
I understand that it's relatively cheap to get a VPS, and that is always an option, but it is always more fun to self-host on one's own bare metal! :)
I host my sites on a VPS. Better internet connection and uptime, and you can get pretty good VPSes for less than $40/year.
The approach I'd take these days is to use a static site generator like Eleventy, Hugo, etc. These generate static HTML files. You can then store those files on literally any host. You can stick them on a VPS and serve them with any web server. You could upload them to a static file hosting service like BunnyCDN storage, Github Pages, Netlify, Cloudflare Pages, etc. Even Amazon S3 and Cloudfront if you want to pay more for the same thing. Note that Github Pages is extremely feature-poor so I'd usually recommend one of the others.
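The core idea is simple enough to sketch: walk a folder of posts, wrap each one in an HTML template, and write the result to an output folder. Here's a toy Python version (nothing like a real generator such as Hugo or Eleventy, but it shows the kind of output they produce; all paths and naming conventions are made up for the example):

```python
# Toy static-site generator sketch: turn plain-text posts into
# standalone HTML pages you can host anywhere.
from pathlib import Path
import html

SRC = Path("posts")    # one .txt file per post (hypothetical layout)
OUT = Path("public")   # generated site, ready to upload to any host

def build() -> list[Path]:
    OUT.mkdir(exist_ok=True)
    pages = []
    for post in sorted(SRC.glob("*.txt")):
        # Derive a title from the filename: "hello-world" -> "Hello World"
        title = post.stem.replace("-", " ").title()
        body = html.escape(post.read_text())
        page = (f"<!doctype html><title>{title}</title>"
                f"<h1>{title}</h1><pre>{body}</pre>")
        out_file = OUT / f"{post.stem}.html"
        out_file.write_text(page)
        pages.append(out_file)
    return pages
```

Everything in `public/` is plain files, which is exactly why any host (a VPS with any web server, object storage, or a pages service) can serve it.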
This is a bit fuzzy. You seem to recommend a VPS but then suggest a bunch of page-hosting platforms.
If someone is using a static site generator, then they're already running a web server, even if it's on localhost. The friction of moving the web server to the VPS is basically zero, and that way they're not worsening the web's corporate centralization problem.
> I host my sites on a VPS. Better internet connection and uptime, and you can get pretty good VPSes for less than $40/year.

> You seem to recommend a VPS but then suggest a bunch of page-hosting platforms.
Other comments were talking about pros and cons of self-hosting, so I tried to give advice for both approaches. I probably could have been clearer about that in my comment though. I edited the comment a bit to try and clarify.
I have some static sites that I just rsync to my VPS and serve using Nginx. That's definitely a good option.
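For completeness, the serving side of that setup can be a tiny nginx vhost (domain and path are placeholders), with something like `rsync -avz --delete public/ user@vps:/var/www/blog/` doing the deploy:

```nginx
# Minimal nginx vhost for a static site
server {
    listen 80;
    listen [::]:80;
    server_name blog.example.com;
    root /var/www/blog;
    index index.html;
}
```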
If you want to make the site faster with a CDN and don't want the setup to be too hard, you're going to have to use a commercial CDN service.
Self-hosted CDN is doable, but way more effort. Anycast approach is to get your own IPv4 and IPv6 range, and get VPSes in multiple countries through a provider that allows BGP sessions (Vultr and HostHatch support this for example). Then you can have one IP that goes to the server that's closest to the viewer. Easier approach is to use Geo DNS where your DNS server returns a different IP depending on the visitor's location. You can self-host that using something like PowerDNS.
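As a rough sketch of the Geo DNS approach: PowerDNS's GeoIP backend maps visitors to region-specific records via a zone file along these lines (field names are from memory, so double-check against the PowerDNS GeoIP backend docs; all names and IPs are placeholders):

```yaml
# Hypothetical geoip-zones-file for the PowerDNS GeoIP backend
domains:
  - domain: geo.example.com
    ttl: 300
    records:
      geo.example.com:
        - soa: "ns1.example.com hostmaster.example.com 1 7200 3600 86400 300"
        - ns: ns1.example.com
      us.geo.example.com:
        - a: 203.0.113.10   # server near US visitors
      eu.geo.example.com:
        - a: 198.51.100.10  # server near EU visitors
    services:
      # %co expands to the visitor's country code, selecting the record above
      www.geo.example.com: "%co.geo.example.com"
```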
Also, there are a LOT of sales during Black Friday. HostHatch usually has great Black Friday deals. Keep an eye on the Lowendtalk.com forums.
I've got a few VPSes at GreenCloudVPS (in San Jose, California) and HostHatch (in Los Angeles, California) and they're both pretty good. I live near San Jose so I get <10ms ping to those VPSes :)
HostHatch is a bit better (their control panel is more powerful) but you'd have to wait for them to have a sale, whereas GreenCloudVPS usually has good deals year-round.
I've used RackNerd in the past. They're good too, although I prefer GreenCloud and HostHatch.
I self-host everything from my home network including my website. I like to keep all my data local. 😁
It's a simple setup: just a static site made with Lume, and served with Caddy. The attack surface is pretty small since it's just HTML and CSS files (no JavaScript).
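For reference, the entire Caddy config for a setup like that can be a three-line Caddyfile (domain and path are placeholders); Caddy obtains and renews the TLS certificate automatically:

```
blog.example.com {
    root * /srv/blog
    file_server
}
```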
I wonder sometimes if the advice against pointing DNS records to your own residential IP amounts to a big scare. Like you say, if it's just a static page served by an up-to-date and minimal web server, there's less leverage for an attacker to abuse.
I've found that ISPs too often block ports 80 and 443. Did you luck out with a decent one?
> I wonder sometimes if the advice against pointing DNS records to your own residential IP amounts to a big scare. Like you say, if it's just a static page served by an up-to-date and minimal web server, there's less leverage for an attacker to abuse.
That advice is a bit old-fashioned in my opinion. There are many tools nowadays that will get you a very secure setup without much effort:
Using a reverse proxy with automatic SSL certs like Caddy.
Sandboxing services with Podman.
Mitigating DoS attacks by using a WAF such as Bunkerweb.
And of course, besides all these tools, the simplest way of securing public services is to keep them updated.
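To make the first two points concrete, here's a minimal sketch, assuming a blog engine listening on port 2368 (Ghost's default; names and ports are otherwise placeholders). Run the container with Podman bound to localhost only, e.g. `podman run -d --name blog -p 127.0.0.1:2368:2368 ghost`, then let Caddy terminate TLS in front of it:

```
# Caddyfile: automatic HTTPS, proxying to the sandboxed container
blog.example.com {
    reverse_proxy 127.0.0.1:2368
}
```

Since the container only listens on 127.0.0.1, the app is reachable exclusively through the proxy.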
> I've found that ISPs too often block ports 80 and 443. Did you luck out with a decent one?
Rogers has been my ISP for several years, and I've had no issue receiving HTTP/S traffic. The only issue, as with most providers, is that they block port 25 (SMTP). It's the only thing keeping me from self-hosting my own email server; for that I have to rely on a VPS.
I have hosted a WordPress site on my Unraid box before, but ended up moving it to a VPS instead, primarily because a VPS is just going to have more uptime, since I tinker with my homelab too often. So any service that I expect other people to use, I usually end up moving to a VPS (mostly wikis for different things). The one exception is anything related to media delivery (Plex, Jellyfin, the *arr stack), because I don't want to make that as publicly accessible, and it needs close integration with the storage array in Unraid.
I have a Hugo site hosted on GitHub and I use Cloudflare Pages to put it on my custom domain. You don't have to use GitHub to host the repo. Except for the cost of the domain, it's free.
I've been self-hosting my blog for 21 years, if you can believe it; much of that time it's been on a server in my house. I've hosted it on everything from a dusty old Pentium 200MHz with 16MB of RAM (that's MB, not GB!) to a shared web host (Webfaction), to a proper VPS (Hetzner), to a Raspberry Pi Kubernetes cluster, which is where it is now.
The site is currently running Python/Django on a few Kubernetes pods on a few Raspberry Pi 4's, so the total power consumption is tiny, and since they're fanless, it's all very quiet in my office upstairs.
In terms of safety, there's always a risk, since you're opening a port to the world for someone to talk directly to software running in your home. You can mitigate that by (a) keeping your software up to date, and (b) if you're maintaining the software yourself (like I am), keeping on top of any dependencies that may have known exploits. Like, don't just stand up an instance of Wordpress and forget about it. That shit's going to get compromised :-). You should also isolate the server from the rest of your LAN if you can. Docker sort of does this for you (though I hear it can be broken out of), but a proper demarcation between your laptop and a server on the open web is a good idea.
The safest option is probably to use a static site generator like Hugo, since then your attack surface is limited to whatever you're using to serve the static files (probably Nginx), whereas if you're running a full-blown application that does publishing etc., that's a lot of stuff that could have holes you don't know about. You may also want to set up something like Cloudflare in front of your site to prevent a DoS attack or something from crippling your home internet, though that may be overkill.
But yeah, the bandwidth requirements of running a blog are negligible, and the experience of running your own stuff on your own hardware in your own house is pretty great. I recommend it :-)
Yes, I host everything public with Cloudflare Tunnels. Anything heavier is behind a VPN with DDNS, on an invite basis for friends and fam. For the public stuff it's hassle-free HTTPS: no reverse proxy, no firewall, no nonsense.
Could someone please point me to a self-hosting beginner tutorial? I have pretty good general ICT knowledge, but when it comes to self-hosting, my knowledge ends...
I'd say it boils down to what you see yourself hosting: what do you need/want? There are many great YouTube content creators out there documenting their experiences, tips, and guides: HardwareHaven, Raid Owl, Jeff Geerling, Christian Lempa, TechnoTim, and Wolfgang, to mention a few.
JupiterBroadcasting has a wide variety of podcasts dedicated to both self-hosting and Linux stuff, if that should pique your interest.
I use nginx as a reverse proxy with crowdsec. The backends are nginx and mariadb. Everything is running on Debian VMs or LXCs with apparmor profiles and it's all isolated to an "untrusted" VLAN.
It's obviously still "safer" to have someone else host your stuff, like a VPS or GitHub Pages, etc., but I enjoy self-hosting and I feel like I've mitigated most of the risk.
I self host a Wordpress site that mostly acts as my design portfolio.
It’s hosted in a Debian VM on a restricted VLAN with caddy handling SSL certificates. Uptime isn’t a huge concern for me since it’s nothing mission critical. It all sits behind a free Cloudflare proxy which allows for my home IP to be hidden.
I think as far as safety goes, I’m comfortable with this setup.
I self-host my own website, blog, and a dozen privacy-friendly alternatives and front-ends to various web sites. I use a dedicated remote server for this, so nothing is on my own bare metal. netcup.de has a variety of VPS options that give you good hardware resources for your money: a VPS with 8 GB of RAM, a 4-core CPU, a 256 GB disk, and 2.5 Gbps network throughput for $6.33 a month (not including the initial setup cost). Compared to what Vultr and Akamai offer for the same price, this is a steal. The company is based in Germany, so you have to convert the euro prices to US dollars if you're in the US.

The only catch with netcup.de is that your options for server location are limited: they have one US location and the rest are in Europe. That's not a dealbreaker for me, though, and they guarantee 99% uptime. I'm pleased with their service. If you just want to host your personal services on a longer-term basis and don't care about scaling and deployment turnover, netcup is great; Akamai, Digital Ocean, and Vultr are more for short-term, disposable, scalable VPSes or web apps, and they have excellent data center availability.
Yes: sntx.space; check out the source button in the bottom right corner.
I'm building/running it the homebrewed, unconventional route. That is, I have just a bit of HTML/CSS and other files I want to serve; I use Nix to build that into a usable website and serve it on one of my homelab machines via nginx. That is made available through a VPS running HAProxy and its public IP. A Nebula overlay network (VPN) connects the two machines.
I am using a very generic ISP and they tend to have a dim view of running servers on their network.
I did have an RPi running SSH and a Mumble server directly connected to the internet years ago, but after a few years I realized that I was bringing needless attention to my network when I found my server on Shodan.
So many suggestions here but I thought I'd chime in because I have a setup very similar to what you suggested and I found a very easy way of hosting it securely. I am using Unraid on a system in my house. I have my web service running in a docker container. I exposed it using a cloudflare tunnel. There is an Unraid plugin for cloudflare tunnels that takes out a lot of the configuration work involved in getting it running locally. You just have to also set up a corresponding endpoint on Cloudflare's website and have a domain name registered with them for you to link to it.
The way it works, then, is that when someone requests your domain (or subdomain) in their browser, Cloudflare gets the request and redirects the traffic to the cloudflare tunnel client app that you set up on your computer. That app on your machine then redirects the traffic to the other container that is hosting your web service, and establishes bidirectional communication that way.
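Outside of Unraid, the same thing can be set up with the bare cloudflared client: after `cloudflared tunnel login` and `cloudflared tunnel create`, a config file along these lines routes the tunnel to your local service (names and port are placeholders, and the credentials file is normally named after the tunnel's UUID):

```yaml
# ~/.cloudflared/config.yml (sketch)
tunnel: my-blog-tunnel
credentials-file: /home/user/.cloudflared/my-blog-tunnel.json
ingress:
  - hostname: blog.example.com
    service: http://localhost:8080   # your local web service
  - service: http_status:404         # catch-all for unmatched hostnames
```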
The benefits to this system are:
Relatively easy setup, especially if you want to expose more services in the future (you'll need to run a separate cloudflare container for each service exposed though)
No need to open ports in your router or firewall on your home network. Cloudflare just knows how to communicate between its server and its client app on your computer (I think you have to set up an access token so it is secure).
None of your users ever learn your home IP address because once they connect at Cloudflare's server, they don't get any more knowledge than that about what's on the other side.
It's free (not including the cost of registering your domain)
You don't have to worry about changing anything if your ISP randomly changes your IP address. Hell, you could even move to a new house and take your computer with you and you wouldn't have to reconfigure anything.
Downsides:
You have to trust that Cloudflare is not scraping all the traffic going through the tunnel.
Some people have a moral issue with giving Cloudflare more responsibility for hosting "the Internet". We already rely on their infrastructure heavily for large sections of the Internet. If they ever become malicious or compromised, there is a lot to lose as a society.
I believe you can use WireGuard and a rented VPS to recreate this setup without Cloudflare, but it will require a lot more knowledge to set up, and it has more points of failure. It would also cost more: even though WireGuard is FOSS, a VPS will run you at least a few bucks per month.
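For anyone curious, the WireGuard side of that DIY setup is small. A sketch of the VPS config, with placeholder keys and a made-up 10.0.0.0/24 tunnel network (the home machine gets the mirror-image config):

```ini
# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address    = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the server at home
PublicKey  = <home-machine-public-key>
AllowedIPs = 10.0.0.2/32
```

A web server on the VPS then proxies public traffic to 10.0.0.2, so no ports are opened on the home network.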
I currently have 2 services exposed using Cloudflare tunnels on my Unraid system at home. They've been running for over a year now with 0 interruption.
The biggest problems will be bandwidth and latency from the Internet to your lab. I would use dedicated hardware and a dedicated subnet for it. Security-wise, if you can make your site 100% static, it will help a lot. I'm personally set on an AWS S3 + Cloudflare combo, with the static site generator running in my lab. Yes, it is not really "self-hosted", but it's a worry-free solution for me.
I self-host a Grav site, among other things, on a 15 euro VPS.
Also, I started with Ghost, but the fact that they locked the newsletter side of the business to a single provider, and were unwilling to rework it at the time, made me walk away. Yes, I know you could go into the code and add others, but that was a complicated setup in itself. Grav works perfectly for me.
I use a VPS and generate static sites using Hugo. Works fine.
I could host it in my network, but I don't see a point, and I'd really rather not have a power outage or loss of internet break my site (much more likely at home than at a datacenter). I host pretty much everything else within my network though.
Have some stuff on a VPS, some stuff hosted as static pages at Cloudflare, some stuff hosted at home too.
Depends on if 100% uptime is required, if they're just serving static content, or if they're in some way related to another service I'm running (I have a couple of BBSes, and the web pages that host the clients and VMs that host the clients run locally).
Though, at this point, anything I'm NOT hosting at home is kinda a "legacy" deployment, and probably will be brought in-house at some point in the future or converted to static-only and put on Cloudflare if there's some reason I can't/don't want to host it at home.
There's nothing wrong with just using a VPS for this. Despite what some mouth-frothing hobbyists will tell you, it's still well within the realm of self-hosting. There's just no meaningful difference between hosting a blog on your UnRAID server vs. a VPS.
If you really want to be some kind of purist and only use your own hardware, then you could configure a web server that can reverse proxy on your UnRAID server and forward port 443 in your router to your UnRAID box, but you’d have to change your UnRAID access port to something else. You’d want to keep this web server docker container up to date, and preferably see if you can implement some kind of WAF with it or in front of it. You’d then forward the requests from this web server to your ghost container.
A better idea would be to use a different piece of hardware for this web server reverse proxy, like a raspberry pi or something, and put it on a different subnet in your house. Forward 443 to that, then proxy the connection back to UnRAID, in whatever port you bind the ghost container to. Then you can tighten access that raspberry pi has. Or hell, host the blog on that hardware as well and don’t allow any traffic to your main LAN.
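The reverse-proxy piece of that could look like the nginx sketch below, assuming the Ghost container is bound to port 2368 (its default) and that you already have a certificate; the cert paths and domain are placeholders:

```nginx
# nginx vhost on the proxy host, forwarding public HTTPS to the Ghost container
server {
    listen 443 ssl;
    server_name blog.example.com;

    ssl_certificate     /etc/ssl/blog.example.com.pem;
    ssl_certificate_key /etc/ssl/blog.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:2368;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```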
There are half a dozen better ways to do this, but they all require you to rely on a third party service to some extent.
Yeah, it depends on your website's bandwidth/uptime requirements. I use a VPS running nginx and WireGuard, and tunnel into it from a VM in my homelab, so no ports are open on my home firewall. nginx drops all random traffic at the VPS that isn't destined for a preconfigured service; expected traffic is forwarded through the WireGuard tunnel to the right VMs, segregated from the rest of my home network by VLANs. I host a bit of web content where I'm not really concerned with bandwidth or uptime, as well as Home Assistant, a file browser, a few dedicated game servers, etc.
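The "drop all random traffic" part can be a few lines of nginx, for the curious. A catch-all default vhost like this (a sketch, not my exact config) closes any connection that doesn't match a configured `server_name`:

```nginx
# Catch-all vhost: requests for unknown hostnames never reach a backend
server {
    listen 80  default_server;
    listen 443 ssl default_server;
    server_name _;
    ssl_reject_handshake on;  # refuse TLS for unknown SNI (nginx >= 1.19.4)
    return 444;               # nginx-specific: close without any HTTP response
}
```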
I self-hosted many websites for about 20 years, but sadly I had to take it all down this year, as I'm in the process of moving to another state. I'm also going to really miss my 1 Gbps unlimited fiber connection.
I hosted my websites from Windows Server 2003 and 2008, virtual machines, Linux, and other setups. It was fun times. I had very good uptime using 2 servers and UPS battery backups.