I know, I know, clickbaity title, but in a way it did. It also brought about the situation in the first place, but I'm going to deliberately ignore that. Quick recap:
I came home at 3pm from the city and my internet at home didn't work.
I checked multiple devices; phones worked once off wifi, so I figured I needed to restart the router.
I log into the router and it responds totally normally, but my local network doesn't. (It's always DNS, I know.)
I check the router log and see 100s of login attempts over the past couple of days.
I panic and pull the plug, try to get into my server by hooking up an old monitor; it works, but there are many errors about DNS.
Wife googles with her phone; it seems I had HTTPS login from outside enabled and someone found the correct port. It's disabled now.
Obviously the local network is still down, so I replug everything and ssh into the server, which runs pihole as DNS.
pihole won't start DNS, whatever I do.
I use history and find I had "chmod 700"ed the dnsmasq directory instead of putting it in a docker volume...
I check pihole.log: nothing.
I check the FTL log: there's the issue.
I return the directory to 777 and everything is hunky dory again.
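For anyone wondering why 700 broke it: the mode bits decide who besides the owner can enter the directory. A minimal sketch, using a throwaway /tmp directory rather than the real pihole path:

```shell
# Throwaway demo dir standing in for the dnsmasq config directory.
mkdir -p /tmp/demo-dnsmasq.d

# 700 = rwx for the owner only; any non-owner process (like the
# non-root user FTL runs as inside the container) is locked out.
chmod 700 /tmp/demo-dnsmasq.d
stat -c '%a' /tmp/demo-dnsmasq.d   # prints 700

# 755 adds read+execute for group/others, which is enough for a
# non-owner process to list and read the configs (777 also works,
# it just additionally grants write).
chmod 755 /tmp/demo-dnsmasq.d
stat -c '%a' /tmp/demo-dnsmasq.d   # prints 755
```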
Now I feel very stupid, but I found a very dangerous mistake because my lan failed due to a less dangerous one, so I'll take this as a win.
Thanks for reading and have a good day! I hope this helps someone some day.
If you have everything on docker compose, migrating to another host is pretty easy. I could probably migrate my 11 stacks with 36 containers in 2 to 3 hrs.
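The move itself is mostly just copying the compose files plus their bind-mounted data and running `docker compose up -d` per stack on the new host. A toy local walkthrough with made-up stand-in paths (on a real migration the second half happens on the new machine, e.g. over rsync/ssh):

```shell
# Fake "old host" with one stack (paths and stack name are made up).
mkdir -p /tmp/oldhost/stacks/pihole
printf 'services:\n  pihole:\n    image: pihole/pihole:latest\n' \
  > /tmp/oldhost/stacks/pihole/compose.yaml

# Archive everything: compose files and any bind-mounted data dirs.
tar -C /tmp/oldhost -cf /tmp/stacks.tar stacks

# "New host": unpack, then you'd run `docker compose up -d` per stack.
mkdir -p /tmp/newhost
tar -C /tmp/newhost -xf /tmp/stacks.tar
cat /tmp/newhost/stacks/pihole/compose.yaml
```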
If they had access to a machine, the first thing an attacker does is install some kind of rootkit so they can get access again later. This could be as small as modifying an existing binary to do things it isn't supposed to do.
The owner was root and still is. I changed from 777 to 700, which broke everything. Sorry if that wasn't clear. I will switch to a docker volume to avoid having this crap in my home folder in the future.
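In case it helps anyone, a named volume in compose looks roughly like this (service and volume names are just examples, not my exact setup). Docker then owns the directory and its permissions, so a stray chmod in the home folder can't touch it:

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    volumes:
      - etc_pihole:/etc/pihole        # named volumes, managed by docker,
      - etc_dnsmasq:/etc/dnsmasq.d    # instead of ./some-dir bind mounts

volumes:
  etc_pihole:
  etc_dnsmasq:
```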
All you have to do to avoid this is not open any ports except one for something like WireGuard, access your network externally only through it, and you will never have this problem.
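For context, that single-port setup is just one UDP port forwarded to a WireGuard server. A sketch of the server side (keys, addresses, and port are placeholders, not a working config):

```ini
# /etc/wireguard/wg0.conf -- bring up with `wg-quick up wg0`
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820            # the one port you forward on the router
PrivateKey = <server-private-key>

[Peer]
# your phone / laptop
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```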
Exactly. It wasn't on purpose either. I thought there was an additional layer of security, gullible as I was 5 yrs ago. They made it seem like there was.
One of my home servers was popped once, they stuck a new MOTD on there to let me know how foolish I was and I haven't made that mistake since. So... yay greyhat?
That's why I love Tailscale: nothing is open to the internet, all my shit is local lan inside Tailscale. Even better, I don't have to bother with certificates and a reverse proxy.
Reverse proxy isn't that hard tbh. Btw I have a vpn and my lan isn't open to the web. The router vendor made it look like there was an additional layer of security.
Not sure how a reverse proxy is avoided this way: do you enter port numbers for your services when you access them, or have one service per machine?
I have a few publicly accessible services, and a bunch of private services, but everything is reverse proxy'd; I find it very convenient, as for example I can go to https://wap.mydomain.net for my access point admin page, or photos.mydomain.net for my Immich instance. I have a reverse proxy on my VPS for public services, and another one on my lan for private services, with WireGuard between VPS, LAN, and my personal devices. Possibly have huge security holes of course...
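For the curious, the per-hostname mapping on the LAN side can be as small as this (Caddy syntax; hostnames, upstream addresses, and ports are examples, not my actual hosts):

```caddyfile
wap.mydomain.net {
    reverse_proxy 192.168.1.2:80       # access point admin UI
}

photos.mydomain.net {
    reverse_proxy 192.168.1.10:2283    # Immich instance
}
```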
Yep, correct: http://hostname:port for each application, all running on the same host in docker. The only catch is that any device that wants to connect to an app needs the Tailscale client, and that takes over the VPN slot. That's why they offer exit nodes with Mullvad and also DNS privacy resolvers like NextDNS.