I have two machines running Docker: A (powerful) and B (a tiny VPS).
All my services are hosted at home on machine A.
All DNS records point to A.
I want to point them to B and implement split-horizon DNS in my local network so I can still access A directly. Ideally, A is no longer reachable from outside without going through B.
How can I forward requests on machine B to A over a tunnel like WireGuard without losing the original source IP addresses?
I tried to get this working by creating two WireGuard containers.
I think I only need iptables rules on the WireGuard container on A, but I am not sure.
I am a bit confused about the iptables rules needed to get WireGuard to properly forward the requests through the tunnel.
What are your solutions for such a setup?
Is there a better way to do this?
I would also be glad for some keywords/existing solutions.
Additional info:
Ideally I would like to stay within Docker.
Split-horizon DNS is no problem.
I have a static IPv4 and IPv6 address on both machines.
I also have spare IPv6 subnets that I can use for intermediate routing.
Tailscale maybe?
They have a mode where you can configure site-to-site links (subnet routers); you could route the Docker networks over it.
https://tailscale.com/kb/1019/subnets
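A rough sketch of the subnet-router setup (untested; the subnet is an example, and the advertised route also has to be approved in the Tailscale admin console):

```
# on machine A: enable forwarding and advertise the docker network
sysctl -w net.ipv4.ip_forward=1
tailscale up --advertise-routes=172.18.0.0/16

# on machine B: accept routes advertised by other nodes
tailscale up --accept-routes
```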
You could try using an SSH reverse tunnel and forward the port to the VPS.
Another way is to set up WireGuard on the VPS, connect the powerful machine to it, and keep it connected at all times. (This isn't really a good option, since then all traffic is moved through the VPS.)
There is also ngrok, I think that's the name.
In general I think an SSH reverse port forward would be a decent way, and then you can use a reverse proxy on the VPS like nginx or Caddy (you need one that works on the host network).
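A sketch of what that could look like (host names and ports are placeholders). Note that with a plain reverse forward, connections arrive at A looking like they come from the tunnel, so the original client IP is not preserved:

```
# on machine A: expose local port 443 as port 8443 on the VPS.
# "GatewayPorts yes" must be set in the VPS's sshd_config for the
# forwarded port to listen on all interfaces.
ssh -N -R 0.0.0.0:8443:localhost:443 user@vps.example.com

# autossh can keep the tunnel alive across drops:
autossh -M 0 -N -R 0.0.0.0:8443:localhost:443 user@vps.example.com
```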
I was hoping for a solution which allows other protocols, not just HTTP and HTTPS. I will take a closer look at ngrok.
An SSH tunnel could work, I didn't think of that. I will have to test how this interacts with Docker, but I think it must be set up directly on the host.
I don't think the SSH tunnel limitation applies, since the service will still be reachable from A's local network. Speed might be a concern, but I will have to test.
You can do this with a site-to-site WireGuard VPN. You will need to set up the proper routing rules on each termination. On the internet-facing side you will want to do DNAT (modifies destination, keeps source) to redirect the incoming traffic to your non-internet-facing side through the tunnel. Then on the non-internet-facing side you need to set up routing rules to ensure all traffic headed for public IPs traverses the tunnel. Then back on the internet-facing side you need to SNAT (modify source, keep destination) the traffic coming through the tunnel headed for the internet. Hopefully this helps. People saying this goes against standards are not really correct, as this is a great application for NAT.
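As a rough sketch of the internet-facing side with iptables (interface names, the tunnel address 10.0.0.2 of the non-internet-facing host, and port 443 are all placeholders):

```
# enable forwarding on the internet-facing host
sysctl -w net.ipv4.ip_forward=1

# DNAT: rewrite the destination of incoming traffic to the tunnel IP of
# the non-internet-facing host; the client source IP is left untouched
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
  -j DNAT --to-destination 10.0.0.2

# SNAT/masquerade: traffic coming back through the tunnel and heading
# out to the internet leaves with this host's public address
iptables -t nat -A POSTROUTING -s 10.0.0.2 -o eth0 -j MASQUERADE

# let the forwarded traffic through
iptables -A FORWARD -i eth0 -o wg0 -j ACCEPT
iptables -A FORWARD -i wg0 -o eth0 -j ACCEPT
```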
Awesome! I'm glad I could help. Good luck! I've been spending quite a bit of time figuring out how to get this to run alongside other services. I think I just need to add an extra iptables rule to ignore port 443 so HTTPS requests will go through Traefik first.
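Something like this is roughly what I have in mind (untested; assumes the DNAT rules sit in the nat PREROUTING chain on eth0):

```
# insert ahead of the DNAT rules so port 443 is never rewritten and
# Traefik gets to handle those requests first
iptables -t nat -I PREROUTING 1 -i eth0 -p tcp --dport 443 -j RETURN
```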
Keeping the source IP intact means you'll have trouble routing the return traffic back through host B.
Basically, host A won't be able to access the internet without going through B, which might not be what you want.
Here's how it works:
On host A:

- add a /32 route to host B's public IP via your local ISP gateway (e.g. 192.168.1.1)
- set up a WireGuard tunnel between A and B:
  - host A: 172.17.0.1/30
  - host B: 172.17.0.2/30
- add a default route via host B's WireGuard IP

On host B:

- set up WireGuard (same config)
- add port-forwarding (DNAT) rules to the firewall so incoming requests on the ports you need are redirected to 172.17.0.1
- add an SNAT/masquerade rule so all outbound requests from 172.17.0.1 are NATed to host B's public address.
This should do what you need. A rough sketch of host A's side is below.
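Assuming wg-quick and using B_PUBLIC_IP as a placeholder for host B's real address (untested, adapt to your setup):

```
# /etc/wireguard/wg0.conf on host A
[Interface]
Address = 172.17.0.1/30
PrivateKey = <host A private key>
# manage routes by hand instead of letting wg-quick install them
Table = off
# pin host B's public IP to the local ISP gateway so the tunnel's own
# packets don't get routed into the tunnel, then send everything else
# through host B
PostUp = ip route add B_PUBLIC_IP/32 via 192.168.1.1
PostUp = ip route replace default via 172.17.0.2 dev wg0
PostDown = ip route del B_PUBLIC_IP/32 via 192.168.1.1

[Peer]
PublicKey = <host B public key>
Endpoint = B_PUBLIC_IP:51820
# 0.0.0.0/0 so replies to arbitrary (preserved) client source addresses
# are allowed back into the tunnel
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```

On host B the DNAT/SNAT rules then target 172.17.0.1, as described above.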
However, if I may comment: I'd say you should give up on carrying the source IP address down to host A. The setup I described is clunky and can fail in many ways. Also, I can see no benefit to doing that besides having "pretty logs" on host A. If you really need good logs, I'd suggest setting up a good reverse proxy on host B and forwarding its logs to a collector on host A.
It might help if you described what you're actually trying to forward ... and why the source IP matters.
ZeroTier would be my recommendation personally (it does what Tailscale does, but it's been doing it longer and you can use whatever IP ranges you need vs. the address range Tailscale assigns from).
Allow me to cross-post my recent post about my own infrastructure, which has pretty much exactly this established: lemmy.dbzer0.com/post/13552101.
At the homelab (A in your case), I have tailscale running on the host and caddy in docker exposing port 8443 (though the port matters not). The external VPS (B in your case) runs docker-less caddy and tailscale (probably also works with caddy in docker when you run it in network: host mode). Caddy takes in all web requests to my domain and reverse_proxies them to the tailscale hostname of my homelab :8443. It does so with a wildcard entry (*.mydomain.com), and it forwards everything. That way it also handles the wildcard TLS certificate for the domain. The caddy instance on the homelab then checks for specific subdomains or paths, and reverse_proxies the requests again to the targeted docker container.
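For reference, the VPS-side Caddyfile boils down to something like this (the domain and the homelab's tailnet hostname are placeholders; the wildcard certificate needs the ACME DNS challenge with your DNS provider's Caddy plugin, which I've left out):

```
# Caddyfile on the VPS (B) -- sketch
*.mydomain.com {
	# tls { dns <provider> ... } goes here for the wildcard certificate
	reverse_proxy homelab:8443
}
```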
The original source IP is available to your local docker containers by making use of the X-Forwarded-For header, which caddy handles beautifully. Simply add this block at the top of your Caddyfile on server A:
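Something along these lines, using Caddy's trusted_proxies global option (the IPs here are just examples):

```
{
	servers {
		# first IP: docker network gateway; second IP: server A's tailnet address
		trusted_proxies static 172.18.0.1 100.64.0.2
	}
}
```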
replacing the first IP with the gateway in the docker network, and the second IP with the "virtual" IP of server A inside the tailnet. Your containers, if they're written properly, should automatically read this value and display the real source IP in their logs.
The thing is that if you could do so (without circumventing the standards), then that implies an IP isn't actually a unique identifier, which it needs to be. It would also mean circumventing whitelists/blacklists would be trivial (it's not hard by any means, but it has some specific requirements).
The correct way to do this, even if there might be some hack you could do to get the actual source IP through, is to put the source in an 'X-Forwarded-For' header.
As for ready-made solutions, I use NetBird, which has open-source clients for Windows, Linux and Android that I use without issues; it's perfectly self-hostable and easy to integrate with your own IdP.
The reason I want to preserve the IP is mostly for fancy Grafana plots and traceability.
X-Forwarded-For is great but only works for HTTP/HTTPS.
Also, I would like to keep the HTTPS termination on machine B.
If you can fool the internet into believing that traffic coming from the VPS has the source IP of your home machine, what stops you from assuming another IP to bypass an IP whitelist?
Also if you expect return communication, that would go to your VPS which has faked the IP of your home machine. That technique would be very powerful to create man in the middle attacks, i.e. intercepting traffic intended for someone else and manipulating it without leaving a trace.
IP, by virtue of how the protocol works, needs to be a unique identifier for a machine. There are techniques, like CGNAT, that allow multiple machines to share an IP, but really it works (in simplified terms) like a proxy and thus breaks the direct connection and limits you to specific ports. It's also added on top of the IP protocol and requires specific support, and either way it's the endpoint, in your case the VPS, whose IP will be presented.