I’ve just released Gatekeeper 1.6.0. It’s a single executable that turns any Linux machine into a home gateway. Now with realtime traffic graphs, LAN autoconfiguration, full cone NAT and better looks.
DHCP & DNS server optimized for home gateways. Contribute to mafik/gatekeeper development by creating an account on GitHub.
Hi all home network administrators :) Haven't posted anything here since June, when I told you about Gatekeeper 1.1.0. Back then it was a pretty bare-bones (and maybe slightly buggy) DNS + DHCP server with a web UI that listed LAN clients. At 1.1.0 Gatekeeper didn't even configure your LAN interface or set up NAT rules. It was something like dnsmasq - but with a web UI.
I've been improving it for the past couple of months - simplified it a lot, fixed bugs, optimized the internals, improved the looks & added a bunch of quality-of-life features. Now it's a program that turns any Linux machine into a home internet gateway. It's closer to OpenWRT in a single executable file.
One big thing that happened was that I attempted to replace the ubiquitous nft-based NAT (where the kernel processes packets according to a rule-based script) with nfqueue (where the kernel sends the packet to userspace so that it can be altered arbitrarily). I expected this to be buggy & slow. Well, it was hell to implement, but it turns out that it's not slow at all. On the debug build my router can push 60 GiB+ / second over TCP (over virtualized ethernet of course). And I'm not even using any io_uring magic yet. Quite honestly I don't even know how to explain it, since it's slightly above the peak DDR4 transfer rate (I'm running dual channel DDR4-3200). Maybe the pages are not flushed to RAM & are only exchanged through CPU caches? Anyway I'm pretty excited because userspace access to all traffic opens a lot of new possibilities...
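For the curious, here's a rough sketch of what the nfqueue pattern looks like (simplified, with error handling omitted - this is not Gatekeeper's actual code, and the queue number / nft rule are just an assumed setup):

```cpp
// Minimal libnetfilter_queue sketch: the kernel hands every queued packet to
// userspace, which can inspect or modify it and then issue a verdict.
// Assumes packets are sent to queue 0, e.g. with:
//   nft add rule inet filter forward queue num 0
#include <libnetfilter_queue/libnetfilter_queue.h>
#include <linux/netfilter.h>  // NF_ACCEPT
#include <arpa/inet.h>        // ntohl
#include <sys/socket.h>       // recv
#include <cstdio>

static int OnPacket(nfq_q_handle* qh, nfgenmsg*, nfq_data* nfa, void*) {
  nfqnl_msg_packet_hdr* ph = nfq_get_msg_packet_hdr(nfa);
  uint32_t id = ntohl(ph->packet_id);
  unsigned char* payload;
  int len = nfq_get_payload(nfa, &payload);
  // A NAT engine would rewrite addresses/ports here, fix the checksums,
  // and pass the modified buffer back together with the verdict.
  printf("packet id=%u len=%d\n", id, len);
  return nfq_set_verdict(qh, id, NF_ACCEPT, 0, nullptr);  // accept unmodified
}

int main() {
  nfq_handle* h = nfq_open();
  nfq_q_handle* qh = nfq_create_queue(h, 0, OnPacket, nullptr);
  nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);  // copy full packets to userspace
  char buf[0x10000];
  int fd = nfq_fd(h);
  for (;;) {
    int n = recv(fd, buf, sizeof(buf), 0);
    if (n > 0) nfq_handle_packet(h, buf, n);
  }
}
```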
The first thing is NAT. By default Linux only supports symmetric NAT, which is pretty secure but also fairly hard for peer-to-peer protocols to pierce. There are some patches that make Linux full-cone, but they're not part of the mainline kernel and are not expected to become part of it (at least according to the OpenWRT forums). Now, since we have access to every packet, we can take care of this ourselves. We can create a couple of hash tables to track connections, alter the source & destination IPs, and recompute the checksums where necessary. Suddenly we can have full-cone NAT, on any Linux machine, without patching the kernel! At runtime it's not as configurable as netfilter + conntrack, but it's a whole lot simpler - since now we can use a general-purpose programming language rather than netfilter rules.
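To give an idea of the bookkeeping involved, here's a very simplified sketch (not Gatekeeper's actual data structures; the names are made up):

```cpp
// Simplified full-cone NAT bookkeeping. In full-cone NAT one external (WAN)
// port maps to one internal host:port regardless of the remote peer, so a
// single table keyed by the WAN-side port is enough per protocol.
#include <cstdint>
#include <optional>
#include <unordered_map>

struct LanEndpoint {
  uint32_t ip;    // LAN host address
  uint16_t port;  // LAN host source port
};

struct FullConeNat {
  // WAN port -> LAN endpoint. A reverse index (LAN endpoint -> WAN port)
  // would be kept alongside to translate outgoing packets quickly.
  std::unordered_map<uint16_t, LanEndpoint> wan_to_lan;

  // Outgoing packet: ensure a mapping exists and return the WAN port to use.
  uint16_t Outgoing(LanEndpoint src) {
    // Naive policy: reuse the LAN source port as the WAN port (last writer wins).
    wan_to_lan[src.port] = src;
    return src.port;
  }

  // Incoming packet: any remote peer hitting a mapped WAN port gets forwarded.
  std::optional<LanEndpoint> Incoming(uint16_t wan_port) const {
    auto it = wan_to_lan.find(wan_port);
    if (it == wan_to_lan.end()) return std::nullopt;  // no mapping -> drop
    return it->second;
  }
};
```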
Another cool feature that we can now have is truly realtime traffic graphs. A summary of each packet traversing the network boundary is immediately streamed to the connected web UIs over WebSocket. This is way faster than the alternatives based on reading some /proc/ or /sys/ files every couple of seconds. Gatekeeper also aggregates the traffic from the last 24 hours between each pair of hosts into a histogram with 100 ms resolution and allows clients to view it, scroll through it, compute stats, and download it as JSON or CSV. You can retroactively check which device talked to which IP, and when, with unprecedented resolution.
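Roughly speaking, the aggregation could be modeled like this (a toy sketch with made-up names, and a deliberately naive dense layout - the real storage is more careful about memory and about clearing stale buckets as time wraps around):

```cpp
// Toy model of per-host-pair traffic histograms: 24 hours of 100 ms buckets
// kept in a ring buffer, keyed by (source IP, destination IP).
#include <cstdint>
#include <unordered_map>
#include <vector>

constexpr int kBucketMs = 100;
constexpr int kBuckets = 24 * 60 * 60 * 1000 / kBucketMs;  // 864,000 buckets

struct HostPair {
  uint32_t src_ip, dst_ip;
  bool operator==(const HostPair& o) const {
    return src_ip == o.src_ip && dst_ip == o.dst_ip;
  }
};

struct HostPairHash {
  size_t operator()(const HostPair& p) const {
    return (uint64_t(p.src_ip) << 32) | p.dst_ip;
  }
};

struct TrafficHistory {
  // Bytes transferred per 100 ms bucket; the ring wraps after 24 hours.
  std::unordered_map<HostPair, std::vector<uint64_t>, HostPairHash> hist;

  void Record(HostPair pair, uint64_t now_ms, uint32_t packet_bytes) {
    auto& buckets = hist[pair];
    if (buckets.empty()) buckets.resize(kBuckets);  // ~6.9 MB per pair; a real
                                                    // implementation would be sparser
    buckets[(now_ms / kBucketMs) % kBuckets] += packet_bytes;
  }
};
```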
My next step is going to be capturing the traffic that passes through into a 5 MB circular buffer (a separate buffer for each LAN client) & letting you download it as Wireshark-compatible pcap files. Computationally it's almost free. IoT devices usually don't transmit a lot of data - 5 MB may actually cover months of traffic for the simpler ones. If any device did anything weird, it will finally be possible to investigate it - even after it already happened.
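For what it's worth, the classic pcap format is simple enough that dumping such a buffer is only a handful of writes. A sketch (not Gatekeeper code; the ring-buffer entry type is hypothetical):

```cpp
// Writing a classic .pcap file (readable by Wireshark): a 24-byte global
// header followed by one 16-byte record header per captured packet.
#include <cstdint>
#include <cstdio>
#include <vector>

#pragma pack(push, 1)
struct PcapGlobalHeader {
  uint32_t magic = 0xa1b2c3d4;  // microsecond-precision timestamps
  uint16_t version_major = 2, version_minor = 4;
  int32_t thiszone = 0;
  uint32_t sigfigs = 0;
  uint32_t snaplen = 65535;
  uint32_t network = 1;  // LINKTYPE_ETHERNET
};
struct PcapRecordHeader {
  uint32_t ts_sec, ts_usec;
  uint32_t incl_len;  // bytes actually stored
  uint32_t orig_len;  // bytes on the wire
};
#pragma pack(pop)

struct CapturedPacket {  // hypothetical ring-buffer entry
  uint32_t ts_sec, ts_usec;
  std::vector<uint8_t> data;
};

void DumpPcap(const std::vector<CapturedPacket>& ring, const char* path) {
  FILE* f = fopen(path, "wb");
  if (!f) return;
  PcapGlobalHeader gh;
  fwrite(&gh, sizeof(gh), 1, f);
  for (const auto& pkt : ring) {
    PcapRecordHeader rh{pkt.ts_sec, pkt.ts_usec,
                        uint32_t(pkt.data.size()), uint32_t(pkt.data.size())};
    fwrite(&rh, sizeof(rh), 1, f);
    fwrite(pkt.data.data(), 1, pkt.data.size(), f);
  }
  fclose(f);
}
```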
Long-term, Gatekeeper could do even more - for example, offer assistance in setting up TLS MITM, or perform some online grouping / analysis of the live traffic.
I still have some ground work to do - like automatically setting up wireless LAN and bridging multiple interfaces into a single one - and I think there may be a bug that causes crashes when checking GitHub for updates. But I wanted to share it sooner rather than later. I hope that despite its imperfections some of you will find it useful!
(I've had some issues with cross-instance posting. This is attempt #6)
If you are lacking ideas for the super long term, I could suggest:
Any kind of IDS/IPS (intrusion detection/prevention system)
Deep packet inspection to detect any VPN or encrypted tunnel
Ability to create a VPN link to another instance of the program (to link geographically dispersed nodes)
And many other things that, honestly, I am ashamed to ask for :)
I don't think I fully understand how this is supposed to work. Am I correct in that this service acts as the gateway on the network? So clients would be configured to see this as the gateway, and this then subsequently pushes things through the real gateway (router)? Thus essentially everything (most things?) that the router does would be done by this instead?
More features, yes. Easier? No. This project is aiming to be simpler than an OPNsense setup at the cost of advanced features.
According to the GitHub page it can act as a router, OR you can keep your ISP router and have it act as a DNS and DHCP gateway. Please read the documentation before saying negative things in the comments.
My questions exactly. Furthermore, ISP routers typically also have WiFi, so WiFi clients on those won't even be visible to this... Please clarify whether this is supposed to be yet another OpenWRT / router solution, or what you're doing here... BUT I fucking love the UI :)
Right there with you on the UI. This would overlap in functionality with a lot of other items in my network, but I'm trying to find a reason to use it just so I can play with the UI.
"Generally speaking Gatekeeper needs to sit between your LAN network and the internet. It can either completely replace the router provided by ISP, or sit between the ISP router and your LAN network."
This looks quite simply astonishing! I don't have the need to re-do my home router but if my mom's goes at any point in the future this looks like a great way to stay FOSSy.
I will not blame you, since it seems you spent a lot of time on this project.
What does this project actually solve? It seems like you are trying to re-implement an ISP's router, just open source.
From my understanding, it's either a user who has no idea how this stuff works and will not look into this project, or a user who knows enough about how this stuff works and will use an OpenWRT router / RPi or Mikrotik. I don't understand - why would someone pick your project over an RPi / OpenWRT / Mikrotik router?
In terms of types of users I agree with what you're saying, but I also think that there are some shades of gray in between. There are people who love to tinker and would manually configure every service on their router, compiling everything from scratch, reading manuals, understanding how things work (they'll probably choose dnsmasq, systemd-networkd, Grafana over Gatekeeper). In my experience this approach is pretty exciting for the first couple of years & then gradually becomes more and more troublesome. I think Gatekeeper's target audience are the people who would like to take ownership of their network (and have some theoretical understanding) but don't want to fully dive down the rabbit hole and configure everything manually.
In terms of problem solved: I agree that Gatekeeper solves a similar problem. I think it's different from those projects because it tightly integrates all of the home gateway functions. While this goes against the Unix philosophy, I think it creates some advantages:
Possibility of cross-cutting features.
Better performance (lower disk usage, lower RAM usage, lower CPU load).
Seamless integration.
Functions of home routers are conventionally spread out over many components (kernel & a bunch of independently developed userspace tools) which talk to each other. Whenever we want to create a cross-cutting feature (for example live traffic graphs) we must coordinate work between many components. We need to create kernel APIs to notify userspace apps about new traffic, create userspace apps to maintain a record of this traffic & a web interface to display it. It's difficult organizationally. In a monolith, where all code is in one place, such cross-cutting features can be developed with less friction.
From the performance point of view, the conventional approach is also less efficient. The tools must talk to each other, quite often through files (logs & databases). This wears down SSDs & causes CPU load that could otherwise be avoided. A tightly integrated monolith needs to write files only periodically (if ever) - because all data can be exchanged through RAM.
From the complexity standpoint the conventional approach is also not great, because each of the tools needs to know how to talk with the others. This is usually done by the administrator, configuring every service according to its manual. When everything is built together as a monolith, things can "just work" and no configuration is necessary.
Edit: Please don't be offended by my verbosity. From your question I see that you know this stuff already but I'm also answering to the fresh "selfhosted" audience :)
Yeah, if I wanted a router, I'd just use opnsense. This feels like it's in that weird middle ground between doing one thing well and being a swiss army knife, where it does a lot of things, but when there's something else you need it's not there and you can't easily add it.
Kind of a good thing to have something in between, but the problem for the author - there wouldn't be a lot of consistent users. What if user starts needing VLANs? Or VPN? Or VPN as a client? The user would migrate to other tools.
Unfortunately, explicit, stable port redirections are something that is still missing. I'll have to implement them (with a proper UI) eventually, because under the hood they are also a necessary building block for other features. At the moment there are only "ephemeral" port redirects, which may be sufficient for you. They are created automatically when a LAN machine sends out a packet from some source port. That port is then implicitly forwarded back to that machine. This is actually a part of the "Full Cone NAT" thing.
This can be triggered manually for example with something like:
nc -p 80 1.2.3.4 1234 # send a dummy TCP packet from port 80
Ephemeral port redirections don't expire but can be taken over if another LAN host also uses the same source port for outgoing traffic. This may happen randomly because source ports are usually picked at random by the OS. Generally ports below ~32k should be fairly stable because Linux doesn't use those by default (I don't know about Windows). Redirecting ports below 1024 should be even more stable because they're reserved for specific well-known services.
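For reference, you can check the ephemeral range your kernel actually hands out (a trivial sketch; on modern Linux the default is 32768-60999, which is why lower ports rarely collide):

```cpp
// Print the kernel's ephemeral (local) port range; ports below this range
// are not picked automatically for outgoing connections.
#include <fstream>
#include <iostream>
#include <string>

int main() {
  std::ifstream f("/proc/sys/net/ipv4/ip_local_port_range");
  std::string line;
  std::getline(f, line);
  std::cout << line << "\n";  // typically "32768 60999"
}
```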
What makes the port redirections difficult to implement in the code? I’m imagining the kernel has some way of handling this without too many external libraries but I’m not well-versed enough on this to know for sure.
So you're not remapping the source ports to be unique? There's no mechanism to avoid collisions when multiple clients use the same source port? Full Cone NAT implies that you have to remember the mapping (potentially indefinitely—if you ever reassign a given external IP:port combination to a different internal IP or port after it's been used you're not implementing Full Cone NAT), but not that the internal and external ports need to be identical. It would generally only be used when you have a large enough pool of external IP addresses available to assign a unique external IP:port for every internal IP:port. Which usually implies a unique external IP for each internal IP, as you can't restrict the number of unique ports used by each client. This is why most routers only implement Symmetric NAT.
(If you do have sufficient external IPs the Linux kernel can do Full Cone NAT by translating only the IP addresses and not the ports, via SNAT/DNAT prefix mapping. The part it lacks, for very practical reasons, is support for attempting to create permanent unique mappings from a larger number of unconstrained internal IP:port combinations to a smaller number of external ones.)
This looks like a really cool project, thanks for sharing it!
I just have one question related to Ephemeral port redirections; why would you want to keep the translation pinned up if the client doesn't need it anymore?
Wouldn't it make sense to honor normal protocol timeout thresholds and reap the sessions periodically? I am probably misunderstanding, but it feels like you're describing stateful routing with infinite persistence.
I understand that your goal is to learn something new.
In my opinion ambitious, goal-oriented projects may either backfire or turn you into a legend. There will be many issues along the way, and while they are all ultimately solvable, the difficulty may kill your motivation. Alternatively, if you manage to power through, then after some period of learning (potentially years) while keeping your fixation on a specific problem, you might emerge as a domain expert. Either way it's a risky bet.
If I might leave some advice for newcomers it would be to learn how to perform some simple tasks & focus on creating projects that you're confident can be built from things you already know. Over time you'll increase the repertoire of tasks that you can perform, and therefore be able to build increasingly advanced projects.
Very interesting project, thanks for sharing and working on this. I am actually one of your target users: I have enough knowledge to implement my own router, at the moment running on Gentoo.
I would like to use this, but it lacks port forwarding and a firewall, which are a must. I'll try it out nevertheless. I'm quite impressed by the stylish HTML graphics, and I appreciate your departure from the typical "modern" gray corporate Bootstrap UI design. It's really, really cool.
One question: how do you envision exposing this service to the internet? I quite despise Rust, but I wonder if the use of a memory-safe language would help with the inevitable bugs, especially if you put even more features into Gatekeeper.
Thank you for the feedback! I have to admit I wasn't aware of how important port forwarding is. Stepping back I guess I'll need a better way of gauging how important specific features are to people. I'll have to think about this a little bit more...
Your question about security is something that I think about a lot. I don't think of LAN & internet as significantly different in terms of security. I also worry about potentially malicious LAN devices attempting to exploit local DNS, DHCP or the web UI. I've professionally worked on anti-malware and I've seen malware preloaded on new phones by factory workers & resellers, suspiciously exploitable flaws in stock firmware (which I guess were backdoors with plausible deniability), and fake monetization SDKs that are actually botnets (so application developers have been unknowingly attaching bots to their apps). There is also the problem of somebody gaining physical access to your LAN (for example by connecting a prepared device to an ethernet port for a couple of seconds). While those things may seem far-fetched and commercial routers ignore them, I'd like to do something better here.
In terms of preventing C++ footguns, I'm relying on compilation arguments (-fstack-protector, -D_FORTIFY_SOURCE=2), safe abstractions (for example std::unique_ptr, std::span, std::array...), readability (single-threaded, avoiding advanced primitives or external libraries) & patience (I think that time pressure is the biggest source of bugs).
In terms of protocol-level security, so far I've been able to secure the update path (so that MITM attackers can't inject malicious code). The web UI is a big problem for me, because to do any privileged operations I'll have to authenticate the user first. Firstly, I'm not exactly sure how to even do that. A password seems like the best option but I'm still trying to think of something better. There is this new WebAuthn thing which I'll have to look into. The second issue with the web UI is that I need to protect the authentication channel. This means that the local web UI will need TLS, which in turn means that I'll have to obtain a TLS cert somehow. Self-signed certs produce nasty security warnings. Obtaining one from LetsEncrypt seems easy - assuming the router has a public IP (which may not always be the case). But even if I obtain a LetsEncrypt cert, any LAN device can do the same thing, so the whole TLS can still be MITM-ed. It would be really great if web browsers could "just establish an encrypted channel" and not show any security warnings along the way...