Are people interested in multi-client stress tests on routers?

I asked this same question on Reddit and got zero engagement, so perhaps Lemmy has people who care more about their hardware.

I recently decided to use some of the tools provided by Mr Salter (netburn) and I have to ask the community if you want to see multi-client stress tests (4K streaming, VoIP, web browsing) used on a wireless router or if the single-client iperf tests are good enough. Bear in mind that pretty much all publications that still test their devices (most don't) rely on the single-client test method.
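
For context, here is a minimal sketch of what one simulated multi-client run could look like, using iperf3 as a crude stand-in for the traffic types (the server address and ports are placeholders, and a real run would put each flow on a separate physical client rather than launching them all from one box):

```python
# Rough multi-client load sketch: each "client" is an iperf3 flow shaped to a
# traffic profile. iperf3 serves one test per process, so the server box runs
# one "iperf3 -s -p PORT" per flow below. Address and ports are placeholders.
import subprocess

SERVER = "192.168.1.10"

# (label, server port, extra iperf3 args) -- crude stand-ins for traffic types
PROFILES = [
    ("4k-stream-1", 5201, ["-u", "-b", "25M", "-t", "60"]),   # ~25 Mbps UDP, 4K-ish
    ("4k-stream-2", 5202, ["-u", "-b", "25M", "-t", "60"]),
    ("voip",        5203, ["-u", "-b", "100K", "-t", "60"]),  # ~100 kbps, VoIP-ish
    ("browsing",    5204, ["-t", "60"]),                      # unshaped TCP as a bulk stand-in
]

procs = []
for label, port, extra in PROFILES:
    cmd = ["iperf3", "-c", SERVER, "-p", str(port), "-J"] + extra  # -J: JSON results
    procs.append((label, subprocess.Popen(cmd, stdout=subprocess.PIPE)))

for label, proc in procs:
    out, _ = proc.communicate()
    print(label, "done:", len(out), "bytes of JSON to parse for loss/jitter/throughput")
```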


20 comments
  • For a consumer/household: This is almost entirely unnecessary. Basically any halfway competent name-brand router/firewall will have no problems with this. You are more likely to see issues coming from your wifi network (which is probably part of your "router"), but that is also incredibly situational, depending on your environment (how many neighbors, etc). But LAN->WAN NAT is a "solved problem" as it were, and you mostly just want to stress test it speed-wise.

    For enterprise/hotels? Yeah, that is when you are going to have issues with too many clients. And the answer to that is almost always "buy enterprise hardware" rather than "figure out which netgear router I can tape to the ceiling"

    More data is always fun (unless you are the one collecting it...) but I just don't see much benefit from this. And most of the suggestions in this thread are really just ISP tests.

    • Oh, I agree wholeheartedly that collecting the data is not that much fun, especially since yes, I will have to do it. But I think users may benefit from seeing whether non-enterprise wireless routers can accomplish a certain task. For example, can that expensive Netgear router actually handle four client devices streaming 4K at the same time? What if we add browsing into the mix? The point of this thread was to get an idea of whether it's actually worth running these tests (which take quite a bit of time) and whether people are interested in seeing this type of data on the web.

      • For example, can that expensive Netgear router actually handle four client devices streaming 4K at the same time?

        Can your ISP? If so, yes. Because ~25 Mbps * 4 is not a lot of data. And the NAT for four clients mapped to the same firewall/router is pretty trivial. And no, adding "browsing" is not going to be an issue.

        Again, NAT is easy. And it happens on every single packet (big ol' asterisk on this, but not the venue to get into the specifics), regardless of whether it is one client or two. So what matters is the number of packets per second that can be processed, which these speed tests already cover (albeit somewhat obfuscated, because most people don't understand the network layers).
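
        To put rough numbers on "packets per second" (assuming standard ~1500-byte Ethernet frames, i.e. no jumbo frames):

        ```python
        # Back-of-envelope packet rate for N clients streaming at ~25 Mbps each.
        BITRATE_BPS = 25_000_000  # one 4K-ish stream
        FRAME_BYTES = 1500        # typical Ethernet MTU-sized frame
        CLIENTS = 4

        pps_per_client = BITRATE_BPS / (FRAME_BYTES * 8)  # ~2,083 packets/s
        total_pps = pps_per_client * CLIENTS              # ~8,333 packets/s
        print(f"{total_pps:,.0f} packets/s")  # trivial next to what NAT engines handle
        ```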

        And in the enterprise case? That is mostly about whether you can run a mesh network, what signal coverage you have, and the total number of clients (and packets) that need to be processed per second. Which... you are either a complete sicko who wouldn't be watching reviews online or you are just going to buy a ubiquiti or omada setup.

        • Can your ISP? If so, yes. Because ~25 Mbps * 4 is not a lot of data. And the NAT for four clients mapped to the same firewall/router is pretty trivial. And no, adding “browsing” is not going to be an issue.

          On paper, it is not a lot of data, but adding more clients requesting 25 Mbps continuously, plus some spontaneous but intensive web browsing, can lead to latency spikes. And then the user no longer gets a good streaming/browsing experience. I've even seen it on an expensive (by consumer networking standards) router such as the GT-AX6000.
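
          For what it's worth, this is roughly how I check for those spikes while the streams are running (assuming a Linux ping; the gateway address is a placeholder):

          ```python
          # Rough latency-under-load check: run while the streaming clients are active.
          import re
          import subprocess

          out = subprocess.run(
              ["ping", "-c", "50", "-i", "0.2", "192.168.1.1"],  # placeholder gateway
              capture_output=True, text=True,
          ).stdout

          rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
          print(f"min/avg/max RTT: {min(rtts):.1f}/"
                f"{sum(rtts) / len(rtts):.1f}/{max(rtts):.1f} ms")
          # A max far above the average under load is the spike I'm describing.
          ```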

          So what matters is the number of packets per second that can be processed, which these speed tests already cover (albeit somewhat obfuscated, because most people don't understand the network layers).

          I am just trying to better understand this stuff, so I have to ask if seeing how long it takes for a client device to accomplish a certain task wouldn't be better than just glancing over the average Mbps in a graph? That's what most publications are showing.

          • Those "lag spikes" are almost always a result of content servers, load balancing, and possibly even client system resources. Or even just a crappy modem.

            A quick overview of an average LAN is:

            • Clients connect to access points. This might be wifi, it might be an ethernet cable
            • Those access points connect to Network Switches
            • Those Switches connect to something that handles routing of packets. This is literally a router, but that term is overloaded so note the lowercase 'r'
            • Eventually, everything connects to a Firewall which uses NAT to make it look like everything is one giant computer. This gets a bit weird when you move on to IPv6, but that breaks every bit of software at this point so whatever.
            • Your Firewall then connects to a Modem which then connects to the fat internet tubes.

            A "Router" generally handles everything up to the Modem. For a consumer/household, this is fine. Because no matter how many streams of Frasier you have running on your desktop, you aren't actually generating that much traffic. Even a server isn't going to generate that much traffic (unless you are speccing it out specifically with multiple NICs and so forth). Hell, your OS is more likely to fall over before you make any decent Router break a sweat. Your crappy Netgear can likely handle a LOT more than the god awful piece of crap modem Comcast rents out.

            The difference between consumer and enterprise is how many computers are involved. Because yes, if you add enough clients you can stress things. But in that case, we are talking closer to hundreds of clients than three or four kids who are on their phone and their tablet at the same time. Which, again, you are either a network sicko building a bespoke solution or you just pay the Ubiquiti tax.

            (And, as an aside, it usually isn't even the network hardware that falls over in hotels. It is their captive portal. And it very much violates a lot of the terms of staying at a hotel and can be considered "hacking" for legal reasons but... if you know the trick to forcing a reboot you can usually fix the network for the entire hotel for the next couple days).

            And if we then consider the WAN (internet), the usual path these days is to then connect to a Content Delivery Network (CDN) that is effectively a bunch of small relatively local servers that mirror other parts of the internet. Cloudflare is probably the most famous. And a lot of those "Prove you are a human" checks are kind of masking the fetching of data (which doubles as a way to protect against DDOS attacks). This is almost definitely where those "lag spikes" came from, not your hardware.

            I am just trying to better understand this stuff, so I have to ask if seeing how long it takes for a client device to accomplish a certain task wouldn’t be better than just glancing over the average Mbps in a graph? That’s what most publications are showing.

            That is an incredibly user specific review. That doesn't necessarily make it bad, but "how fast can I download an episode of Frasier" provides a subset of the amount of information you get from "what speeds does this router support?"

            But it sounds like what you want is a "review" of a full network (hardware) stack. And... that is, again, not something you can get online. That is what you literally pay someone to come over and check out your building for. Because your wifi? That is going to be heavily impacted by where you place the access point, what you have in your walls, etc. Same with your modem (almost always a piece of crap) and even how many times the Comcast tech spliced off your coax before it even gets to the box.

            Because, for a router? What matters is a controlled-ish environment and then how many packets it can process per second. And just measuring average speed over a large file transfer is probably the best way to get that as it normalizes all the CDN and stack shenanigans.
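
            Something like this is all it takes to get that sustained number (the URL is a placeholder for a large file hosted on a LAN machine):

            ```python
            # Time a large LAN transfer to get sustained throughput.
            import time
            import urllib.request

            URL = "http://192.168.1.20/bigfile.bin"  # placeholder LAN host/file

            start = time.monotonic()
            total = 0
            with urllib.request.urlopen(URL) as resp:
                while chunk := resp.read(1 << 20):  # 1 MiB reads
                    total += len(chunk)
            elapsed = time.monotonic() - start
            print(f"{total * 8 / elapsed / 1e6:.1f} Mbit/s sustained over {elapsed:.1f}s")
            ```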

            Adding "more clients" mostly just lets you find out how your traffic is being routed to said CDN while not providing much more data than just a sustained high speed transfer would.


            In case it is not obvious, I am definitely a home networking sicko who decided an enterprise-level solution was the cost-effective way to set up a mesh network for wifi coverage (and... it will come out cheaper when I upgrade my access points in a few years). I've never found a need to test "number of clients" because I know that even with my complete mess of a DHCP table, it is nothing. What I do do is local transfers of large files between clients on different setups. So I will connect my laptop to the wifi and download some files from my NAS. Then I'll do the same with a wired connection to a few different Switches. And then I'll just have Steam download a game or two to make sure my modem isn't a piece of crap.
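
            If anyone wants to replicate those local transfers without a NAS, a bare-bones sender/receiver pair does the job (addresses and port are placeholders):

            ```python
            # Minimal point-to-point throughput check between two LAN machines.
            # Run "python3 xfer.py server" on one box and
            # "python3 xfer.py client <server-ip>" on the other.
            import socket
            import sys
            import time

            PORT = 5201                   # placeholder port
            CHUNK = b"\x00" * (1 << 20)   # 1 MiB payload
            SECONDS = 15                  # how long the client transmits

            if sys.argv[1] == "server":
                with socket.create_server(("", PORT)) as srv:
                    conn, _ = srv.accept()
                    total, start = 0, time.monotonic()
                    while data := conn.recv(1 << 20):  # ends when client disconnects
                        total += len(data)
                    elapsed = time.monotonic() - start
                    print(f"{total * 8 / elapsed / 1e6:.1f} Mbit/s received")
            else:
                with socket.create_connection((sys.argv[2], PORT)) as conn:
                    end = time.monotonic() + SECONDS
                    while time.monotonic() < end:
                        conn.sendall(CHUNK)
            ```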

            • You're talking about real-world scenarios, but I am just trying to get something simulated that resembles general real-life conditions. So, no CDN, and the modem and even the ISP don't matter in this particular scenario. You have written a phenomenal response, so I am sorry to ask for more of your time, but could you check out this article: smallnetbuilder.com/wireless/wireless-reviews/2x2-ac-access-point-roundup-part-2/ This is pretty much what I am trying to accomplish, and it does seem that the APs can be stressed by fewer client devices than expected. Or maybe, again, there's something that I am missing.

              • What you linked to is literally someone doing the kind of survey you pay a professional for (or do yourself). It is multiple clients running literal stress tests. Because yes, those are designed to represent website requests... except they are done near constantly for five minutes. That will never happen in the real world, between caching of resources and people generally wanting to at least look at a web page before loading the next one. It mostly boils down to "packets per second", but in a way that provides much less data in terms of what was actually being tested. It is simulating an enterprise network load in a manner that is very prone to quirks of the hardware (they even mention their wifi dongles weren't properly supported in Linux) while drawing conclusions that are actually pretty suspect (the idea of needing to refresh the page because of errors CAN happen, but it is generally unlikely due to cached resources and the resiliency of codecs for media streaming. Most of the time, those "the page didn't load right" moments are the CDN).

                Same with the roaming tests and the like. Yes, it is nice looking data but mostly it boils down to being INCREDIBLY situational and, honestly, not useful unless you live in that dude's office.

                I don't know that site very well. But, to me, this looks like a lot of data spam that can be summarized as "If you are dealing with enterprise level traffic, get an enterprise solution" while also having a LOT of affiliate links to buy the hardware.

                In a lot of ways, this reminds me of the computer hardware review channels. The better ones just play a suite of games and give you data from that, because that conveys most of the useful information while being a realistic scenario. Gamers Nexus deserves an extra shout-out (as they almost always do) for actually explaining why each game was used and reminding people of things like "Hitman 3 is a good bloatware test" because of the quirks of those games. But then there are the ones who want to flood the consumer with nonsense data, because it overloads their brains while coming to the same conclusion but seeming more "authoritative". It is one of the reasons I actually love that when JayzTwoCents does a stress test, they repeatedly emphasize "This will never happen to your computer in reality. We are doing this to stress test our cooling solution".

                In fact, I would go so far as to say that reviews like this becoming ubiquitous would actually make the product space worse. We have already seen it happen. When people started discovering that mesh networks exist, there was a lot of interest. And many tech channels (I don't want to JUST call out LTT but... I am gonna call out LTT because they always do this bullshit) reviewed enterprise equipment, particularly Ubiquiti. And that more or less led to the idea that you either buy a shitty netgear router for your dorm or you buy an enterprise solution. Which means there is no product that is good for 95% of consumers anymore. You either have trash or really expensive overkill (although, I AM a fan of TP-Link's Omada approach as that is very much built out of consumer grade hardware at the low end). Because nobody needs feature X if they aren't running a hotel but... are you really going to buy something that scores lower because it doesn't have it?

            • Well, I have a MiniPC running a VyOS router and the only time I've seen it even break a sweat is when I have to run OpenVPN or WireGuard with lots of throughput, which is probably because of the encryption involved.

              I'm curious as to how the wireless access point part of the network works. I have no problem saturating my bandwidth on wired connections, but on wireless I do get choking when, say, 3-4 devices try to stream 4K.
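
              My rough math says the aggregate demand should fit, which makes me suspect airtime efficiency and retransmits (the link rate and efficiency below are just assumptions, not specs):

              ```python
              # Why 3-4 simultaneous 4K streams can choke wifi: the medium is shared,
              # half-duplex airtime, so usable throughput is a fraction of the PHY rate.
              link_rate_mbps = 400   # assumed negotiated rate for a mid-range client
              efficiency = 0.5       # rough airtime/protocol efficiency assumption
              streams, per_stream_mbps = 4, 25

              effective = link_rate_mbps * efficiency  # ~200 Mbps usable
              demand = streams * per_stream_mbps       # 100 Mbps wanted
              print(f"demand {demand} Mbps vs ~{effective:.0f} Mbps usable "
                    f"({demand / effective:.0%} of airtime)")
              # Interference or weak signal drops the negotiated rate and adds
              # retransmits, eating this margin fast -- that's when streams stutter.
              ```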
