I recently found this on Reddit while looking into why Jellyfin is affected so much by latency. I found that it worked and thought I would share it, because it is generally applicable, takes five minutes to set up (the exact commands are below), and helps a lot with throughput on higher-latency connections. I admit I am not sure of the technical details behind this, so if anyone would like to chime in, that would be much appreciated.
Bottleneck Bandwidth and Round-trip propagation time (BBR) is a TCP congestion control algorithm developed at Google in 2016. Until recently, the Internet primarily used loss-based congestion control, relying only on indications of lost packets as the signal to slow down the sending rate. This worked decently well, but networks have changed: we have far more bandwidth than ever before, the Internet is generally more reliable, and we see new problems such as bufferbloat that hurt latency. BBR tackles this with a ground-up rewrite of congestion control, using latency, instead of lost packets, as the primary signal for the sending rate.
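For reference, the five-minute setup is just two sysctl settings. A minimal sketch, assuming a Linux kernel of 4.9 or newer with the tcp_bbr module available (the file name 99-bbr.conf is just my choice; fq is the packet scheduler usually paired with BBR):

```
# Load the BBR module if it isn't built in (no-op otherwise)
sudo modprobe tcp_bbr

# Confirm the kernel actually offers bbr before switching
sysctl net.ipv4.tcp_available_congestion_control

# Persist the settings: fq qdisc plus BBR congestion control
echo "net.core.default_qdisc=fq" | sudo tee /etc/sysctl.d/99-bbr.conf
echo "net.ipv4.tcp_congestion_control=bbr" | sudo tee -a /etc/sysctl.d/99-bbr.conf

# Apply without rebooting
sudo sysctl --system
```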
On the Debian-based, Ubuntu-based, and Arch systems I use: no, it is not the default. CUBIC still is.
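If you want to check what your own machine is using, either of these shows the active algorithm:

```
sysctl net.ipv4.tcp_congestion_control
# or equivalently:
cat /proc/sys/net/ipv4/tcp_congestion_control
```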
My experience: a few days ago I was trying to make my UDP traffic faster, but ended up finding out about BBR instead, which is for TCP. Well, lucky me, as I'm currently a country away from home for family reasons. Plex generally took 40-80 s to start a movie or episode for me, with a measly ~10 s of buffer at best, and that was on a 3-5 Mbps stream.
After BBR (note that I had to apply it on the Proxmox host; my containers are unprivileged and can't set this themselves), starting a show or movie takes 8-30 s at most, and I now comfortably sit on a good few minutes of buffer. 15-20 Mbps quality is now playable.
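Worth adding for other Proxmox users: the containers share the host's kernel, so my understanding is that loading tcp_bbr and flipping the sysctl on the host is what makes it take effect for the guests. From inside a container you can at least verify what's active (just a read-only check; nothing container-specific assumed):

```
# Run inside the container: shows the congestion control actually in use
cat /proc/sys/net/ipv4/tcp_congestion_control
```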
Personally it felt like black magic to me, and I only tossed it in two days ago too.
To list some known issues, though they apply only to earlier versions, not BBRv3:
BBRv1:
- Researchers such as Geoff Huston and Hock, Bless, and Zitterbart found it unfair to other streams and not scalable.
- Hock et al. also found "some severe inherent issues such as increased queuing delays, unfairness, and massive packet loss" in the BBR implementation of Linux 4.9.
- Soheil Abbasloo et al. (authors of C2TCP) showed that BBRv1 does not perform well in dynamic environments such as cellular networks.
I believe it results in something like 10% additional overhead, which may be bad on metered connections, but I am not aware of any situation where it decreases performance. I don't really know much about this, so if anyone would like to correct me, please do.
The gist is that instead of throttling the sending rate only in response to packet loss, BBR continuously measures round-trip delay (ping) to estimate how much bandwidth is actually available.
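If you're curious, you can watch those estimates live: on a kernel running BBR, ss reports per-connection congestion control state, including BBR's bandwidth and min-RTT estimates, and their product (bandwidth x min RTT) is roughly the amount of data BBR tries to keep in flight. Exact field names vary a little between iproute2 versions:

```
# -t: TCP sockets, -i: internal TCP info (shows bbr bw/mrtt per connection)
ss -ti
```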
Cool writeup. I remember setting up BBR many years ago when I was trying to bypass the Great Firewall during an extended stay. It helped greatly with the heavy congestion on the Chinanet backbone at the time, though that's less of an issue these days now that foreigners can use CN2.