In the EU, as long as it's under 800 W, it can be plugged directly into an outlet in your home without any kind of installation, back-feeding the grid through that same outlet.
You're not getting paid anything for the power you send back into the grid, so anything you don't use, you lose.
And that's probably what will kill them: as payouts get worse and worse, other platforms become more attractive because you're not losing as much by leaving. A lot of YouTubers I follow seem to be becoming more and more reliant on Patreon as ad revenue goes down.
I have a cheap no-name Chinese switch with 2x 10 Gbps ports and 4x 2.5 Gbps ports, so I use the 10 Gbps ports for the internet uplink and my computer, a 2.5 Gbps port for my NAS, and everything else is on 1 Gbps.
Technically, those 100+ Gbps fiber LAN/WAN connections used in data centers are also Ethernet, just not twisted pair.
That said, I was recently in a retail store and saw "Cat8" cables for sale that advertised support for 40 Gbps copper Ethernet! I wonder if any hardware to support that will ever be released. It is a real standard, approved way back in 2016: https://en.wikipedia.org/wiki/100_Gigabit_Ethernet#40GBASE-T
I have symmetrical 10 Gbps at home ($30/mo) and I'll agree. While it's nice to have when there are big updates, for most households 1 Gbps is going to be just fine. As you say, the vast majority of users are bottlenecked by Wi-Fi.
The bigger crime is all the asymmetrical connections that people on technologies like cable TV networks have, where you get 1-2 Gbps down but only something tiny like 50 Mbps up. This results in crappy video calls, makes off-site/remote backups infeasible, means you can't host anything at home, etc.
Most residential fiber globally is currently GPON, with a 1-2 Gbps shared line distributed over passive optical splitters, split up to 32 ways. Raising that shared line to 50 Gbps is a great upgrade.
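Back-of-the-envelope, assuming a fully loaded 1:32 split and ignoring protocol overhead, the per-subscriber difference looks roughly like this (quick Python sketch, numbers are approximate):

    # Worst-case per-subscriber share on a fully loaded PON split.
    # Rough numbers only; ignores GPON framing overhead and the fact
    # that not everyone saturates their line at the same time.
    SPLIT = 32
    for name, shared_gbps in [("GPON (~2.5 Gbps down)", 2.5), ("50G-PON", 50.0)]:
        per_sub_mbps = shared_gbps * 1000 / SPLIT
        print(f"{name}: ~{per_sub_mbps:.0f} Mbps per subscriber")
    # GPON (~2.5 Gbps down): ~78 Mbps per subscriber
    # 50G-PON: ~1562 Mbps per subscriber

In practice you never hit the worst case because everyone isn't downloading at once, but the jump in headroom is the point.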
It's for running AI on the GPU. You need a really expensive PC GPU to get more than like 16 GB of VRAM, so the bottleneck for large AI models is swapping data in and out of system RAM over PCIe.
The Mac has an SoC with unified memory, where the GPU can access all 192 GB at full speed, which is perfect for AI workloads where you need the GPU to access all the RAM. There's always a tradeoff: PC GPUs have faster processors (since they have a way bigger power budget), but the Mac GPU has fast access to a much larger pool of memory, so it's not always a slam-dunk which is better.
APUs/integrated GPUs on PCs also have unified memory, but they have always targeted the low end, so they aren't as useful.
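To make the RAM ceiling concrete, here's a rough sketch of the weight-size math (hypothetical model sizes, ignoring activations, KV cache, and framework overhead):

    # Does a model's weights even fit in GPU memory? Rough math only.
    def weights_gb(params_billion, bytes_per_param):
        # 70B params at 2 bytes each = 140e9 bytes = ~140 GB
        return params_billion * bytes_per_param

    for params_b, quant, bpp in [(70, "fp16", 2.0), (70, "4-bit", 0.5)]:
        size = weights_gb(params_b, bpp)
        print(f"{params_b}B @ {quant}: ~{size:.0f} GB "
              f"(16 GB VRAM: {'no' if size > 16 else 'yes'}, "
              f"192 GB unified: {'no' if size > 192 else 'yes'})")
    # 70B @ fp16: ~140 GB (16 GB VRAM: no, 192 GB unified: yes)
    # 70B @ 4-bit: ~35 GB (16 GB VRAM: no, 192 GB unified: yes)

Anything that doesn't fit in VRAM gets streamed over PCIe on every pass, which is where the PC setup falls off a cliff.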
APFS still supports resource forks just fine - I can unstuff a 1990s Mac application in Sequoia on an Apple Silicon Mac, copy it to my Synology NAS over SMB, and then access that NAS from a Mac OS 9 machine using AFP, and it launches just fine.
The Finder just doesn't use most of it anymore, but it still gets preserved in file copies, zip files, and such.
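If you want to verify a fork actually survived the round trip, macOS exposes a file's resource fork at the ..namedfork/rsrc pseudo-path, so a quick check from Python looks something like this (the filename is just a stand-in example):

    # Check whether a file still has a resource fork on macOS.
    # "Expander" is a hypothetical filename; ..namedfork/rsrc is the
    # pseudo-path macOS uses to expose the resource fork to normal file APIs.
    import os

    path = "Expander"  # e.g. a classic Mac app copied back from the NAS
    rsrc = os.path.join(path, "..namedfork", "rsrc")

    try:
        print(f"resource fork: {os.path.getsize(rsrc)} bytes")
    except OSError:
        print("no resource fork (or it got stripped somewhere along the way)")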
He's explicitly said he wants to make "X" the China-WeChat-style "super app" of the West that you need to have installed to do anything and everything, and that includes payments: https://en.wikipedia.org/wiki/Super-app
The TV app on my iPhone also shows content from Hulu, so I think it's across all Apple platforms.