Why aren’t motherboards mostly USB-C by now?

I’m beginning to think that the Windows PC that I built in 2015 is ready for retirement (though if Joe Biden can be president at 78, maybe this PC can last until 2029?). In looking at new des…
So, much as I hate to admit it, the real reason for this is bandwidth. Let's look at the best-case scenario without dipping our toes into server-grade hardware. AMD CPUs tend to have more I/O bandwidth allocated than Intel's, so we'll take AMD's current top-of-the-line desktop CPU, the Ryzen 9 7950X (technically the X3D version is the actual top of the line, but that part makes certain tradeoffs, and for the purposes of this discussion the two chips are identical).
On paper, the 7950X has 24 usable PCIe 5.0 lanes, plus 4 USB 3.2 ports from its built-in USB controller. So we could already have a maximum of 4 type-C ports straight from the CPU if we had no type-A ports; in practice, most manufacturers split the difference and go with 1 or 2 type-C ports, with the remaining 2 or 3 as type-A. You can have more USB ports, of course, but then you need an additional USB controller hanging off the motherboard's chipset, which in turn has to be wired into the PCIe bus, which means taking up PCIe lanes. So let's take a look at the situation over there.
We start with 24 PCIe lanes, but immediately we sacrifice 16 of those to the GPU, so really we have 8 PCIe lanes. Further, most systems now use NVMe M.2 drives, and an NVMe drive uses up to 4 PCIe lanes at its highest supported speed. So we're down to 4 PCIe lanes, and that's without any extra PCIe cards or a second NVMe drive.
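The lane arithmetic above can be sketched as a quick back-of-the-envelope calculation (the lane counts are the ones quoted in this thread, not verified against AMD's spec sheets):

```python
# Rough CPU lane budget for a 7950X-style build, using the
# numbers quoted above (assumptions, not a verified spec sheet)
total_lanes = 24   # usable PCIe 5.0 lanes from the CPU
gpu_lanes = 16     # a full x16 slot for the GPU
nvme_lanes = 4     # one NVMe M.2 drive at x4

remaining = total_lanes - gpu_lanes - nvme_lanes
print(remaining)   # lanes left over for everything else
```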
So, now you need to plug a USB controller into your PCIe bus. USB 3.2 Gen 2 runs at 10 Gbps (Gen 2x2 doubles that to 20 Gbps, but it's still rare, so let's use 10). PCIe 5.0 puts the usable bandwidth of a single lane at a bit over 31 Gbps. So the good news is, you can successfully drive up to 3 USB 3.2 Gen 2 ports off a single PCIe 5.0 lane. In practice, though, USB controllers are always designed with even numbers of ports, typically 2 or 4. In the case of 4, one lane isn't going to cut it; you'll need at least 2 PCIe lanes.
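To put numbers on the ports-per-lane claim: the 31-and-change Gbps figure comes from PCIe 5.0's 32 GT/s line rate with 128b/130b encoding, which a quick sketch confirms:

```python
# Usable bandwidth of one PCIe 5.0 lane: 32 GT/s with 128b/130b encoding
pcie5_lane_gbps = 32 * 128 / 130   # roughly 31.5 Gbps
usb32_gen2_gbps = 10               # one USB 3.2 Gen 2 port

# Whole ports that fit within one lane's worth of bandwidth
ports_per_lane = int(pcie5_lane_gbps // usb32_gen2_gbps)
print(ports_per_lane)  # 3, so a 4-port controller needs a second lane
```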
I think you can see at this point why manufacturers aren't in a huge rush to slap a ton of USB type-C connectors on their motherboards. In a modern desktop there's already a ton of devices competing for limited CPU I/O bandwidth. Even without an extra USB controller added in, it's entirely feasible to come dangerously close to saturating all available bandwidth.
They don’t all have to be high speed. For example, we already see a distinction among USB-A ports based on things like power and data speed. I don’t see why anyone would be surprised at a similar arrangement for USB-C. Let me have my low-speed keyboard and mouse ports and my low-power watch-charging port.
While that is true, it does cause headaches for end users. There's a (barely followed) color code for differentiating the speeds of type-A connectors, but I'm not aware of any such system for type-C. People generally expect a type-C connection to run at full USB 3.2 or USB4 speeds (not to mention the absolute state of the USB spec, with its constantly changing nomenclature). If you started putting USB 2.0 ports behind type-C connectors, you'd quickly find people complaining about it, I'm sure.
Really, in the long term, I'm sure that in another CPU generation or two we'll have enough bandwidth to spare that manufacturers can start putting extra USB 3.2 or USB4 controllers on their motherboards, at which point they'll be able to replace most of the type-A ports with type-C without losing speed. In practice, though, I expect we'll see history repeating itself: "low"-speed type-C ports alongside high-speed type-C ports that support whatever the latest and greatest USB spec is, with some kind of distinguishing mark to tell them apart. We already see something like that with Lightning, although that's a little too proprietary to really cut it; we'll need something that's part of the USB spec itself.
Almost none of the alternate modes or advanced features are required for USB-C devices. Most smartphones don't support high data rates over their single USB-C port. There are probably more USB-C ports running at USB 2.0 speeds than not, for example on peripherals like mice and keyboards. Beyond things like DisplayPort alternate mode, there still isn't much demand for more than one or two USB-C ports with high data rates or the full feature set.
I think power delivery is a concern too. If a motherboard had 4 USB-C ports on it, you know someone would try to plug in 4 USB-C monitors drawing 100W (20V/5A) each, so 400W going across your I/O bus. At that point, even if your motherboard doesn't just burn out and your power supply is big enough to provide it, you've still got a serious heat problem.
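The worst-case power math is simple (this assumes the 100 W USB PD profile of 20 V at 5 A on every port, as in the scenario above):

```python
# Worst-case draw if all four type-C ports delivered full USB PD power
ports = 4
volts, amps = 20, 5              # 100 W USB PD profile: 20 V at 5 A
total_watts = ports * volts * amps
print(total_watts)               # combined draw across all four ports
```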
Yeah, I recently started using a motherboard that has a 6-pin GPU-style power header for its USB-C ports. Power delivery capacity is limited if you don't plug the connector in, but if you do, it supports 100W per port.
No, it limits the total amount, but it is reasonable that they added a dedicated power input. I'm guessing we'll see even more of that on ATX12VO motherboards and the like. Power standards seem to be changing a lot, and manufacturers are waiting for things to settle down.
I think it's easy to say this, but harder to actually do in practice. There's a color code system for USB-A, but a lot of manufacturers didn't follow it reliably, and most users don't know what the differences are anyway (I'd certainly have to look up what yellow and red specifically mean). You'd have the same problem trying to mark USB-C ports, and without some easily identifiable marking, most users will just expect that a USB-C port is a USB-C port.
Nah, they usually advertise one full-speed USB-C port, and that's the only one that gets it, even if the board has 2, 3, or even 4.
Btw, the DeskMini is the only full-spec PC I know of that doesn't use additional chipsets for I/O. There may be a few more boards like this, I don't know, but additional I/O chipsets are incredibly common.
Isn't this glossing over the fact that (when allocating 16 PCIe lanes to a GPU, as in your example) most of the remaining I/O connectivity comes from the chipset, not directly from the CPU itself?
There'll still be bandwidth limitations, of course, since you can only max out the chipset's uplink (which in this case is 4x PCIe 4.0 lanes). But this implies it's not only okay but normal to ship designs that don't support maximum theoretical bandwidth on all ports simultaneously, so we don't need to allocate PCIe lanes to USB ports as stringently as your example calculations require.
Note to other readers (I assume OP already knows): PCIe lane bandwidth doubles/halves when going up/down one generation respectively. So 4x PCIe 4.0 lanes are equivalent in maximum bandwidth to 2x PCIe 5.0 lanes, or 8x PCIe 3.0 lanes.
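The doubling rule above can be sanity-checked with a one-liner (raw GT/s per lane, ignoring encoding overhead, which is identical from PCIe 3.0 onward):

```python
# Approximate per-lane transfer rate by PCIe generation, in GT/s
rate = {3: 8, 4: 16, 5: 32}

def link_bandwidth(gen, lanes):
    """Raw transfer rate of a link: per-lane rate times lane count."""
    return rate[gen] * lanes

# 4x PCIe 4.0 == 2x PCIe 5.0 == 8x PCIe 3.0
print(link_bandwidth(4, 4), link_bandwidth(5, 2), link_bandwidth(3, 8))
```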
edit: clarified what I meant about the 16 "GPU-assigned" lanes.
Typically no, the top two PCIe x16 slots normally connect directly to the CPU, though when both are populated they drop down to x8 each.
Any PCIe x4 or x1 slots hang off the chipset, as does some I/O and any third or fourth x16 slot.
So yes, motherboards typically do implement more I/O connectivity than can be used simultaneously, though they try to avoid disabling USB ports or dropping their speed, since regular customers won't understand why.
> Typically no, the top two PCIe x16 slots normally connect directly to the CPU, though when both are populated they drop down to x8 each.
> Any PCIe x4 or x1 slots hang off the chipset, as does some I/O and any third or fourth x16 slot.
I think the relevant part of my original comment might've been misunderstood (I'll edit to clarify), but I'm already aware that the 16 "GPU-assigned" lanes come directly from the CPU, including when split 2x8 if the board is designed that way. The GPU-assigned lanes aren't what I'm getting at here.
> So yes, motherboards typically do implement more I/O connectivity than can be used simultaneously, though they try to avoid disabling USB ports or dropping their speed, since regular customers won't understand why.
This doesn't really address what I was getting at, though. The OP's point was basically "the reason there isn't more USB-C is that there's not enough bandwidth; here are the numbers." The bandwidth math is correct, but the reality is that we already design boards with more ports than bandwidth, which is why it doesn't seem like a complete answer, despite being a helpful addition to the discussion.