Huawei and SMIC quietly rolled out a new Kirin 9000C processor.
Chinese foundry SMIC may have broken the 5nm process barrier, as evidenced by a new Huawei laptop listing an advanced chip built with 5nm manufacturing tech — a feat previously thought impossible due to U.S. sanctions.
“7 nm”, “5 nm”, etc. are all lies at this point. They mean nothing physical; they're just marketing terms. Nothing on the chip actually measures 5 nm. I don't know what they're doing to make the processors better, but it's not shrinking the transistors the way the name implies.
And since it’s a marketing term, there’s no objective way to measure it.
The term "5 nm" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors being 5 nanometers in size. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a "5 nm node is expected to have a contacted gate pitch of 51 nanometers and a tightest metal pitch of 30 nanometers". However, in real world commercial practice, "5 nm" is used primarily as a marketing term by individual microchip manufacturers to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption compared to the previous 7 nm process.
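To see how far the marketing name is from the physical dimensions, here's some throwaway arithmetic on the IRDS pitch figures quoted above (the per-transistor footprint is a deliberately crude upper bound for illustration, not an industry density metric):

```python
import math

# IRDS 2021 projections for the "5 nm" node (figures quoted above)
contacted_gate_pitch_nm = 51.0
tightest_metal_pitch_nm = 30.0

# Neither pitch is anywhere near 5 nm; even the geometric mean of the two
# tightest pitches is roughly 8x the node's marketing name.
mean_pitch_nm = math.sqrt(contacted_gate_pitch_nm * tightest_metal_pitch_nm)
print(f"geometric mean pitch: {mean_pitch_nm:.1f} nm")  # ~39.1 nm

# A crude per-transistor footprint (one gate pitch x one metal pitch) gives
# an idealized upper bound on density; real "5 nm" chips land well below
# this because standard cells span many routing tracks.
footprint_nm2 = contacted_gate_pitch_nm * tightest_metal_pitch_nm
density_per_mm2 = 1e12 / footprint_nm2  # 1 mm^2 = 1e12 nm^2
print(f"idealized density: {density_per_mm2 / 1e6:.0f} M transistors/mm^2")
```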
I haven't really dug into the IEEE roadmap here, but node naming does track something real and not purely marketing: the resolution of the lithography. At today's clock speeds and feature sizes, the capacitance and inductance of the elements set the size of transistors and routing as much as how small you can pattern them. So a finer process doesn't directly increase transistor density and processing power; it means you pattern elements more accurately to what you designed, with less variation from element to element, so things can be optimized further and behave more consistently.
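The "resolution of the lithography" mentioned above is usually summarized by the Rayleigh criterion, CD = k1 * wavelength / NA. A quick sketch with ballpark published values for DUV and EUV scanners (the specific k1, wavelength, and NA figures are typical textbook numbers, not tied to any particular fab):

```python
def min_half_pitch_nm(k1: float, wavelength_nm: float, na: float) -> float:
    """Rayleigh criterion for the minimum printable half-pitch."""
    return k1 * wavelength_nm / na

# Deep-UV immersion lithography: 193 nm ArF laser, NA ~1.35, aggressive k1 ~0.27.
# Below this half-pitch, DUV needs multi-patterning tricks.
duv = min_half_pitch_nm(0.27, 193.0, 1.35)

# EUV: 13.5 nm source, NA 0.33, typical k1 ~0.4, single exposure.
euv = min_half_pitch_nm(0.4, 13.5, 0.33)

print(f"DUV: ~{duv:.1f} nm half-pitch, EUV: ~{euv:.1f} nm half-pitch")
```

Note that even EUV's single-exposure half-pitch is in the mid-teens of nanometers, which is why the tightest metal pitch on a "5 nm" node is ~30 nm rather than anything close to 5.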
I admit it's unfortunate that most of these things are abstract and partially derived from the surrounding components, but we're also nearly into the territory where the last mile of improvements requires the entire computer (e.g. RAM/wifi/GPU/disk) to be embedded into the final chip.
We probably still have at least two big performance leaps left before we have to abandon either silicon-based transistors or the FET-based logic gate. The next big thing Intel, TSMC, Samsung, and IBM are all working on is replacing the FinFET transistors we currently use for logic with gate-all-around field-effect transistors (GAAFET). Everyone will stay on gate-all-around for maybe 5 to 7 years, and after all the possible performance optimizations have been squeezed out, most roadmaps point to non-planar designs with complementary n- and p-type transistors stacked vertically atop each other (CFET) as the next evolution. After that, maybe FET-based logic gates on silicon will have finally hit their limit and a new material like germanium will be adopted... or we might replace FETs altogether with new forms of logic gates based on a novel mechanism.
but we're also nearly into the territory where the last mile of improvements requires the entire computer (e.g. RAM/wifi/GPU/disk) to be embedded into the final chip.
We already went there: all modern CPUs (Intel, AMD, ARM) are true SOCs (systems on a chip), where components that used to be discrete (south bridge, north bridge, memory controller, clock generators, I/O/storage/network/USB controllers) are now integrated on the same silicon as the CPU cores themselves. The latest generation chips from both Intel and AMD even leverage 3D integration (vertically stacking dies on top of each other) to squeeze out that last bit of extra performance while maintaining the same physical footprint. It's at the point where they are 3D-stacking up to a gigabyte of SRAM-based L3 cache directly on top of CPU cores, or putting up to 64GB of HBM directly on the CPU package to act as an L4 cache.
Some people assume only CPUs built for mobile devices (phones, laptops) are full SOCs, but desktop/server CPUs that get socketed into a motherboard are at this point also true SOCs. Modern desktop and server motherboards tend to be nothing more than power delivery components and a physical jig that makes it easy to plug peripherals into the CPU SOC or connect multiple CPU SOCs together on one board. Beyond increasing the bandwidth of interconnects, there is still very real performance to be gained by lowering the latency between CPUs and plug-in peripherals like memory DIMMs, discrete GPUs, network adapters, or other CPUs by replacing electrical traces with photonics. Intel's lab division has been working on maturing silicon photonics to the point where it can be integrated directly into a CPU.
That's interesting. I think it does still have to do with the amount of power being used by the chip? Like the Steam Deck LCD is 7nm I think, and the new one I believe is 5nm, which doesn't make it faster so much as more efficient, and the battery lasts longer as a result.
Looks like they call it “6 nm” which is unusual. Yeah something is improved in the chip, but the marketing isn’t any kind of objective measure of that. Although I imagine most of the improved battery life comes from the bigger battery and more efficient OLED screen.
At the same voltage, smaller transistors don't necessarily lower power, because it's easier for electrons to jump across the smaller gaps and leak power that way. But the smaller size usually enables reductions in voltage, and shorter wires mean less loss to heat, which entails lower power draw.
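The voltage effect described above follows from the standard CMOS dynamic-power relation P ≈ α·C·V²·f. A quick sketch with made-up capacitance and clock numbers (leakage, which works against the shrink, is not modeled here):

```python
def dynamic_power(c_farads: float, v_volts: float, f_hz: float,
                  activity: float = 1.0) -> float:
    """Dynamic (switching) power of CMOS logic: P = a * C * V^2 * f."""
    return activity * c_farads * v_volts**2 * f_hz

# Hypothetical numbers: same clock, same switched capacitance,
# supply voltage dropped from 1.0 V to 0.8 V thanks to a node shrink.
p_old = dynamic_power(1e-9, 1.0, 3e9)
p_new = dynamic_power(1e-9, 0.8, 3e9)

# The V^2 term alone cuts switching power by ~36% at the same frequency.
print(f"power ratio: {p_new / p_old:.2f}")
```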
Academics use the smallest pitch distance to measure resolution. It is a real thing; it's just not measuring the size of the transistors themselves (which would vary by design anyway, and transistors aren't the only thing you can microfabricate).