Firm predicts it will cost $28 billion to build a 2nm fab and $30,000 per wafer, a 50 percent increase in chipmaking costs as complexity rises

As wafer fab tools are getting more expensive, so are fabs and, ultimately, chips. A new report claims that…
The ratio of people who are capable of writing less-shitty software to the number of things we want to do with software ensures this problem will not get solved anytime soon.
Eh I disagree. Every software engineer I've ever worked with knows how to make some optimizations to their code base. But it's literally never prioritized by the business. I suspect this will shift as IaaS takes over and it becomes a lot easier to generate the graphs showing your product's stability being maintained while the resources it consumes are reduced.
While I agree with the cynical view of humans and shortcuts, I think it's actually the "automated" part of the process that's to blame. If you develop an app, there's only so much you can code. However, if you start with a framework, you've automated part of your job for huge efficiency gains, but you're also starting off with a much bigger app and likely lots of functionality you aren't really using.
And NVIDIA will use this as an excuse to hike up their prices by 100+%.
On a serious note, this will progressively come down in price as time passes, and not everyone needs cutting-edge 2nm technology. The transition to 2nm will also increase density, so comparing wafer prices without acknowledging the increased density doesn't give you the whole picture.
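To make that density point concrete, here's a rough back-of-the-envelope sketch in Python. The $30,000 wafer price and the 50 percent increase come from the article (implying roughly $20,000 for the prior node); the ~1.3x density uplift is a placeholder assumption, not a reported figure.

```python
# Back-of-the-envelope: how much of the +50% wafer price is offset by density?
# Wafer prices follow the article's ~$20k -> $30k (+50%) claim; the density
# uplift is an assumed placeholder, since real logic/SRAM scaling varies.

old_wafer_price = 20_000      # USD, implied by the "+50 percent" figure
new_wafer_price = 30_000      # USD, from the report
assumed_density_gain = 1.3    # ASSUMPTION: ~30% more transistors per wafer

# Normalize the older node to 1.0 "transistors per wafer" units.
old_cost_per_transistor = old_wafer_price / 1.0
new_cost_per_transistor = new_wafer_price / assumed_density_gain

change = new_cost_per_transistor / old_cost_per_transistor - 1
print(f"Old node cost per transistor: {old_cost_per_transistor:,.0f} (relative units)")
print(f"New node cost per transistor: {new_cost_per_transistor:,.0f} (relative units)")
print(f"Change: {change:+.1%}")
```

With these placeholder numbers the effective per-transistor increase drops from 50 percent to roughly 15 percent; how much further it shrinks depends on the real density gain, which is exactly why the wafer price alone is misleading.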
Plus DRAM scaling is becoming cumbersome, and many other components can't scale to 2nm, so "2nm" is mostly a marketing term; there are a lot of challenges that make this tech so expensive and difficult to design and produce.
2nm process doesn’t actually mean 2nm though. Hasn’t in over a decade.
The current 3nm process has a 48nm gate pitch and a 24nm metal pitch. The 2nm process will have a 45nm gate pitch and a 20nm metal pitch.
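As a rough way to read those pitch numbers: the 48/24 and 45/20 figures are from the comment above, and using gate pitch times metal pitch as a stand-in for cell footprint is a simplification (real density also depends on track height, SRAM bit cells, and design-technology co-optimization), but it gives a ballpark:

```python
# Crude logic-density comparison from the quoted gate and metal pitches.
# Cell footprint is approximated as gate_pitch * metal_pitch; real scaling
# also depends on cell track height, SRAM bit cells, DTCO, etc.

n3_gate, n3_metal = 48, 24   # nm, "3nm" process
n2_gate, n2_metal = 45, 20   # nm, "2nm" process

implied_gain = (n3_gate * n3_metal) / (n2_gate * n2_metal)   # 1152 / 900
literal_gain = (3 / 2) ** 2                                  # if the names were real dimensions

print(f"Density gain implied by the pitches: {implied_gain:.2f}x")        # ~1.28x
print(f"Gain a literal 3nm -> 2nm shrink would imply: {literal_gain:.2f}x")  # 2.25x
```

Roughly 1.28x by this crude measure, nowhere near the 2.25x a literal 3-to-2 shrink would give, which is the sense in which the node names are labels rather than dimensions.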
"Nm" is just "generation" today. After 5nm came 3nm, next is 2nm, then 1nm. They'll change the naming after that, even though they're still nowhere near actual nm sizes.
Yeah I’m a bit curious what the marketing will be as they have to get more vertical, 3D. Will there be naming to reflect that or will they just follow existing naming, 0.5nm?
This was my understanding as well: that below ~7nm, reliability begins to degrade because the diameter of an electron "orbit" or whatever becomes a factor.
Admittedly I'm not an expert. But my understanding was that to break this limitation and keep Moore's law going, we're kinda leaning into quantum computation to eventually fill the incoming void.
If it's enough to run on-device AI, it's a win. Imagine autocorrect being able to mangle your texting without ever connecting to the cloud. Huge privacy win.
With the goggles coming soon, I think they’ll focus chip improvements on GPU and neural engine to better support that
Autocorrect doesn't send anything to the cloud; it's just a dictionary. If your keyboard is sending your texts to the cloud, you have to change your keyboard, not run AI. AI doesn't do autocorrect; it could maybe do word suggestions, but it would be super inefficient at it and probably not much better than current methods.
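To illustrate the "just a dictionary" point, here's a toy, fully offline correction sketch using only Python's standard library; the word list is made up, and difflib's fuzzy matching merely stands in for whatever ranking a real keyboard actually uses.

```python
# Toy offline "autocorrect": fuzzy-match typed words against a local word list.
# No network calls, no model; difflib ships with the Python standard library.
import difflib

WORD_LIST = ["privacy", "letters", "appear", "keyboard", "writing"]  # made-up dictionary

def correct(word: str) -> str:
    """Return the closest dictionary word, or the input unchanged if nothing is close."""
    matches = difflib.get_close_matches(word.lower(), WORD_LIST, n=1, cutoff=0.7)
    return matches[0] if matches else word

print(correct("prvacy"))   # -> privacy
print(correct("lettres"))  # -> letters
print(correct("zzzz"))     # -> zzzz (no close match, left alone)
```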
I'm writing this on a 22 nm CPU and the letters appear hella fast.
Quantum computing wouldn't make these transistors obsolete.
Quantum computing is only really good at very specific types of calculations. You wouldn't want it being used for the same type of job that the CPU handles.
Quantum computing is useless in most cases because of how fragile and inaccurate it can be, due in part to the near-zero temperatures it's required to operate at.