Where does Microsoft's NPU obsession leave Nvidia?
It's really surprising that Microsoft doesn't consider the presence of a powerful GPU sufficient for "Copilot" certification.
As things stand today, you can do far more (consumer-facing) ML tasks with a GPU than with any of the NPUs, which tend to have very weak support for things like local LLMs, ML video upscaling, local AI image generation, and videogame upscaling.
I don't understand this post. A desktop 4070 has 1:1 FP16, which works out to less than 30 TOPS. MS requires a minimum of 45 TOPS for a device to be Copilot certified; that's why they're not certified. Worse, the limited memory pool makes any NV laptop card apart from a 4090 a difficult sell.
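The "less than 30 TOPS" figure can be sanity-checked with a back-of-envelope calculation. This is a sketch, assuming the commonly published desktop RTX 4070 specs (5888 shader cores, ~2.48 GHz boost clock) and counting a fused multiply-add as 2 ops per core per clock at the 1:1 FP16 rate:

```python
# Back-of-envelope peak FP16 throughput for a desktop RTX 4070.
# Assumptions: 5888 shader cores, ~2.48 GHz boost clock, and an FMA
# counted as 2 ops per core per clock (1:1 FP16:FP32 rate, no tensor cores).
cores = 5888
boost_clock_hz = 2.48e9
ops_per_core_per_clock = 2  # fused multiply-add = 1 mul + 1 add

peak_tops = cores * boost_clock_hz * ops_per_core_per_clock / 1e12
print(f"{peak_tops:.1f} TOPS")  # -> 29.2 TOPS, under the 45 TOPS bar
```

Note this deliberately ignores the tensor cores, which push FP16 throughput far higher; the certification argument only holds if Microsoft counts the 1:1 shader-core rate.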
I am referring to practical use cases.
For example, how fast would a 45 TOPS NPU ML-upscale a 10-minute SD video source to HD? (It takes about 15 minutes with a 3080 + 5800X.) What video-upscaling frameworks/applications have support for such NPUs?
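For a concrete sense of the baseline an NPU would need to match, the 3080 + 5800X figure can be converted into effective frames per second. A minimal sketch, assuming a 30 fps source (the original comment doesn't state the frame rate):

```python
# Effective upscaling throughput for the example above: a 10-minute
# SD clip processed in 15 minutes on a 3080 + 5800X.
source_minutes = 10
process_minutes = 15
fps = 30  # assumed frame rate, not stated in the original comment

total_frames = source_minutes * 60 * fps
frames_per_second = total_frames / (process_minutes * 60)
print(f"{frames_per_second:.0f} frames/s")  # -> 20 frames/s
```

So an NPU would need to sustain roughly 20 upscaled frames per second just to tie the GPU setup, before any framework support even enters the picture.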
Another example would be local LLMs. Are there any LLMs comparable to, say, Llama 3.1 1B that can be run locally via NPU?
To my knowledge there is no videogame-upscaling tech (comparable to DLSS) that can run off an NPU.
Nvidia’s gonna be just fine.
Copilot, on the other hand…