Prices will never come down to previous gen card prices. We’ve passed the threshold. NVidia will keep their prices high because there is significant demand for their chips outside of gaming and AMD follows in lockstep.
This is probably true, but it doesn't mean that individual, non-AI/crypto consumers have to accept it, and largely, they haven't been. All it takes is for Nvidia/AMD stock to drop from the overinflated prices for prices to come down.
NVIDIA has been struggling in recent years to find use cases for their graphics cards. That's why they're pushing towards raytracing: rasterization has hit its limit, and people no longer need to upgrade their GPU for it (they tried pushing towards 8K resolution, but that's complete BS for screens outside of cinemas). However, most people don't care about having better reflections and indirect lighting in their games, so they're struggling to get anywhere in the gaming market. Now NVIDIA is moving into other markets for their cards that don't involve gamers, and gamers are just left as an afterthought.
I don't think that this will ever change again. Games like DOTA, Fortnite and Minecraft are hugely popular, and they don't need raytracing at all.
I personally tried going towards fluid simulations for games, because those also need a ton of GPU resources if calculated at runtime (that was the topic of my Master's thesis). However, there have barely been any games featuring dynamic water. It's apparently not interesting enough to design games around.
Just remember you get Starfield for free, so it's kind of like a $70 savings. I personally wanted the card plus the game, so it really works out as a win-win. I think $430 for the 7800 XT sounds reasonable for someone who hasn't upgraded since 2016 and currently has an RX 480. It might be a midrange card, but I honestly don't know what more I could ask for if it runs everything maxed at 1440p and most things in 4K at 60fps or over.
As someone who dropped $550 on a 1080 (not Ti) years ago and is still using it for VR, I could be tempted to go AMD. Nvidia has gone off the deep end with pricing and I can't see myself going that route. I'm starting to hit some bottlenecks and I'm sure I'll upgrade in the next 3 years.
Too bad amd has always lagged in VR performance. Especially if you were trying to do wireless quest, I think the encoding latency was quite a bit higher.
I use an AMD GPU and I stream to my Quest 2 a lot. I've only ever had one app have issues: Google Earth VR, which I understand they quit developing. I've never noticed latency, either.
I nabbed a 6800 XT for $550 last fall, also for VR, and the same one is even cheaper now (but the 7800 XT probably has it beat, assuming the same or greater performance).
I’m in the same situation, and at the current overpriced cost per frame I’ll either be waiting for the next gen, waiting for the secondhand market once the next gen drops, or just keeping the card another year.
“I just wish there were an option between the $300 to $400 marks that offered enough performance to push us firmly into the 1440p era.” That was my colleague Tom Warren’s conclusion reviewing the $399 Nvidia RTX 4060 Ti and $269 AMD Radeon RX 7600.
The company claims both cards can average over 60fps in the latest games at 1440p with maximum settings and no fancy upscaling tricks — including troubled PC ports like The Last of Us Part I and Star Wars Jedi: Survivor.
AMD says FSR 3 is already slated for Cyberpunk 2077, Forspoken, Immortals of Aveum, Avatar: Frontiers of Pandora, Warhammer 40,000: Space Marine 2, Frostpunk 2, Squad, Starship Troopers: Extermination, Black Myth: Wukong, Crimson Desert, and Like a Dragon: Infinite Wealth.
That way, you’ll be able to inject extra frames into any DX10 or DX11 game with your AMD graphics card, no developer support required.
“Our research tells us 70 percent of customers are willing to compromise on image quality,” says AMD gaming chief Frank Azor.
AMD will sell its RX 7800 XT reference design directly at AMD.com with the two-fan cooler you see in the render atop this post.
I think engineers and programmers need to think far more carefully about how commodity 16GBs of VRAM at 500 GBps + 20TFlop parallel machines can improve their programs.
There's more to programming than just "Run Tensorflow 5% faster".
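To make those numbers concrete, here's a back-of-envelope sketch (my own arithmetic, using the figures from the comment above) of what 500 GB/s of bandwidth against 20 TFLOP/s of compute implies: a kernel is memory-bound unless it does roughly 40 floating-point operations per byte it reads.

```python
# Figures from the comment above: ~500 GB/s memory bandwidth,
# ~20 TFLOP/s compute, 16 GB of VRAM.
bandwidth = 500e9   # bytes per second
compute = 20e12     # floating-point ops per second
vram = 16e9         # bytes

# FLOPs the card can execute in the time it takes to stream one byte.
# Below this "arithmetic intensity", the kernel waits on memory.
balance_point = compute / bandwidth
print(balance_point)          # → 40.0 FLOPs per byte

# Streaming all of VRAM once takes only:
print(vram / bandwidth)       # → 0.032 seconds
```

The takeaway matches the commenter's point: with hardware that can touch its entire working set ~30 times per second, the bottleneck in many programs is how the data is laid out, not raw compute.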
GPUs are specialized computers used to accelerate matrix multiplication. Traditionally, matrix multiplication was used to transform 3D objects relative to the camera/screen — a very common operation in video games. There's a bit of math and study involved, but the core idea is multiplying each vertex of a model by a transform matrix.
GPUs are designed to perform this operation trillions of times per second, because video games have a lot of objects on screen that move around (rotate, animate, etc.) and all of them need to be transformed this way every frame. So any gaming PC has a powerful, dedicated processor specifically designed for this matrix-multiplication operation.
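The per-vertex operation described above can be sketched in a few lines. This is an illustrative example of my own (the matrix and vertex values are made up), showing the homogeneous-coordinate matrix-vector multiply that GPUs parallelize across millions of vertices:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A classic transform: translate by (1, 2, 3) using homogeneous coordinates.
translate = [
    [1, 0, 0, 1],
    [0, 1, 0, 2],
    [0, 0, 1, 3],
    [0, 0, 0, 1],
]

vertex = [5.0, 5.0, 5.0, 1.0]  # w = 1 marks a position, not a direction
print(mat_vec(translate, vertex))  # → [6.0, 7.0, 8.0, 1.0]
```

A GPU runs this same tiny computation for every vertex of every object, every frame, which is why the hardware is built around doing it massively in parallel.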
About 15 years ago, deep-learning researchers started expressing neural networks as tensor operations, which reduce to matrix multiplication — and can therefore be accelerated on a GPU. GPUs, after all, are the fastest matrix-multiplication machines we've got.
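Here's a minimal sketch (my own illustration, with made-up weights) of why that reformulation works: a fully connected neural-network layer is just a matrix-vector multiply plus a nonlinearity — exactly the operation GPUs already accelerated for graphics.

```python
def dense(weights, bias, x):
    """One dense layer: y = relu(W @ x + b), written out by hand."""
    y = [sum(w * xi for w, xi in zip(row, x)) + b
         for row, b in zip(weights, bias)]
    return [max(0.0, v) for v in y]  # ReLU activation

# Toy weights for a 2-in, 2-out layer (illustrative values only).
W = [[0.5, -1.0],
     [2.0,  1.0]]
b = [0.0, -1.0]

print(dense(W, b, [1.0, 1.0]))  # → [0.0, 2.0]
```

Stack layers like this, batch many inputs into a matrix, and training a network becomes a long chain of large matrix multiplications — which is exactly the workload GPUs were already built for.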
GPUs are used in machine learning to train neural networks, and nearly everyone in the AI field uses Nvidia GPUs. Nvidia also puts special machine-learning cores on its GPUs — Tensor cores — to accelerate DLSS, alongside dedicated RT cores for ray tracing. AMD's cards lack equivalent dedicated hardware, which is why ray tracing performance on AMD is significantly slower.