Earlier in my career, I compiled TensorFlow with CUDA/cuDNN (NVIDIA) in one container, and on another machine compiled it with ROCm (AMD) in a second container, for a computer vision task detecting cancerous tissue. GPU-accelerated training was significantly faster with the NVIDIA libraries.
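For anyone curious what that two-container setup roughly looks like today: a minimal sketch using the public `tensorflow/tensorflow` and `rocm/tensorflow` Docker Hub images (tags and device flags are illustrative; your driver/runtime setup may differ):

```shell
# NVIDIA path: official TensorFlow GPU image already bundles CUDA/cuDNN.
# Requires the NVIDIA Container Toolkit for --gpus.
docker run --gpus all --rm tensorflow/tensorflow:latest-gpu \
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

# AMD path: ROCm build of TensorFlow; the GPU is exposed through the
# kernel fusion driver (/dev/kfd) and DRM devices (/dev/dri).
docker run --device=/dev/kfd --device=/dev/dri --rm rocm/tensorflow:latest \
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

Either way, an empty list from `list_physical_devices('GPU')` means the container can't see the GPU and training will silently fall back to CPU.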
It's not like you can't train deep neural networks without NVIDIA, but their deep learning libraries, combined with the tensor cores in Turing-era and later GPUs, make things much faster.
I'm holding off on building a new gaming rig until AMD sorts out better ray tracing and CUDA support. I'm playing on a Deck now, so I have plenty of time to work through my old backlog.
Last I heard, AMD is working on getting CUDA running on their GPUs, and I saw a post saying it was pretty complete by now (although I don't really keep up with that sort of stuff).
Man, I just built a new rig last November and went with NVIDIA specifically to run some niche scientific computing software that only targets CUDA. It took a bit of effort to get it to play nice, but it at least runs pretty well. Unfortunately, now I'm trying to update to KDE 6 and play games, and boy howdy are there graphics glitches. I really wish HPC academics would ditch CUDA for GPU acceleration, and maybe ifort + MKL while they're at it.
Yes. Haha. Amusing. But really... Blender, DaVinci Resolve, and a host of others. It's not a hobby; it's quite literally a (albeit small) portion of my income.
And it has nothing to do with anime nudes.
Anime/waifu is literally for pedos wanting a loophole. Sorry, not sorry.