When it comes to running Large Language Models (LLMs) and other intensive artificial intelligence workloads locally, colossal desktop PCs powered by hyper-expensive NVIDIA GeForce RTX GPUs tend to reign supreme.
Even the best discrete GPU quickly runs out of VRAM under these conditions, though, and that’s where AMD’s poorly…
