Quote from: Flabs and such on April 17, 2026, 21:14:31
It's a 10-year-old GPU, not completely useless, but definitely not gaming material at all.
Must be a slow news day in tech when every news site starts broadcasting whatever idiotic fad some influencer desperate for attention came up with.
I've got four of these for inference, about $200 total (if you know where to look and wait for discounts). They're quite capable: 10-14 tokens per second at 8K context on 70B models, or better on MoE models with CPU offload for the higher quants. Having four of them means 32GB of GPU RAM in total to serve a single LLM.
Admittedly I'm running these in an old Xeon box, which has enough PCIe lanes for this many GPUs; you may not manage four on a desktop CPU. But either way, GeForce GTX 1070 Ti-level performance with low power draw is still pretty much a bargain, especially when you can't even get 8GB of RAM for $50.
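For anyone curious how the four-card split works in practice, here's a rough sketch using the llama-cpp-python bindings. To be clear, this is illustrative, not my exact setup: the model path, quant, and split ratios are placeholders, and you'd tune n_gpu_layers to your own VRAM headroom.

# Minimal sketch: one GGUF model spread across four 8GB cards with
# llama-cpp-python. Path and ratios are hypothetical placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="models/70b.Q4_K_M.gguf",     # placeholder GGUF quant
    n_ctx=8192,                              # the 8K context mentioned above
    n_gpu_layers=-1,                         # -1 = offload every layer to GPU
    tensor_split=[0.25, 0.25, 0.25, 0.25],   # even split across the four cards
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])

For a higher quant that won't fit in 32GB, you'd set n_gpu_layers to a fixed number instead of -1, so the remaining layers run on the CPU; that's the CPU offload trade-off that drags the speed down toward the low end of that 10-14 t/s range.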