Quote from: Ja on Yesterday at 16:30:17
why put only 16GB on your flagship card when 24GB + a slightly wider bus are maybe $50-100 more expensive?
They're gonna miss out on the entire LocalLLaMA crowd: people who buy four GPUs at once for their home server, and then go and buy a hundred for the company they work at if the cards prove good enough.
Also, considering that datacenters will start moving away from Ampere, nobody will even bother with them.
Lovely strategy by AMD..
Because they want to push people toward the more expensive hardware that is purpose-built for AI.
AMD has been treating their consumer AI stuff like crap.
Even if dev support for ROCm has improved, AMD's attitude toward consumer hardware has not.
Issues include:
1. DirectML performs terribly on Windows, leaving you with roughly 3x less performance.
2. On Linux you get much better performance, but they don't support the latest kernels, forcing you to either stay on an older kernel or recompile your own. They could fix this easily (it's a few minutes of work), but they won't until they pin themselves to the next kernel version, whichever that turns out to be.
3. Their iGPUs cause their dGPUs to break in AI workloads (how many companies punish people for buying more of their own brand's hardware?).
4. Instead of just versioning their support, they cut off old hardware outright, forcing owners into weird workarounds for problems that could easily be fixed.
5. This is on top of the fact that a lot of software requires extra hacks and effort to get ROCm working. It's getting better, but it's still quite a bit more painful than it needs to be.
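For points 3 and 4, the workarounds people usually pass around are environment variables. The variable names below are real ROCm knobs, but the values are just illustrative examples (the device index and GFX version depend on your specific card, so check `rocminfo` first):

```shell
# 3) Hide the iGPU so the ROCm runtime only enumerates the discrete card.
#    The index here assumes the dGPU is device 0 on your system; verify
#    with `rocminfo` or `rocm-smi`.
export ROCR_VISIBLE_DEVICES=0

# 4) Spoof the GFX target so an officially unsupported card reuses the
#    kernels built for a supported one. The classic example: an RX 6700 XT
#    (gfx1031) pretending to be gfx1030.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```

That second variable is exactly the kind of "weird workaround" meant above: the hardware runs fine with the spoofed target, AMD just doesn't ship binaries for it.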