Topic summary

Posted by lol
 - August 17, 2024, 12:28:13
Quote from: A on August 17, 2024, 05:36:10The market for AI that GPU vendors are most interested in is servers for precision workloads.
For training*
Quote from: A on August 17, 2024, 05:36:10Nobody is buying a GPU to run quantized models.
Reality: models are very rarely run non-quantized. If you think ChatGPT isn't quantized, you are very naive.
Quote from: A on August 17, 2024, 05:36:10Those were often just run on the CPU.
Yeah, right. Try running a 6-bit quantized 70B LLM on your CPU. Or better yet, don't join a conversation you know nothing about.
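The arithmetic behind that jab is easy to check. A rough sketch of the weight-storage footprint, counting only parameters times bits per weight (real runtimes also need memory for the KV cache, activations, and framework overhead):

```python
def model_size_gb(params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes).
    Ignores KV cache, activations, and runtime overhead."""
    return params * bits_per_weight / 8 / 1e9

print(model_size_gb(70e9, 16))  # fp16 baseline: 140.0 GB
print(model_size_gb(70e9, 6))   # 6-bit quantized: 52.5 GB
print(model_size_gb(70e9, 4))   # 4-bit quantized: 35.0 GB
```

At roughly 52 GB, the 6-bit weights alone exceed a 4090's 24 GB of VRAM; they fit in 64 GB of system RAM, but CPU inference at that size is painfully slow, which is the point being made.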
Posted by A
 - August 17, 2024, 05:36:10
Quote from: captcha on August 16, 2024, 20:59:38Well, the M3 Max GPU is around a mobile 4070, just as expected.
But the M3 Max NPU is on par with the 4090 in quantized models (the ones you will be using locally).

What's surprising is that the iPad's M4 NPU and the 8 Gen 3 NPU are absolutely destroying the 4090 in quantized models.

So NPUs are the next big thing, and 4090s for local AI inference will fade away. Nice, new insights into what's what are always welcome.

The market for AI that GPU vendors are most interested in is servers for precision workloads. Nobody is buying a GPU to run quantized models; those were often just run on the CPU. And now NPUs will make that more efficient.
Posted by captcha
 - August 16, 2024, 20:59:38
Well, the M3 Max GPU is around a mobile 4070, just as expected.
But the M3 Max NPU is on par with the 4090 in quantized models (the ones you will be using locally).

What's surprising is that the iPad's M4 NPU and the 8 Gen 3 NPU are absolutely destroying the 4090 in quantized models.

So NPUs are the next big thing, and 4090s for local AI inference will fade away. Nice, new insights into what's what are always welcome.
Posted by Toortle
 - August 16, 2024, 14:06:31
Lol.
Posted by Redaktion
 - August 16, 2024, 13:06:50
The first version of Geekbench AI is here, and it lets you check the AI performance of your PC's NPU, GPU, and CPU. While Primate Labs is presenting the tool as new benchmarking software, it's basically Geekbench ML under a marketing-friendlier name.

https://www.notebookcheck.net/New-Geekebench-AI-benchmark-lets-you-test-your-PC-s-AI-performance.876170.0.html