AMD Radeon 890M performs on par with entry-level Ampere dGPU in Geekbench

Started by Redaktion, July 15, 2024, 18:51:10

Redaktion

AMD's flagship iGPU, the Radeon 890M, has shown up on Geekbench. It performs on par with a Turing-based Nvidia dGPU in Vulkan. The benchmark essentially confirms AMD's claim of a 32% performance uplift from RDNA 3 to RDNA 3.5.

https://www.notebookcheck.net/AMD-Radeon-890M-performs-on-par-with-entry-level-Ampere-dGPU-in-Geekbench.863297.0.html

Liam_best

My next ultrabook will definitely be an AMD one, bought on Black Friday. My mind is made up regardless of what Intel does: after the continuous problems the 13th and 14th gen have had, I will not make the same mistake again.

LittleMore

GPU with 1/3 more shaders surprisingly has 1/3 more performance.
Hellish.Mike

Restricted to 15 W, it is extracting more than 30% extra performance while still on 4 nm, so the improvements are purely architectural.

Impressive.

Hotz

Quote from: LittleMore on July 15, 2024, 20:34:53
GPU with 1/3 more shaders surprisingly has 1/3 more performance.

True that.

Although I recall lots of people in forums (actual AMD users) claiming that technically there couldn't be much improvement over the 780M, as it was already at the bandwidth limit. Looks like they were wrong...

NikoB

AMD is already a disgrace compared to Intel. It's high time for them to introduce HBM3 memory with a 512-1024-bit data bus on a chiplet inside the SoC, with VRAM ranging from 8 to 32 GB. Everything will change once the iGPU gets memory bandwidth of 500 GB/s and higher.

Then it will make sense to have more shaders and other blocks. As it stands, what's the point, with shamefully slow DDR5/LPDDR5?
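For a rough sense of the numbers NikoB is talking about, peak DRAM bandwidth is approximately bus width in bytes times per-pin data rate. A minimal sketch comparing a typical 128-bit LPDDR5X laptop configuration against a hypothetical 1024-bit HBM3 stack (the pin speeds are assumed common values, not figures from this thread):

```python
def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s: bus width in bytes x per-pin data rate."""
    return bus_width_bits / 8 * pin_rate_gbps

# Typical current laptop iGPU: 128-bit LPDDR5X at 7.5 Gb/s per pin
print(peak_bandwidth_gbs(128, 7.5))   # 120.0 GB/s
# Hypothetical 1024-bit HBM3 stack at 6.4 Gb/s per pin
print(peak_bandwidth_gbs(1024, 6.4))  # 819.2 GB/s, well past the 500 GB/s mark
```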

Hotz

Quote from: LittleMore on July 15, 2024, 20:34:53
GPU with 1/3 more shaders surprisingly has 1/3 more performance.

Almost forgot... actually it is surprising:


From the 760M (8 CUs) to the 780M (12 CUs) - 50% more shaders - the performance was only 10-15% better.

And now from the 780M (12 CUs) to the 890M (16 CUs) - 33% more shaders - the performance is fully 33% better.

So it's a bit strange, no??
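A quick sanity check of those percentages (CU counts as listed above; this is just the arithmetic, not new data):

```python
# CU counts as quoted in the post above
cus = {"760M": 8, "780M": 12, "890M": 16}

def pct_more(old: int, new: int) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(pct_more(cus["760M"], cus["780M"]))  # 50.0 -> yet only ~10-15% faster
print(pct_more(cus["780M"], cus["890M"]))  # 33.3 -> and a full ~33% faster
```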


Z5_Cache

Quote from: Hotz on July 17, 2024, 15:01:58
So it's a bit strange, no??

Most of the gains are coming from the bigger cache in Zen 5 compared to Zen 4 APUs. So no, not really. The people saying these chips are heavily bandwidth starved are still correct.

It would not surprise me at all if the lower 12 CU Strix config isn't much slower than the full 16 CU one, but I guess we'll see when the reviews land.

Anyways, none of this really matters. The bigger issue with AMD for the past 4 generations remains supply and availability. We will see how this changes (if at all) this generation.
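A toy model of Z5_Cache's point: every request served from on-die cache is a DRAM access avoided, so the bandwidth the shaders effectively see scales roughly as 1/(1 - hit rate). The hit rates below are made-up illustration values, not Zen 4/Zen 5 measurements:

```python
DRAM_BW_GBS = 120.0  # 128-bit LPDDR5X-7500 peak, as computed earlier

def effective_bandwidth_gbs(cache_hit_rate: float) -> float:
    # Only the miss fraction reaches DRAM, so the same DRAM bandwidth
    # sustains 1 / (1 - hit_rate) times as much request traffic.
    return DRAM_BW_GBS / (1.0 - cache_hit_rate)

for hit_rate in (0.30, 0.45):  # illustrative values only
    print(f"hit rate {hit_rate:.0%}: ~{effective_bandwidth_gbs(hit_rate):.0f} GB/s effective")
```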

Invot

I can't wait to see this chip inside a mini PC build. I feel like playing modern games on something roughly the size of an old-fashioned CD-ROM drive is awe-inspiring for an old and grey nerd like me.

NikoB

Quote from: Invot on July 18, 2024, 04:48:56
I feel like playing modern games on something roughly the size of an old-fashioned CD-ROM drive is awe-inspiring for an old and grey nerd like me.
To put it briefly, you cannot step into the same river twice.

Plants

Quote from: NikoB on July 16, 2024, 23:19:56
It's high time for them to introduce HBM3 memory with a 512-1024-bit data bus on a chiplet inside the SoC, with VRAM ranging from 8 to 32 GB. Everything will change once the iGPU gets memory bandwidth of 500 GB/s and higher.

Then it will make sense to have more shaders and other blocks. As it stands, what's the point, with shamefully slow DDR5/LPDDR5?

Aren't Apple pretty much doing that with on-chip unified memory? It's how their ARM computers can even be used for LLMs (for much cheaper than buying a PC with a 4090). I was wondering why nothing like that exists in the "Windows" realm.
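For scale, the wide-bus unified memory Plants mentions drops into the same bandwidth formula from earlier; a 512-bit LPDDR5-6400 bus (the configuration widely reported for Apple's M2 Max) lands near the 400 GB/s Apple quotes:

```python
# 512-bit LPDDR5 at 6.4 Gb/s per pin (widely reported for Apple's M2 Max)
print(512 / 8 * 6.4)  # 409.6 GB/s theoretical; Apple quotes 400 GB/s
```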

A

Quote from: Plants on July 28, 2024, 20:50:03
Aren't Apple pretty much doing that with on-chip unified memory? It's how their ARM computers can even be used for LLMs (for much cheaper than buying a PC with a 4090). I was wondering why nothing like that exists in the "Windows" realm.

Because outside of AI there isn't much practical use for it; the gains are marginal. Such things do exist for servers, though.
