
Strix Point APU handheld performance simulation: Ryzen AI 9 HX 370 shines in gaming even when tested at just 17 W

Started by Redaktion, August 01, 2024, 22:15:30


Redaktion

AMD's Zen 5-based Strix Point APUs have proven to be quite efficient, which could be a huge asset if and when the Strix APUs start to appear in handheld game consoles. To see what kind of handheld performance the Strix APUs could deliver, one YouTuber has put the Ryzen AI 9 HX 370 through its paces in a bunch of games at just 17 W.

https://www.notebookcheck.net/Strix-Point-APU-handheld-performance-simulation-Ryzen-AI-9-HX-370-shines-in-gaming-even-when-tested-at-just-17-W.870436.0.html

Nameless

This is what I believe should happen...

Put this in a handheld console at lower wattage, then allow the console to be "docked" and run at full power - something like the sketch below.

Would be an absolute game changer.
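
As a rough illustration of the handheld/docked idea above - a hypothetical sketch only, assuming a Linux-based device where the open-source ryzenadj utility can set the APU power limits (the 17 W / 54 W values and the dock-detection hook are assumptions, not anything a vendor has announced):

#!/usr/bin/env python3
# Hypothetical sketch: raise the APU power limits when a dock is detected and
# drop back to handheld limits on battery. All values are illustrative only.
import subprocess

HANDHELD_MW = 17_000   # ~17 W sustained, the power level used in the article's test
DOCKED_MW = 54_000     # ~54 W, roughly the HX 370's upper configurable TDP (assumption)

def apply_limits(milliwatts: int) -> None:
    # ryzenadj takes its limits in milliwatts and needs root privileges.
    subprocess.run(
        ["ryzenadj",
         f"--stapm-limit={milliwatts}",
         f"--slow-limit={milliwatts}",
         f"--fast-limit={milliwatts + 5_000}"],  # a little boost headroom
        check=True,
    )

def on_dock_event(docked: bool) -> None:
    apply_limits(DOCKED_MW if docked else HANDHELD_MW)

if __name__ == "__main__":
    on_dock_event(docked=False)  # start in handheld mode

A shipping handheld would of course handle this in firmware or the vendor's own software rather than a user script, but the split between a low handheld limit and a higher docked limit is the same.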


NikoB (old name bl)

For any dynamic game, especially multiplayer ones (strategy games excepted), anything below 60 fps at "high" quality settings can be considered a priori unplayable.

And I will repeat it once again: the main problem of iGPUs is slow RAM, which is also shared with the OS, drivers and other software, on top of the iGPU itself.

Until the SoC die gets dedicated VRAM with a bandwidth of more than 300 GB/s, there will be no serious progress in iGPUs, no matter how many shader units they add, just as there will be no serious improvement in multi-threaded performance in real software from adding more CPU cores, for exactly the same reason.

The bandwidth of x86 RAM lags several times behind what the hardware needs, which is why Intel/AMD are desperately enlarging processor caches that are meaningless under serious loads. It is a poultice for a dead man.

Until there is a sharp improvement in the x86 memory subsystem, progress will crawl along at a snail's pace compared to dGPUs. The same goes for Apple, where a theoretical 400 GB/s is declared for the top configurations, while in reality the Max versions deliver a shameful 120 GB/s, which means shamefully poor memory-controller efficiency.

With a real 400 GB/s, Apple's iGPU chips would long ago have confidently caught up with a 4060 or better, but in reality the performance is much worse, which once again shows where the real bottleneck of every iGPU lies.
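
For context, a back-of-the-envelope comparison of peak theoretical memory bandwidth; the specific LPDDR5X and GDDR6 configurations below are assumptions (a typical Strix Point laptop and an RTX 4060-class dGPU), not measured figures:

# Peak theoretical bandwidth: bus width (bits) / 8 * transfer rate (GT/s) = GB/s
def peak_bw_gb_s(bus_bits: int, gt_per_s: float) -> float:
    return bus_bits / 8 * gt_per_s

# Shared LPDDR5X-7500 on a 128-bit bus (common Strix Point laptop config -- assumption)
print(peak_bw_gb_s(128, 7.5))    # 120.0 GB/s, shared by CPU, iGPU, OS and drivers
# Dedicated GDDR6 at 17 GT/s on a 128-bit bus (RTX 4060-class dGPU -- assumption)
print(peak_bw_gb_s(128, 17.0))   # 272.0 GB/s, for the GPU alone
# A figure above 300 GB/s would need something like a 256-bit bus at ~10 GT/s
print(peak_bw_gb_s(256, 10.0))   # 320.0 GB/s

Even in theory, the shared pool is less than half of what a mid-range dGPU gets to itself, which is the gap being described above.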

SomebodyYouUsedToKnow

Quote from: NikoB (old name bl) on August 02, 2024, 13:32:17For any dynamic game, especially multiplayer ones (strategy games excepted), anything below 60 fps at "high" quality settings can be considered a priori unplayable.

...

Until there is a sharp improvement in the x86 memory subsystem, progress will crawl along at a snail's pace compared to dGPUs. ...

Not everyone can, should, or wants to use a dGPU, as they tend to be power-hungry and require dedicated cooling. And 30 fps isn't actually that terrible for most games - it's playable, if not amazing.

Real NikoB

Quote from: SomebodyYouUsedToKnow on August 03, 2024, 05:22:07And 30 fps isn't actually that terrible for most games - it's playable, if not amazing.
Complete nonsense, except for strategy games. I can't imagine how you can play anything dynamic at 30 fps, when even the video clearly shows the problems of a frame rate that is too low...

A minimum of 60 fps at high quality; everything below that can be written off if the goal is dynamic games.
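
For reference, the frame-time arithmetic behind the 30-vs-60 fps argument (a trivial sketch in Python):

# Frame time at a given frame rate: a frame at 30 fps takes twice as long to
# appear as one at 60 fps, which is why fast-paced games feel sluggish at 30.
for fps in (30, 60, 120):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")
# 30 fps -> 33.3 ms, 60 fps -> 16.7 ms, 120 fps -> 8.3 ms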

dGPUs consume a lot for a simple reason: NVidia, cheating just as Intel did a few years ago, has been unable to develop a new architecture that delivers real performance progress without increasing consumption. This shows that NVidia's developers have run out of ideas, that silicon process nodes are approaching their limits, and that the performance-per-watt curve is getting flatter and flatter - and therefore less and less compelling for consumers looking to buy new gadgets for their performance gains.

That is why the laptop industry, to its shame, has reached consumption levels that were once desktop territory - 250-300 W - completely inadequate even for 18" laptop chassis.

A "gaming" laptop should not consume more than 120 W in total at full CPU and dGPU load (as old ones did many years ago) - or better yet, no more than 80 W - and a regular laptop no more than 40 W. But then NVidia/Intel/AMD, squeezed into such strict consumption limits, simply would not be able to deliver a consistent and, most importantly, significant performance increase from one chip series to the next. Cheating with consumption has led Intel/NVidia into a dead end. Moreover, Intel was the first to fall into its own trap...

Hotz

Quote from: Real NikoB on August 03, 2024, 11:11:07the laptop industry ... has reached consumption levels that were once desktop territory - 250-300 W - completely inadequate

A "gaming" laptop should not consume more than 120 W in total at full CPU and dGPU load (as old ones did many years ago) ..., and a regular laptop no more than 40 W. But then NVidia/Intel/AMD ... simply would not be able to deliver a consistent ... performance increase from one chip series to the next. Cheating with consumption ...

Agreed, I would like that - these limits you mentioned. Then and only then would we see real improvements from generation to generation, and I bet they would be quite small. They are in fact cheating with more and more power consumption.

Ahmed360

Man!
I am loving these PC Handhelds so damn much!

As a long-time huge handhelds fan (since the Game Boy Color), it's a dream come true to have all this power in our hands.

Thank you AMD for taking this market segment seriously.
