Our first in-house benchmarks are in for the mobile GeForce RTX 5090, and initial impressions are a mixed bag, especially from a desktop perspective. https://www.notebookcheck.net/GeForce-RTX-5090-laptops-are-a-whopping-50-percent-slower-than-the-desktop-RTX-5090.989874.0.html
Laptop 5090 is 160% slower than Desktop 5090 in v3.3 Classroom OPTIX/RTX. Judged per watt, this is still ok. Judged per dollar, it is bad. Nvidia's notebook GPU pricing is robbery.
The future of portable gaming will be a classic notebook connected via Thunderbolt 5 to an eGPU (with a desktop GPU).
That's because it is NOT an RTX 5090 (GB202). The 5090 Mobile uses the GB203 die from the desktop RTX 5080, but with 24 GB of VRAM instead of 16 GB.
Putting the GB202 die in a laptop would be outright psychotic. It is BIG at 750 mm², and most of its performance would be hampered by the low power profile (175 W max vs 575 W on desktop).
Quote from: punmeistervstheworld on March 30, 2025, 13:32:40Putting the GB202 die in a laptop would be outright psychotic.
Armchair engineers tend to tell us what won't work or what is "psychotic" without any real architectural or scientific insight into the end product's performance.
Quote from: punmeistervstheworld on March 30, 2025, 13:32:40That's because it is NOT an RTX 5090 (GB202). The 5090 Mobile uses the GB203 die from the desktop RTX 5080, but with 24 GB of VRAM instead of 16 GB.
Putting the GB202 die in a laptop would be outright psychotic. It is BIG at 750 mm², and most of its performance would be hampered by the low power profile (175 W max vs 575 W on desktop).
Exactly. The mobile 5090 is 50% slower yet draws roughly 70% less power to get there. The performance per watt is decent. Price per performance? Not so much.
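The efficiency claim can be sanity-checked with the power limits quoted earlier in the thread (175 W mobile vs 575 W desktop) and the assumption of 50% of desktop performance; a minimal sketch:

```python
# Rough perf-per-watt comparison using figures quoted in this thread
# (assumed: laptop delivers 50% of desktop performance at its 175 W cap).
desktop_perf, desktop_watts = 1.0, 575  # desktop RTX 5090 power limit
laptop_perf, laptop_watts = 0.5, 175    # mobile RTX 5090 power limit

desktop_ppw = desktop_perf / desktop_watts
laptop_ppw = laptop_perf / laptop_watts

ratio = laptop_ppw / desktop_ppw
print(f"laptop is {ratio:.2f}x more efficient per watt")  # ~1.64x
```

Under those numbers the mobile part comes out about 1.6x more efficient per watt, which matches the "decent perf per watt" verdict above.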
Quote from: RobertJasiek on March 30, 2025, 11:13:53Laptop 5090 is 160% slower than Desktop 5090 in v3.3 Classroom OPTIX/RTX. Judged per watt, this is still ok. Judged per dollar, it is bad. Nvidia's notebook GPU pricing is robbery.
160% slower? How does that work?
OptiX also uses Tensor cores, for which software can, besides the CUDA and cuDNN libraries, also use Nvidia's TensorRT libraries. Depending on how well the software uses them, how the libraries are programmed, and how the chosen combination of library versions works together, high speed factors become possible for some applications. I have experienced this with a particular neural-net inferencing application: TensorRT using all GPU cores (Tensor, RT, CUDA) ran 2.95 times faster than OpenCL using only the CUDA cores in a basic manner.