Quote from: Alistair Karim on November 04, 2023, 12:48:20
Quote from: RobertJasiek on November 04, 2023, 07:07:16
OpenCL, which is 1.81 times slower than Nvidia CUDA libraries and 2.95 times slower than Nvidia TensorRT libraries. Multiply this by the factor of GFXBench's supported APIs versus a speed that should be supported and measurable for OpenCL.
Can you elaborate on that one, please? Do I understand correctly that you imply those MacBooks would underperform in general compute, as distinct from video games? Like all sorts of neural nets, or some sort of texture baking?
Apple M has some hardcoded video codecs, which give reasonable speeds for decoding, encoding or sometimes both, provided that a) the specific task uses the specific codec, b) there is enough unified memory for the task and c) bandwidth does not create a bottleneck.
I do not count video transcoding among "compute" tasks, which are about machine learning (including all sorts of neural nets), 3D rendering and other number-crunching applications.
Typically, "texture" is a graphics application and might belong to 3D gaming (we will have to await tests to see how each M3 chip performs) or 3D modeling (in which case performance depends on the texture algorithms used).
In GPU-dependent compute applications in general, Apple M underperforms very much to extremely, in both speed and efficiency. I have calculated it for the application I am familiar with: Go. See:
home.snafu.de/jasiek/AI_Computer.html#mozTocId147960
home.snafu.de/jasiek/AI_Computer.html#mozTocId752162
The M3 Max with 128 GB unified memory will perform a bit better than the M1 and M2 (though still factors worse than a desktop RTX 4070), but the other M3 chips might perform similarly to before (or worse for affected applications, due to the decreased bandwidth).
Note the relevance of libraries for proprietary GPU cores (such as Nvidia CUDA and Tensor cores). The same GPU (or, say, a neural component on an Apple chip) is a few times faster with good (e.g. Nvidia) libraries and applications using them. The TDP, the die area used for such cores and their counts give a rough first hint of what to expect from the hardware. Even then, development of such cores takes several to many years. Nvidia has long experience, while AMD and Intel fall short in this respect. Do not expect Apple to quickly catch up on two decades of Nvidia's hardware development.
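To make the library effect concrete, here is a small illustrative calculation using only the backend factors quoted above (OpenCL 1.81 times slower than CUDA, 2.95 times slower than TensorRT, all on the same Nvidia GPU). The baseline throughput of 100 is a hypothetical number for illustration, not a measurement:

```python
# Illustrative arithmetic only: same GPU, different libraries.
# Factors are the ones quoted in the thread; the baseline of 100
# is a made-up reference value, not a benchmark result.

opencl = 100.0           # hypothetical OpenCL throughput baseline
cuda = opencl * 1.81     # same GPU accessed via CUDA libraries
tensorrt = opencl * 2.95 # same GPU accessed via TensorRT libraries

# Relative gain of TensorRT over CUDA on identical hardware:
ratio = tensorrt / cuda
print(f"TensorRT vs CUDA on the same GPU: {ratio:.2f}x")  # about 1.63x
```

The point of the arithmetic: the hardware is identical in all three rows; only the software stack changes, yet the spread is almost 3x end to end. That is why comparing an Apple chip under one API against an Nvidia chip under another says little without naming the libraries used on each side.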
Furthermore, from now on, expect to have to buy the top Apple M chip every time just to have even the dream of running behind Nvidia GPUs in compute applications.
In 5, 10 or 15 years, Apple might catch up to Nvidia notebooks for compute applications, presuming software is then written for Apple's libraries. Until then, Apple is what it is: a company for mainstream editing usage - video, photo, audio, maybe prosumer modeling. Even then, always check whether the specific task is supported. (As one should for every system.)