
Apple MacBook Pro 14 2023 M3 Max Review - The fastest CPU in a 14-inch laptop

Started by Redaktion, November 21, 2023, 00:15:27


RobertJasiek

Let me summarise some findings.

If RAM, VRAM or Unified Memory is too small, execution is, as expected, impossible or too slow.

Large language models are an example where VRAM or Unified Memory must not be too small. If an x64 machine has too little VRAM while an M3 Max has enough Unified Memory, the latter is much faster. An x64 machine with enough VRAM can be just as expensive and is then significantly faster. Efficiency needs evaluation. The M3 Max's fans generate noise and high-frequency coil whine, but dB values are missing. x64 loudness depends on the computer model; usually, but not necessarily, louder.

Usually, image AI, 3D rendering and video rendering are roughly 4-5 times faster on a 4090 desktop than on an M3 Max. There can be exceptions. Efficiency needs evaluation. Usually, the M3 Max has low noise, but x64 machines with similar speeds can also be configured for low noise, and x64 machines with intermediate speeds can be configured for not quite low but still acceptable noise.

Software (such as machine-learning tools for abstract games) that uses CUDA and especially Tensor cores well is very much faster and much more efficient per TDP, but at much higher absolute TDPs, on x64 than on an M3 Max at the same price.

Software parameters and other configurations play a great role.

It is crucially important to get the right device for the right job. A wrong device is over-priced, slow or does not work at all.

A

Quote from: RobertJasiek on November 27, 2023, 11:10:03If x64 has too little VRAM
Which is always the case for laptop x86 GPUs, except with the lightest of models. Apple Silicon can work with big (normal) models fast; x86 laptops basically can't.

Quote from: RobertJasiek on November 27, 2023, 11:10:03M3 Max fans generate noise and high frequency coil whine
Fans are barely audible at 1950 RPM. Coil whine is there, but it's more funny than loud.

Quote from: RobertJasiek on November 27, 2023, 11:10:03image AI
Same limitations with VRAM on x86: it can only do small image generations. I've done huge 4K/8K upscales on Apple Silicon.

Quote from: RobertJasiek on November 27, 2023, 11:10:03x64 with similar speeds can also be configured to have low noise
If you do, it'll immediately dip below Apple Silicon Geekbench performance, much like it does every time you pull the AC plug.

Quote from: RobertJasiek on November 27, 2023, 11:10:033D rendering
Yes.

Quote from: RobertJasiek on November 27, 2023, 11:10:03video rendering
On par. The M will beat the 4090 in many tasks, e.g. export. In even more tasks if the x86 machine is unplugged from AC.

Quote from: RobertJasiek on November 27, 2023, 11:10:03It is crucially important to get the right device for the right job.
Today only 2 types of productivity really require x86: games and 3D rendering. And the latter is a so-so category: it's fun to run the BMW benchmark in several seconds and all, but in reality a 4090 is not enough, and once your projects start taking 10+ minutes to render (which is like 90% of actual projects), people immediately switch to external render farms. No one's doing serious 3D rendering on a laptop. Remember, in the professional 3D world even software licenses are like $2K/year.

So if you cherry-pick benchmarks, you can make either of these two look bad or very bad. In reality they are roughly on par, with Apple Silicon being more of a LAPTOP and x86 being a PORTABLE DESKTOP, like that $4300 "laptop" the other guy advertised to me yesterday, with 4 hr web-surfing battery life.

A

Just an example of plugged vs. unplugged x86:

3DMark Wild Life Extreme:
M2 Max 38C (M2, not M3) unplugged - 150.8 FPS

MSI Z16P 3080Ti plugged - 119.6 FPS
MSI Z16P 3080Ti unplugged - 27.5 FPS

MSI GT77HX 4090 plugged - 259.5 FPS
MSI GT77HX 4090 unplugged - 64.2 FPS
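A quick way to see the scale of that gap is to compute how much performance each machine retains on battery. A minimal Python sketch; the FPS figures are the ones quoted in this post:

```python
# 3DMark Wild Life Extreme scores quoted above, in FPS
plugged = {"MSI Z16P 3080Ti": 119.6, "MSI GT77HX 4090": 259.5}
unplugged = {"MSI Z16P 3080Ti": 27.5, "MSI GT77HX 4090": 64.2}

for name in plugged:
    # fraction of plugged performance kept when running on battery
    retained = unplugged[name] / plugged[name] * 100
    print(f"{name}: {retained:.0f}% of plugged performance on battery")
```

Both x86 machines keep roughly a quarter of their plugged score, while the M2 Max number above was itself measured unplugged.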


RobertJasiek

Quote from: A on November 27, 2023, 11:59:35Which is always the case of laptop x86 GPU except for the lightest of models.

Can we please speak of x64 (computers with 64-bit CPUs)? The days of x86 (computers with 32-bit CPUs) are long gone.

I guess what you want to say is: "except for the lightest of machine learning language models".

Because the comparatively small amount of VRAM of x64 notebooks with sufficient RAM works fine for many tasks. There are well-known exceptions, of course, and Nvidia tries to upsell to 4090 (desktop or laptop).

QuoteApple Silicon can work

Not Apple Silicon per se but Apple Silicon with a sufficiently large amount of Unified Memory.

QuoteFans barely audible

There have also been different reports.

QuoteSame limitations with VRAM on x86, can do only only small image generations.

Not on x64 computers per se, but on those with insufficient VRAM. (Among which there are too many notebooks indeed.)

QuoteM will beat 4090 in many tasks e.g. export.

Vice versa for many more video tasks, according to reviews I have seen.

Quoteunplugged from AC.

You can emphasise the power plug, but the reality is: an MBP also needs it soon under high load.

QuoteToday only 2 types of productivity really require x86 - games and 3d rendering.

If these are the only such types you know, educate yourself! In fact, I have told you countless times that the software I use, of a third type, requires an Nvidia GPU to get acceptable speeds. All other hardware is just hopelessly slow for that purpose.

QuoteNo one's doing serious 3d rendering on laptop.

Some do it. I guess their objects are small enough to make it feasible.

QuoteRemember in professional 3d world even software licenses are like $2K/year.

Sure, if we speak about the big boys.


A

Quote from: RobertJasiek on November 27, 2023, 13:21:19Can we please speak of x64 (computers with 64 bit CPUs)? The days of x86 (computers with 32 bit CPUs) are long gone.
x86 is a family of architectures, and is the correct name. x86-64 is just a subset of it. One can freely use both names.

Quote from: RobertJasiek on November 27, 2023, 13:21:19Not Apple Silicon per se but Apple Silicon with a sufficiently large amount of Unified Memory.
The base M3 Max is enough for 70B models; x86 laptop sellers will never tell you that 8/16/24 GB will not be capable of AI tasks or of rendering big scenes. Basically, you will not be able to find an x86 laptop with enough VRAM.

Quote from: RobertJasiek on November 27, 2023, 13:21:19You can emphasise the power plug but reality is: also MBP need it soon under high load.
The reality is you will need the AC plug MUCH sooner on x86 under any load lower than full. Unless you've forgotten that the 16-inch M3 Max gets 15 hrs of Wi-Fi web surfing. And keep in mind x86 is MUCH crippled in performance on battery, so "full load" is different.

Quote from: RobertJasiek on November 27, 2023, 13:21:19If these are the only such types you know, educate yourself! In fact, I have told you countless times that my used software of a third type requires a Nvidia GPU to get acceptable speeds. Every other hardware is just hopelessly slow for that purpose.
Be more specific, what software exactly is it, educate me.


A

Quote from: RobertJasiek on November 27, 2023, 13:21:19Vice versa for many more video tasks, according to reviews I have seen.
Confirmation bias. All you guys usually do is watch x86-specific reviews with Cinebench as the main benchmark, zero testing on battery, etc. ) Just like that guy yesterday who simply claimed out of thin air that the 7945HX3D is 40% faster than the M3 Max )

Plum

@A

Re games: e.g. watch?v=F7QDC5j6KM0&t=2s

QuoteReality is you will need AC plug MUCH sooner on x86 under any load lower than full. Unless you've forgot 16 inch M3 Max has 15hrs wifi websurfing.

Do you really get such run times out of your MacBook? Unfortunately, I wasn't even able to get a full 8-hour workday out of a 16-inch M1 Pro, using only productivity (web) apps such as Google Meet, Gmail, HubSpot, Slack, Jira, Discord, Telegram, etc.

I am typically running dozens of tabs at the same time for productivity reasons and would always run very low on battery after only 4-5 hours.

So while I agree that Macbooks are more efficient, and run longer on battery, unfortunately the full workday on battery remains a dream for me.

Re AI use cases: I don't know much about this topic but I am curious: In the grand scheme of things, is Apple a relevant player in that area? I was under the impression that Nvidia is spearheading there and that AI was one of the reasons why their stock price skyrocketed recently.

RobertJasiek

Quote from: A on November 27, 2023, 13:54:52Base M3 Max are enough for 70B models

And with 200B language models, you throw away your MBP.

Quotex86 laptop sellers will never tell you 8/16/24Gb will not be capable of AI tasks

Because it is wrong. It always depends on the AI task, the AI model configuration and the usage.

E.g., as I have said before, even 1 GB of VRAM is more than enough for the AI task I use.

QuoteBasically you will not be able to find x86 with enough VRAM

You mean: a) notebooks; b) only those AI tasks needing more VRAM.

QuoteReality is you will need AC plug MUCH sooner on x86 under any load lower than full.

We are comparing to the M3 Max and not the M3 Pro, right? Windows notebooks deliver up to 17 h of Wi-Fi use.

QuoteUnless you've forgot 16 inch M3 Max has 15hrs wifi websurfing.

See. Some Windows notebooks offer LONGER times.

QuoteAnd keep in mind x86 is MUCH crippled in performance on battery

This is the current rumour spread by NBC writers, but IIRC it is wrong. More accurately, BY FAR MOST x64 notebooks are crippled in performance on battery, but not all.

Quotewhat software exactly is it, educate me.

KataGo for TensorRT together with the necessary Nvidia GPU libraries and a GUI, i.e., Go-playing/analysis software.

RobertJasiek

Quote from: A on November 27, 2023, 14:04:43Confirmation bias.

Probably.

QuoteAll you guys usually do is watching x86-specific reviews with Cinebench as the main benchmark,

Of course not.

RobertJasiek

Quote from: Plum on November 27, 2023, 14:45:36Re AI use cases: I don't know much about this topic but I am curious: In the grand scheme of things, is Apple a relevant player in that area? I was under the impression that Nvidia is spearheading there and that AI was one of the reasons why their stock price skyrocketed recently.

Hardware: It is mostly Nvidia GPUs, AMD CPUs, maybe also IBM mainframes, Microsoft for servers, Amazon for servers.

Software: Currently, Apple is a small fish. The big fish are Google (who bought DeepMind) and some other big companies, but I forget which (might be Microsoft, but I'm unsure). Then there are the specialist AI companies, especially those for language or image models, until they are bought up.

A

Quote from: RobertJasiek on November 27, 2023, 15:07:34And with 200B language models, you throw away your MBP.
Even GPT-4 is 16 × 111B models with MoE, so it's not going to happen anytime soon. )

Quote from: RobertJasiek on November 27, 2023, 15:07:34Because it is wrong. It always depends on what AI task, the AI model configuration and the usage. E.g., as I have told before, 1GB VRAM is too much for the AI task I use.
You can't cheat math: a 70B LLM is "smarter" than a 33B, a 33B model is smarter than a 15B, and a 15B model is smarter than a 7B. So with x86 laptop VRAM you are limited to the worst of them.
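For a rough sense of why laptop VRAM caps the usable model size: at 4-bit quantization (common in llama.cpp) each weight costs half a byte, so the weights alone need about params/2 bytes, plus some headroom for the KV cache and activations. A minimal sketch; the 1.2 overhead factor is my own assumption, not a measured value:

```python
def approx_mem_gb(params_billion, bits_per_weight=4, overhead=1.2):
    """Rough memory needed to run a quantized LLM, in GB.

    overhead covers KV cache and activations (assumed factor, not measured).
    """
    return params_billion * bits_per_weight / 8 * overhead

for size in (7, 15, 33, 70):
    print(f"{size}B at 4-bit: ~{approx_mem_gb(size):.0f} GB")
```

By this estimate a 16 GB laptop GPU tops out somewhere in the 15-33B class, while 48-64+ GB of Unified Memory comfortably fits a 4-bit 70B model, which is consistent with the claim above.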

Quote from: RobertJasiek on November 27, 2023, 15:07:34You mean: a) notebooks; b) only those AI tasks needing more VRAM.
All modern NNs require a lot of VRAM today.

Quote from: RobertJasiek on November 27, 2023, 15:07:34We are comparing to M3 Max and not to M3 Pro, right? Windows notebooks deliver up to 17h for Wifi use.
Quote from: RobertJasiek on November 27, 2023, 15:07:34See. Some Windows notebooks offer LONGER times.
*at crippled performance
Are they comparable in performance to the M Max, or are they more in MacBook Air league?

Quote from: RobertJasiek on November 27, 2023, 15:07:34KataGo
Should run on Apple Silicon, would be interesting to compare.

Quote from: RobertJasiek on November 27, 2023, 15:10:13Of course not.
Yeah, people here didn't know about their GPU limitations; people didn't know performance tests on x86 are run at one performance level and battery life tested at another, sometimes 4 times worse; people didn't know Cinebench is a benchmark based on Intel code with years, if not decades, of hand-optimization for x86; Blender still has about half of its Apple Silicon support items not done in its git issue; etc., etc.
Clearly no one ever pulled their nose out of x86 reviews. Well, at least three years later guys agreed Intel is sh*t.

A

Quote from: Plum on November 27, 2023, 14:45:36Do you really get such run times out of your Macbook? Unfortunately I wasn't even able to get a full 8 hours workday out of 16 inch M1 Pro, using only productivity (web) apps such as Google Meet, GMail, Hubspot, Slack, Jira, Discord, Telegram etc.
Because that's "web surfing". NBC's tests are quite standardized:
QuoteWi-Fi mode: the possible battery life while surfing the Internet via Wi-Fi with medium brightness (~150 cd/m²) and power-saving options ("balanced" mode) switched on. We measure the runtime by letting the device run an automatic script (HTML 5, JavaScript, no Flash - update 03.05.2015 v1.3), which picks a mix of websites and switches between them every 30 seconds.
This is how they get 15 hours. It's far from daily usage, but since it's standard for all laptops, you can see how they perform COMPARED to each other. This doesn't mean you will always get 15 hrs; you will get something between the Wi-Fi web-surfing and full-load times.

Quote from: Plum on November 27, 2023, 14:45:36I am typically running dozens of tabs at the same time for productivity reasons and would always run very low on battery after only 4-5 hours.
Install the Stats app; it has a System Total power metric. You will be able to monitor your usage and get your runtime by dividing 100 Wh (if you are running the 16-inch) by your average consumption.

Right now I have 1000 tabs open (not a joke), 2 heavy IDEs, PyTorch, Rider, Slack, Discord, Telegram, Mail, SourceTree, DBeaver, a 33B model loaded locally in llama.cpp (idle, but I was heavily using it a couple of hours before) and a couple of other apps; my _average consumption for today_ is 14.15 W, which gives me about 7 hours of battery life with this workload.
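The runtime estimate described above is just capacity divided by average draw. A minimal sketch using the figures from this post (100 Wh being the 16-inch battery capacity):

```python
def runtime_hours(battery_wh, avg_power_w):
    """Estimated battery runtime: capacity (Wh) over average draw (W)."""
    return battery_wh / avg_power_w

print(f"{runtime_hours(100, 14.15):.1f} h")  # prints "7.1 h"
```

This ignores battery wear and the fact that the draw spikes under load, so treat it as a ballpark figure, not a promise.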

Quote from: Plum on November 27, 2023, 14:45:36So while I agree that Macbooks are more efficient, and run longer on battery, unfortunately the full workday on battery remains a dream for me.
The Stats app will help you find the culprit. I know a guy who works in Photoshop, and his M1 Pro lasts for 9 hrs. I think with your workload you will be surprised to see that display brightness is the biggest part of power consumption. )

A

Quote from: Plum on November 27, 2023, 14:45:36I don't know much about this topic but I am curious: In the grand scheme of things, is Apple a relevant player in that area? I was under the impression that Nvidia is spearheading there and that AI was one of the reasons why their stock price skyrocketed recently.
The only thing Apple is using AI for right now on MacBooks is OCR and object recognition in your images while the MacBook is asleep. Seemingly on ALL of them after Sonoma dropped, not only the photo library. Of course it's all just to make Spotlight searches faster, and your data isn't going anywhere and isn't used for ad targeting (wink wink).

Officially it's only for Spotlight.
