
Apple MacBook Pro 16 2023 M3 Max im Test - Mit dem M3 Max gegen die HX-CPUs von AMD & Intel

Started by Redaktion, November 08, 2023, 21:37:54


Neenyah

Quote from: A on December 29, 2023, 02:45:24If you are talking desktops, Mac Studio M2Ultra 192GB RAM is $5600. Sooo let's go again, how many 4090s you will need for 160GB VRAM.
Apple has set a hard-coded limit at 75% of physical memory amount, so 144 GB is the limit. (developer.apple.com)

x86 systems can fall back on system RAM for VRAM purposes, so you can have 96 GB VRAM + 256 GB RAM and gain an additional 128 GB from it (most iGPUs can utilize up to 50% of installed RAM), for a total of 224 GB.
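The arithmetic behind these figures can be sketched as follows (the 75% and 50% fractions are the ones claimed in this thread, not official specifications):

```python
# Rough arithmetic behind the memory figures above. Assumptions from the posts:
# macOS caps GPU-addressable memory at 75% of unified RAM by default; a typical
# x86 iGPU can borrow up to 50% of system RAM.

def apple_gpu_limit_gb(unified_ram_gb, fraction=0.75):
    """GPU-addressable memory under the claimed macOS default cap."""
    return unified_ram_gb * fraction

def x86_total_gpu_gb(vram_gb, system_ram_gb, igpu_fraction=0.5):
    """Dedicated VRAM plus the share of system RAM an iGPU may borrow."""
    return vram_gb + system_ram_gb * igpu_fraction

print(apple_gpu_limit_gb(192))    # 144.0 GB, matching the limit above
print(x86_total_gpu_gb(96, 256))  # 224.0 GB, matching the x86 example
```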

RobertJasiek

Quote from: A on December 29, 2023, 02:45:24If we will start comparing laptops, picture for x86 will be very grim actually, no VRAM there at all.
So let's compare desktops to desktops, laptops to laptops? If you are talking desktops, Mac Studio M2Ultra 192GB RAM is $5600. Sooo let's go again, how many 4090s you will need for 160GB VRAM.

Fair enough. Too many for a realistic build. Nvidia shoots itself in the foot with its VRAM gimping.

QuoteYou can simply compare Geekbench 6.2 ML Metal and CUDA, information is public and no one is hiding it.
It's right there in NBC MBP review...
Geekbench 6.2
MBP Metal ML - 150000
Desktop 4090RTX OpenCL ML - 350000
There's no conspiracy to hide anything from you.

Now, comparisons have indeed become slightly better. However, OpenCL is a bad comparison because TensorRT, which suits machine learning, is 2.95 times faster. So the desktop number should be roughly 1,032,500, which is 6.9 times faster than Metal. Unsurprising, as we are comparing very different TDPs.

QuoteWas it running in the most optimal way for platform?

Thank you for joining me in this quest! Testers should state such things, as I have done in my optimised tests.

Quote(spoiler - no, MLX is used instead of TensorFlow on Apple Silicon).

OK. Do you happen to know the typical speed factor between these two?

Quotejust because his info affirms

Rather because it is one of the very few machine learning comparisons available at all. I agree that testers should provide better information and run the most optimised tests.

Quote
Quote from: RobertJasiek on December 28, 2023, 19:46:06Tell us: why does Apple force users to click "Not Now" thousands of times instead of letting them click "Never" once? Why does Apple disable wireless OS updates just because one does not accept iCloud?
Never met this behavior myself and never heard of the issue.

You have not heard of it because you have overlooked or forgotten my earlier, repeated reports on it. Summary:

A few years ago, Apple started trying to drive iPadOS users to iCloud without prior acceptance of the iCloud terms. There are traps in which a user might accidentally accept them during device setup or an OS update, when some terms-confirmation dialogues also contain the iCloud terms confirmation. I have always been careful about what I accept, so I have always avoided these traps and have not accepted the iCloud terms. For years, it had been just that: being careful during setup and updates. However, Apple then introduced various popups and settings notifications asking for acceptance of the iCloud terms, first ca. 6 times per day, then typically 1 or 2 times per day, with the user choice between Not Now and Yes. I have continued to be careful and have always clicked Not Now. Anybody slightly less careful than me would have accepted the iCloud terms accidentally (if not voluntarily, to use iCloud) and would therefore not see the frequent popups.

Before the popups, iPadOS could be updated via WLAN or LAN. Since the popups, Apple has permanently punished users for rejecting the iCloud terms by disabling iPadOS updates via WLAN, which since then has required acceptance of the iCloud terms. In my phone call to Apple, a supervising manager confirmed that this is now the intended behaviour.

Needless to say, as I have explained before, the iCloud terms violate EU and German laws; each case of Apple or third parties applying the iCloud terms violates the German criminal code, with charges carrying years of prison; and my acceptance of the iCloud terms would violate EU and German data/privacy laws. (So even if I wanted to use iCloud, the laws would still prescribe my rejecting its terms.)

Quoteyou haven't logged in using Apple ID at all?

Quite the contrary: iCloud terms acceptance requests are also attached to my iPad's Apple ID settings etc. (Needless to say, they also appear on the main screen and the lock screen.)

A

Quote from: RobertJasiek on December 29, 2023, 09:05:51Do you happen to know the typical speed factor between these two?
Nah

Quote from: RobertJasiek on December 29, 2023, 09:05:51Apple started trying to drive also those iPadOS users to the iCloud without prior acceptance of the iCloud terms.
Just accept them? You can disable iCloud afterwards.

RobertJasiek

Quote from: A
Quote from: RobertJasiek on December 29, 2023, 09:05:51acceptance of the iCloud terms.
Just accept them? You can disable iCloud afterwards.

Accepting the iCloud terms is not an option for me because
- I do not collude with Apple to its intended criminal activities
- I do not violate the laws intentionally
- I do not agree to theft and abuse of my data and privacy
- I do not want to use any non-German cloud (not protected by German law) for even just a very short time
- I do not want to express agreement to such a cloud
- I do not want to express agreement to such terms

I do not give in just out of convenience. Ethics and laws have much greater priority. It is Apple's duty to comply with the laws, even if it shows bad ethics. It is not my duty to place Apple above the laws.


A

Quote from: Neenyah on December 29, 2023, 03:05:31Apple has set a hard-coded limit at 75% of physical memory amount, so 144 GB is the limit.
I know. It's not really 75%; the limit differs based on RAM size. I don't really know/remember the exact limit for 192 GB, so I've used what I think is a reasonable amount.

Quote from: Neenyah on December 29, 2023, 03:05:31most iGPUs can utilize up to 50% of installed RAM
We aren't talking about x86 iGPUs; they don't have the performance.

Quote from: RobertJasiek on December 28, 2023, 19:46:06Go AI fills 64GB RAM in at least ca. 2.5h on an RTX 4070 while ca. 0.8GB VRAM is the level at every moment.
So I just remembered the similar pattern of resource usage: in chess engines you will see a similar allocation, with a CPU search algorithm and a small neural network for move evaluation on the GPU.

A

P.S. There are also ways to adjust the RAM/VRAM split on Apple Silicon, so you can use whatever ratio you wish. OS responsiveness isn't guaranteed, of course, if you go too bold on that.

github.com/ggerganov/llama.cpp/discussions/2182

A

P.S.2: The page position wasn't saved in the link; it's down there below in the comments.

Don't look at the code, all you need is a single Terminal command
sudo sysctl iogpu.wired_limit_mb=<mb>
Easy
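As a rough helper for picking the value to pass to that sysctl key (the GB-to-MB conversion below is my own sketch; the key itself is the one shown above, and raising the limit too far can hurt OS responsiveness):

```python
# Hypothetical helper: compute the value for iogpu.wired_limit_mb given a
# desired fraction of unified RAM to hand to the GPU.
def wired_limit_mb(unified_ram_gb, fraction):
    # Convert GB of unified memory to MB, then apply the desired GPU share.
    return int(unified_ram_gb * 1024 * fraction)

# e.g. allow the GPU 90% of a 192 GB machine:
print(wired_limit_mb(192, 0.90))  # 176947
# then: sudo sysctl iogpu.wired_limit_mb=176947
```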

A

RobertJasiek, btw is this your KataGo and some other Go NNs running on iPhone/iPad?

apps.apple.com/us/app/a-master-of-go/id1442035374?platform=iphone

RobertJasiek

According to

www.lifein19x19.com/viewtopic.php?p=238921#p238921

it relies on an AlphaGo Zero type engine with ELF OpenGo weights. This is probably stronger than humans (with sufficient runtime) but, I think (though I am unsure), it is not KataGo and is much weaker than it.

Unfortunately, I do not know by heart which Apple device apps use KataGo. Maybe this one, if its description is true:

katago goGame ai

apps.apple.com/us/app/katago-gogame-ai/id1509047602

KaTrain is also available for Mac, but its GUI is very basic (it might be good enough for your trials, though) and I do not know whether it is as buggy as the Windows version:

github.com/sanderland/katrain/releases

KaTrain runs locally; I do not know whether the other apps run locally or just access (slow) web services.

AI KataGo Go

apps.apple.com/de/app/ai-katago-go/id1560986333

See also

senseis.xmp.net/?KataGo
senseis.xmp.net/?GoPlayingPrograms
senseis.xmp.net/?ComputerGo

For editing but not (much) for playing:

senseis.xmp.net/?GoEditingPrograms

If you want to run speed tests, learn about KataGo and its tuning by starting from my webpage and then following the various links:

home.snafu.de/jasiek/AI_Computer.html


RobertJasiek

KataGo is developed by David Wu aka lightvector.

GUIs are developed by various people.

I have heard that KaTrain / Mac does not show visits/s in the GUI. KaTrain / Mac might cost something. It was said to be the only local GUI for KataGo, but this may have changed without my knowledge.

I do not know whether speed tests for KataGo are possible on the Mac command line. They are possible on Linux or Windows.

A

Quote from: RobertJasiek on December 30, 2023, 13:44:56it relies on an AlphaGo Zero type engine with ELF OpenGo weight. This is probably stronger than humans (with sufficient runtime) but - I think but I am unsure - is not, and much weaker than, KataGo.
That forum post is from 2018. App description says
"It is a Go AI featuring a neural network of that strong, open source AI. Leela Zero, ELF OpenGo, PhoenixGo, Minigo, SAI, and optionally KataGo!"

So I presume your AI is something one can run inference with even on an iPhone.

Quote from: RobertJasiek on December 30, 2023, 13:44:56If you want to run speed tests
Nah, not interested. All those Zero game AIs use outdated theory and are not computationally expensive by modern standards. I see they've added Apple Silicon support this May and seem to be still working on it; good for them.

Quote from: RobertJasiek on December 30, 2023, 13:44:56learn about KataGo and its tuning by starting from my webpage
I don't need instructions, I can build it from source.


A

"With M2 processor of iPad Pro, the playouts are performed over 250 times per second with KataGo b18c384nbt"
There's your speed for iPad btw.

RobertJasiek

You are an optimist if you think that you do not need a manual. Compiling might be the easy part. To use KataGo with a GUI such as KaTrain, the GUI needs to be told a command (or parameters) with which to call KataGo.

A b18 net is a good choice. There are older and newer versions of the nets.

250 playouts (related to visits) per second on the M2 places it between an iPhone 13 Pro and an iPad Pro M1. That is not fast enough to reliably read ladders, so occasionally weak amateur moves occur, and it is 1/26 the speed of a desktop RTX 4070.
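A quick sanity check of the ratio quoted above (both figures are this thread's claims, not fresh measurements):

```python
# Implied desktop throughput from the "1/26 the speed" claim.
m2_playouts_per_s = 250     # iPad Pro M2, KataGo b18c384nbt, per the app page
rtx4070_speed_factor = 26   # ratio claimed in the post

rtx4070_playouts_per_s = m2_playouts_per_s * rtx4070_speed_factor
print(rtx4070_playouts_per_s)  # 6500
```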

Since you dismiss AlphaZero theory as outdated, you do not understand the nature of Go AI theory and complexity. The point is that the application of nets scales well over quite some computation time: much more inference runtime gives much better results.

It would make sense to run KataGo inference on many A100s. It is just that nobody wants to pay for such luxury.

Once more: not my AI. I use it but have not developed it.

A

Quote from: RobertJasiek on December 30, 2023, 15:57:55250 playouts (related to visits) per second of M2 is between iPhone 13 pro and iPad Pro M1, not fast enough to reliably read ladders, so occassionally weak amateur moves occur, and 1/26 the speed of RTX 4070 Desktop.
Whatever; I just wanted to make sure the AI you are constantly referring to can be run on phones today.

Quote from: RobertJasiek on December 30, 2023, 15:57:55Since you dismiss AlphaZero theory as outdated, you do not understand the nature of Go AI theory and complexity
There are a lot of outdated theories that are still computationally expensive. Go is expensive because it has far more move combinations than chess. But the NN itself isn't, because the NN isn't _calculating_ anything. It is _predicting_ the outcome by making a higher number of lower-complexity calculations, with better parallelization, than code that actually tries to _calculate_ it. So in the end the NN's complexity is LOWER than if you try to really math out or brute-force the result (almost infinite complexity).

Quote from: RobertJasiek on December 30, 2023, 15:57:55It would make sense to execute inferenced KataGo on many A100s
You could, but it makes zero sense. Whatever it's doing on the CPU will be the bottleneck, if it's like Leela Zero.
