Apple MacBook Pro 16 2023 M3 Max im Test - Mit dem M3 Max gegen die HX-CPUs von AMD & Intel

Started by Redaktion, November 08, 2023, 21:37:54


RobertJasiek

Daniel Bourke / Youtube

m.youtube.com/watch?v=cpYqED1q6ro

compared machine learning for PyTorch and TensorFlow (among others) on a 14" Apple M3 Max with 30 GPU cores against an Nvidia RTX Titan; according to the bar charts, Nvidia is roughly 2x to 15x faster. According to

www.notebookcheck.com/KFA2-GeForce-RTX-4090-SG-im-Test-Kompromissloses-4K-Gaming-mit-dem-neuen-Nvidia-Flaggschiff.670864.0.html

the desktop RTX 4090 is, based on Time Spy Graphics, another 2.4x faster than the RTX Titan. Depending on the application test, Nvidia therefore comes out roughly 4.8x to 36x faster than the M3 Max.
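Combining the two quoted ratios is straightforward multiplication; a minimal sketch using only the figures from the benchmarks above (noting that Time Spy is a graphics benchmark, so carrying its factor over to ML workloads is an approximation):

```python
# Combine the two quoted speedup ranges by simple multiplication.
titan_vs_m3max = (2.0, 15.0)   # RTX Titan vs. M3 Max, per the video's bar charts
rtx4090_vs_titan = 2.4         # RTX 4090 vs. RTX Titan, per Time Spy Graphics

low, high = (f * rtx4090_vs_titan for f in titan_vs_m3max)
print(low, high)  # 4.8 36.0
```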

NikoB

Quote from: RobertJasiek on December 25, 2023, 17:52:45Daniel Bourke / Youtube

m.youtube.com/watch?v=cpYqED1q6ro

compared machine learning for PyTorch and TensorFlow (among others) on a 14" Apple M3 Max with 30 GPU cores against an Nvidia RTX Titan; according to the bar charts, Nvidia is roughly 2x to 15x faster. According to

www.notebookcheck.com/KFA2-GeForce-RTX-4090-SG-im-Test-Kompromissloses-4K-Gaming-mit-dem-neuen-Nvidia-Flaggschiff.670864.0.html

the desktop RTX 4090 is, based on Time Spy Graphics, another 2.4x faster than the RTX Titan. Depending on the application test, Nvidia therefore comes out roughly 4.8x to 36x faster than the M3 Max.

One problem, Robert: the video card does not have terabytes of RAM, so you can only run test networks (toy networks) on it, not serious expert-level systems. I.e., all these tests are meaningless when it comes to something really working and useful...

And combining cards into a network is also useless, because the bandwidth of the communication channels becomes a natural limitation as the complexity of the interaction grows.

What is the point of having many cores if they are starved by slow memory, i.e. by the connections between them? The slow-memory limitation is one particular consequence of Amdahl's law.

How does our brain get out of this situation? This bio-machine has trillions of connections packed into 1.5 kg of gray matter, but extremely slow ones. And it must be said that our brain is, in reality, a very imperfect biological machine (and it degrades quickly with age, precisely at the moment when, it would seem, valuable experience and knowledge have been accumulated, which is especially sad for humanity). The brain remembers only "sketches" of information, not everything it receives through the senses, and whatever flashes through a person's consciousness is quickly lost, like a breeze. Only compact abstract associations remain, and even these are fragmentarily lost over time. Just try to remember, down to the smallest detail, the faces of your closest relatives, the family members you see every day and at length. I bet no one will succeed. And this is the simplest case of what would be required of a superbrain.

To advance into the future, the human brain is no longer enough for humanity (even if these brains are united in large collectives; that, too, is a human dead end). It needs a perfect associative machine that forgets nothing, remembers everything down to the smallest detail, continuously accumulates knowledge, and grows ever more complex in the associative connections between its pieces. Such a machine cannot even be imagined yet: the volumes of information and the level of complexity are tied to the overall energy consumption of such a machine. But let them try... maybe they will create one such superintelligence the size of a planet, although the speed of light, in itself, becomes another fundamental limitation for a superintelligence... Do you see what I am getting at? To create a true superbrain, one would first have to overcome the fundamental limitation of the speed of light.

This is the same fundamental problem as travelling to other galaxies within a current human lifetime. If the lifetime is made infinite, the problem is "solved". But that is not what we want, is it, given the original goal of moving to another galaxy quickly? Although, again, the term "quickly" loses its meaning if you are immortal. But according to current ideas, the Universe is not immortal, you know? I.e., to solve the problem of the immortality of man as a mind, one would first have to solve the problem of the immortality of the Universe in which he lives and whose laws he obeys, i.e. go beyond the laws of the Universe.


RobertJasiek

NikoB, I have explained to you before that specialised AI (example: Go) does not need tremendous storage to reach superhuman level. Go training works distributed over the internet even on low bandwidth.

NikoB

Quote from: RobertJasiek on December 27, 2023, 22:02:08NikoB, I have explained to you before that specialised AI (example: Go) does not need tremendous storage to reach superhuman level. Go training works distributed over the internet even on low bandwidth.
What practical problems can this neural network solve? What benefits can it provide for scientific and technological progress?

RobertJasiek

- Finding the likely best next move in a Go position.
- Positional judgement of a Go position.
- Enabling (e.g. my current) Go-related research.
- Potentially contributing to Go-related technology (such as automatic cheating detection if related research is greatly improved some time).

NikoB

I can only repeat the question again - what practical benefits does this neural network have for humanity? Besides having fun with Go?

RobertJasiek

Simply speaking, none. As I have said, it is a specialised AI, which is mostly good for "fun". (Some other specialised AIs for, say, finding new substances are more relevant for humanity and need much better hardware.)

A

Quote from: RobertJasiek on December 25, 2023, 17:52:45Daniel Bourke / Youtube
Data is cherry picked by "RTX can run it" criteria? There's a lot of modern stuff RTX simply can't run at all, compared to the MBP, just due to the VRAM limitation - or you will need several RTX cards (pricey). So basically it doesn't matter how much faster RTX is if "model size" is probably more important than "model speed" today. You are biased and obsessed with downplaying Apple for some reason, idk; I see you all the time in MacBook discussions despite you've never tried or owned one. That's some quality analytics.

Quote from: NikoB on December 27, 2023, 21:27:11serious expert-level systems
You are using nameless "serious expert-level systems" now only because you got wrecked on this claim:
Quote from: NikoB on November 24, 2023, 18:47:56No modern PC ... is capable of performing GPT3
))) So a month later after you got wrecked on that claim you are more cautious and just using secret "expert systems" you've made up. Name five of those "systems"? )

Overall, nice to see conversation between two AI experts of your caliber guys, one not knowing GPT3.5-sized LLMs can run locally, the other one not knowing what "16 bit" is in AI models. Keep on.

NikoB

Quote from: RobertJasiek on December 28, 2023, 11:27:05and need much better hardware
Actually, this is where I started a long time ago - modern neural networks with a level of real benefit are incompatible with mass consumer equipment. Server only. Therefore, all these networks, launched by enthusiasts on ordinary PCs/laptops, are, a priori, useless and toy. Their only useful purpose is to train students to work with real complex neural networks, nothing more. Well, roughly the way Pascal was (and is) used to teach algorithmic (and partially object-oriented) programming at the entry level. The difference is that you can actually write something complex in Pascal on ordinary hardware. With neural networks launched on ordinary PCs/laptops, nothing complicated can be created using the current mathematical apparatus. Someday in the future, when the capabilities of mass-produced hardware increase by another 1000-10000 times, this will become possible. This is how the ground is being prepared for the next qualitative leap in technology and society. Gradual accumulation of a technological base sufficient for a massive phase shift.

A

Quote from: NikoB on December 28, 2023, 14:46:04modern neural networks with a level of real benefit are incompatible with mass consumer equipment. Server only. Therefore, all these networks, launched by enthusiasts on ordinary PCs/laptops, are, a priori, useless and toy.
False

Quote from: NikoB on December 28, 2023, 14:46:04Their only useful purpose is to train students to work with real complex neural networks, nothing more.
False

Quote from: NikoB on December 28, 2023, 14:46:04Someday in the future, when the capabilities of mass-produced hardware increase by another 1000-10000 times, this will become possible
False

Another quality AI analytics from NikoB.

RobertJasiek

Quote from: A on December 28, 2023, 14:40:08Data is cherry picked by "RTX can run it" criteria?

No.

Rather, most applications were shown because they run on both MBP and x64/Nvidia.

QuoteThere's a lot of modern stuff RTX simply can't run at all, compared to the MBP, just due to the VRAM limitation - or you will need several RTX cards (pricey).

Yes.

And some applications need a fully configured MBP - so pricey that for the same price one can buy 4 or 8 RTX 4070s or better. Then we have more VRAM and more can run.

What you do not say: much modern software is CUDA-/Tensor-core-based, which MBPs (or AMD GPUs) cannot run.

QuoteSo basically it doesn't matter how much faster RTX is if "model size" is probably more important than "model speed" today.

Same story for MBP not having CUDA- / Tensor-cores.

QuoteYou are biased

Bias is the wrong word.

I point out when information is provided that most hide in order to play up Apple but that reveals relevant differences to non-Apple hardware etc.

Quoteand obsessed with downplaying Apple for some reason,

Downplaying is the wrong word. First of all, I want to inform when reality checks reveal how - often extremely - exaggerated claims about Apple hardware have been. Like emphasising video software with roughly 1:1 speed compared to x64 while typical testers fail to mention other applications running at 1/20 or 1/40 the speed.

The internet is full of Apple hardware praises because many are paid or fanboys/-girls.

It may be hard for you and Apple that there are people like me who reveal often-missing information on underwhelming Apple hardware performance. Reality is hard.

Obsessed with Apple's performance issues - yes. Apple gives me enough motivation each day by its harm on, for, and related to my Apple device. To decrease my motivation, Apple needs to become reasonable. First hints to Apple: stop unwanted iCloud ads, stop punishing me for rejecting them, make battery replacement prices fair, stop disinforming the world by hiding or lying about speeds of non-"Apple-mainstream" softwares.

QuoteI see you all the time in MacBook discussions despite you've never tried or owned one.

See - even non-owners can inform!

QuoteThat's some quality analytics.

Thanks;)

Quotethe other one not knowing what "16 bit" is in AI models.

That's because a) bit numbers are immaterial for the AIs I use, and b) you do not inform us, so that you can continue to boast about our missing knowledge.


RobertJasiek

Quote from: NikoB on December 28, 2023, 14:46:04Someday in the future, when the capabilities of mass-produced hardware increase by another 1000-10000 times, this will become possible. This is how the ground is being prepared for the next qualitative leap in technology and society.

But - evil can also prevail. E.g., then everybody might create his own chemical or biological weapon of mass destruction. AI finds some new stuff and informs how to use it. Mankind also needs to improve its ethics to cope with super-fast computers for everybody.

A

Quote from: RobertJasiek on December 28, 2023, 17:48:18And some applications need a fully configured MBP - so pricey
More pricey than 2-3-4-8 RTX cards? How many will you require to get 160Gb VRAM?

Quote from: RobertJasiek on December 28, 2023, 17:48:18I point out when information is provided that most hide in order to play up Apple but that reveals relevant differences to non-Apple hardware etc.
You are the one hiding facts actually, only when I've pushed you agreed MBP can run more models than RTX. Your initial post doesn't mention it.

Quote from: RobertJasiek on December 28, 2023, 17:48:18Obsessed with Apple's performance issues - yes. Apple gives me enough motivation each day by its harm on, for, and related to my Apple device. To decrease my motivation, Apple needs to become reasonable. First hints to Apple: stop unwanted iCloud ads, stop punishing me for rejecting them, make battery replacement prices fair, stop disinforming the world by hiding or lying about speeds of non-"Apple-mainstream" softwares.
Try to enjoy life.
Quote from: RobertJasiek on December 28, 2023, 17:48:18It may be hard for you and Apple that there are people like me who reveal often missing information on underwhelming Apple hardware performance. Reality is hard.
All this is sad to read. Try to enjoy life.
And no, it's not hard for me. I was thinking you are interested in reality, but you are on a personal vendetta. You are not "revealing" anything, you just posted a public video, man. And hid the important bit you knew. Well, noted, but now it's just sad and uninteresting.

Quote from: RobertJasiek on December 28, 2023, 17:48:18you do not inform us
Or just don't get into discussions if you don't know the basics.

RobertJasiek

Quote from: A on December 28, 2023, 18:38:21More pricey than 2-3-4-8 RTX cards? How many will you require to get 160Gb VRAM?

160Gb VRAM = 20GB VRAM = 1 RTX 4090. Ok, probably your typo.

160GB VRAM. Hm, you cannot even get a MBP with that much storage, so what are you asking?

Suppose a MBP with 96GB storage, of which - I guess optimistically in your favour - up to 90GB can be used for GPU access. Such a MBP costs, e.g., €8500+. This means we are in a PC with 4 * RTX 4090 price territory and VRAM size.

QuoteYou are the one hiding facts actually, only when I've pushed you agreed MBP can run more models than RTX.

1) For the specific AI kinds of LLMs and generative images, you have - if you have informed us correctly - educated us that there are model sizes that can run on a MBP with, say, 96GB but cannot run on a single consumer RTX with up to 24GB.

2) If you allow us to use a PC with a few consumer GPUs, then both MBP and RTX have enough storage of a usable kind (VRAM of the RTXs).

3) For other AI kinds in particular, it depends on the kind which hardware it can run on. CUDA-dependent AI cannot run on MBPs. Metal-dependent AI cannot run on RTX. (Same for AMD.)
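On the model-size point in (1), the back-of-the-envelope check is just parameter count times bytes per parameter versus available memory; a sketch with purely illustrative assumed numbers (a hypothetical 40-billion-parameter model at 16-bit weights):

```python
# Does a model's weight footprint fit in a given memory pool?
# Footprint ~= parameter count x bytes per parameter (weights only;
# activations and caches add more on top of this).
def fits(n_params: float, bytes_per_param: float, memory_gb: float) -> bool:
    return n_params * bytes_per_param / 1e9 <= memory_gb

params = 40e9        # hypothetical 40B-parameter model
fp16 = 2.0           # 16-bit weights -> 2 bytes per parameter; 80GB total

print(fits(params, fp16, 24))   # single 24GB RTX 4090: False
print(fits(params, fp16, 90))   # ~90GB GPU-usable on a 96GB MBP: True
print(fits(params, fp16, 96))   # 4 x 24GB RTX 4090s: True, if sharded across cards
```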

QuoteYour initial post doesn't mention it.

Thanks for reminding us that meanwhile I could learn something (presuming you have informed us correctly).

QuoteTry to enjoy life.

Uh, yes. For that purpose, while I awaited GPUs during the Corona / mining years, I considered every possibility including using Apple M to run Go AI so as to enjoy my life. At that time, my level of information was the usual benchmarks on NBC, in Youtube videos etc. with the rough suggestion of equality of speed or at least of efficiency. With such insufficient information, I might have made the mistake of buying an Apple device for Go AI and would have found that it would not have enabled me to enjoy my life because Apple M is orders of magnitude too slow for Go AI or many other AIs better run with CUDA / TensorRT. Now that I know better, I inform people to make similarly wise hardware and library choices for such softwares so that they can also enjoy their lives well.

QuoteI was thinking you are interested in reality,

I am.

Quotepersonal vendetta.

No more than Apple's performance against its critics. Tell us: why does Apple force users to click "Not Now" thousands of times instead of letting them click "Never" once? Why does Apple disable wireless OS updates just because one does not accept iCloud? Such is malicious intention to harm those of its critics that do not dispose of their Apple devices.

Quoteyou just posted a public video

And when I posted the same kind of information based on my own investigation, you said something similar. You always downplay findings revealing the truth about slow Apple hardware for certain classes of softwares.

QuoteOr just don't get into discussions if you don't know the basics.

The software basics about bits are:
- a PCIe bus has a bit width for accessing VRAM
- a dGPU has a bit width
- an iGPU has a bit width
- a mainboard has a bit width for accessing RAM
- unified memory has a bit width of its mainboard
- the GPU of an Apple M SoC has a bit width
- the NPU of an Apple M SoC has a bit width
- the NPU of a new Intel SoC has a bit width
- some kinds of objects can be composed in units of bits and have a particular bit size, which, for some tasks, can be relevant for execution speed

You have left it ambiguous which of these bit values you mean. Therefore, it is not my mistake not to know which you mean.

If you should mean a value of objects of AI models of the image / LLM kinds, I do not know details. In particular, I do not know which bit size requires how much storage to run a related model. I do know such things reasonably well for Go AIs because I use them and observe RAM and VRAM usage - but I do not expect you to know such things on your own. Everybody has limits on time for education. Then, mutual explanations can help. Go AI fills 64GB RAM in at least ca. 2.5h on an RTX 4070, while VRAM usage stays at ca. 0.8GB at any moment.

A

Quote from: RobertJasiek on December 28, 2023, 19:46:06160GB VRAM. Hm, you cannot even get a MBP with that much storage, so what are you asking?
Quote from: RobertJasiek on December 28, 2023, 19:46:06This means we are in a PC with 4 * RTX 4090 price territory and VRAM size.
If we start comparing laptops, the picture for x86 will be very grim actually - no VRAM there at all.
So let's compare desktops to desktops, laptops to laptops? If you are talking desktops, a Mac Studio M2 Ultra with 192GB RAM is $5600. Sooo let's go again: how many 4090s will you need for 160GB of VRAM?

Quote from: RobertJasiek on December 28, 2023, 19:46:06With such insufficient information, I might have made the mistake of buying an Apple device for Go AI and would have found that it would not have enabled me to enjoy my life because Apple M is orders of magnitude too slow for Go AI or many other AIs better run with CUDA / TensorRT
You can simply compare Geekbench 6.2 ML Metal and CUDA, information is public and no one is hiding it.
It's right there in NBC MBP review...
Geekbench 6.2
MBP Metal ML - 150000
Desktop 4090RTX OpenCL ML - 350000
There's no conspiracy to hide anything from you.

Quote from: RobertJasiek on December 28, 2023, 19:46:06You always downplay findings revealing the truth about slow Apple hardware for certain classes of softwares.
Yeah, about downplaying. You can't just drop a random video and claim "2 to 15 times faster". I can start asking questions you will have no answers for. Do you understand what exactly and how exactly he was running? Was it running in the optimal way for each platform? (Spoiler: no - MLX should be used instead of TensorFlow on Apple Silicon.) Which platform-specific PyTorch optimizations was he using?
You will have no answer, you simply trusted random guy with camera to make decisions for you, just because his info affirms your personal vendetta. That's sad.

Quote from: RobertJasiek on December 28, 2023, 19:46:06Tell us: why does Apple force users to click "Not Now" thousands of times instead of letting them click "Never" once? Why does Apple disable wireless OS updates just because one does not accept iCloud? Such is malicious intention to harm those of its critics that do not dispose of their Apple devices.
Never met this behavior myself and never heard of the issue. Is it about iCloud or you haven't logged in using Apple ID at all?

Quote from: RobertJasiek on December 28, 2023, 19:46:06You have left it ambiguous which of these bit values you mean.
There's only one meaning when you are talking about a model.
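For readers following along: the bit number here is the numeric precision of the model's weights, and the memory footprint scales directly with it. A rough sketch (the 175B parameter count is just GPT-3 scale, for illustration):

```python
# "16-bit" etc. in an AI model names the precision of its weights;
# bytes per parameter follow directly from the bit width.
BYTES_PER_PARAM = {32: 4.0, 16: 2.0, 8: 1.0, 4: 0.5}

def weight_memory_gb(n_params: float, bits: int) -> float:
    """Approximate memory for the weights alone, in GB."""
    return n_params * BYTES_PER_PARAM[bits] / 1e9

for bits in (32, 16, 8, 4):
    print(bits, weight_memory_gb(175e9, bits))  # 700.0, 350.0, 175.0, 87.5
```

This is why quantizing from 16-bit down to 4-bit roughly quarters the required memory, which is what makes large models runnable on consumer hardware at all.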

Quote from: RobertJasiek on December 28, 2023, 19:46:06Go AI fills 64GB RAM in at least ca. 2.5h
GPU doesn't "see" RAM, so whatever it's doing there - it's on CPU.
