Quote from: william blake on April 04, 2020, 13:18:09
Quote from: Valantar on April 04, 2020, 10:36:57
Ah, the classic cherry-picking of benchmarks that characterizes someone who desperately wants their factually wrong opinions to be right
very serious question, are you high?
what i said was: speed/productivity per frequency is equal between amd and intel for an average user.
exactly you, not me, desperately wants chery-picking to prove me wrong, and i have no idea why.
show me some average pc program package which measures ipc, i dont mind.
It's quite impressive that you don't seem to remember what you wrote six posts further up this thread. Let me remind you:
Quote from: william blake on April 03, 2020, 18:56:32
we should not care about spec since we have browsing, games and photoshop to measure our cpu speed. please and thank you.
This is in no way equal to saying "speed/productivity per frequency is equal between amd and intel for an average user." You are saying "we don't need a proper benchmark, we should use these three tests that I have chosen, that is enough." It is also worth noting that you switch between saying "IPC" and "CPU speed" - these two are not the same! CPU speed is (roughly) IPC multiplied by actual CPU frequency, at least for single-threaded loads.
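To make that distinction concrete, here is a minimal sketch of the single-threaded relationship (the IPC and clock figures are made-up placeholders, not measured data):

Code:
# Rough single-threaded model: speed ~ IPC * clock frequency.
# The numbers below are illustrative placeholders, not benchmark results.
def effective_speed(ipc, freq_ghz):
    """Billions of instructions retired per second, roughly IPC * clock."""
    return ipc * freq_ghz

chip_a = effective_speed(ipc=1.07, freq_ghz=4.3)  # higher IPC, lower clock
chip_b = effective_speed(ipc=1.00, freq_ghz=5.0)  # lower IPC, higher clock

print(f"A: {chip_a:.2f} GIPS, B: {chip_b:.2f} GIPS")
# B comes out ahead here despite its lower IPC: clock speed matters too.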
Before the above quote, you said
Quote from: william blake on April 03, 2020, 13:12:56
pretty safe and fair to say ipc is equal. some amd advantage if latencies does not matter, some intel advantage if they matters.
which I then responded to by providing you with test data demonstrating that this is untrue. IPC between Zen 2 and Coffee Lake (and other Skylake-based chips) is not equal; on average it is about 7% in favor of AMD according to an industry-standard benchmark. Yes, of course there are scenarios where Intel has better IPC (such as 462.libquantum in AT's testing), as different workloads stress different parts of the core, which means different architectures will perform differently. A different benchmark selection would obviously also present different results. This is why a properly developed benchmark application runs a wide range of workloads: to ensure that as many scenarios and types of load as possible are represented, that these loads are based on real-world examples, and that the choice of benchmarks is made not by any one person or small group but by a wide selection of people from across the computing industry.
Now, you could of course argue that the workloads in which Intel is better are more important (though I haven't seen you actually present any arguments for this), but that doesn't change the fact that overall Intel needs much higher frequencies to beat AMD's chip - which, again, tells us that AMD's IPC is better.
As for cherry-picking: you do understand what that term means, right? I am referring to a thoroughly documented industry-standard benchmark which includes 18 different tests. You, on the other hand, said
Quote from: william blake on April 03, 2020, 18:56:32
we should not care about spec since we have browsing, games and photoshop to measure our cpu speed. please and thank you.
In other words: My preferred test has 18 workloads, yours has three. Mine is an established and well-documented industry standard developed by an industry-wide organization; yours is seemingly pulled out of thin air, consisting of workloads where Intel historically does well. Now who is cherry-picking?
"Browsing, games and Photoshop" does not tell us anything about IPC unless you both broaden the test selection and heavily refine the testing criteria (for example, "Photoshop" is not a single workload, but hundreds of different ones with variable threading, cache/RAM utilization, etc. Which one(s) do you pick? Why?). There are both too few tests and too many variables in your choice for that to be possible (especially in games, which are generally more reliant on GPU performance). Browsing is also highly bursty, which means that to use such a workload to determine IPC you would need to disable all boost modes of the CPU, which would again negate Intel's advantage of higher boost clocks, which is largely why Intel tends to win browsing benchmarks. Remember, IPC means Instructions
Per Clock, i.e. it compares how many operations a CPU core is capable of processing for each clock cycle. Clocking higher with lower IPC can still be faster overall - which Intel demonstrates in some workloads. In fact,
the 9900K in Anandtech's SPEC testing wins 14 out of 18 tests - but it does so by running at much higher clocks than the 3900X it's compared to. The scores divided by their respective clock speeds then tell us that the AMD chip is, in general, processing more work per clock than the Intel chip. That doesn't mean that the AMD chip is faster in this case - in fact it's only faster in four out of 18 tests! - but it does have higher IPC. That is beyond question.
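To illustrate that per-clock comparison, here is a minimal sketch of the arithmetic (the scores and clock speeds are placeholders, not AnandTech's actual numbers):

Code:
# Per-clock comparison: divide each score by the clock the chip actually
# sustained during that test. Placeholder figures, for illustration only.
results = {
    "9900K": {"score": 50.0, "clock_ghz": 5.0},
    "3900X": {"score": 46.0, "clock_ghz": 4.2},
}

for chip, r in results.items():
    per_clock = r["score"] / r["clock_ghz"]
    print(f"{chip}: {r['score']} pts at {r['clock_ghz']} GHz -> {per_clock:.2f} pts/GHz")

# 9900K: 50.0 pts at 5.0 GHz -> 10.00 pts/GHz
# 3900X: 46.0 pts at 4.2 GHz -> 10.95 pts/GHz
# The Intel chip posts the higher absolute score, but the AMD chip does more
# work per GHz - which is exactly what "higher IPC" means.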
How this extrapolates into real-world performance is of course dependent both on the actual clocks of the chips people are using and on how many threads the applications in question actually use - both of which have extreme variability. But in general, we know that Zen 2 consumes less power than Coffee Lake (and now Comet Lake) while performing at the same level, partly due to its slightly higher IPC and partly due to its process advantage. AMD also has better SMT than Intel, and generally gives you more cores/threads for the same money. Intel, on the other hand, can boost quite a lot higher (a 9900K with good cooling can run at a steady 5GHz, while a 3900X or 3700X will generally not exceed 4.3GHz in heavily threaded loads). Whether these properties give the ultimate performance advantage to Intel or AMD depends on the application in question. But none of this brings into question whether AMD has better IPC - it does. Period.
As for "some average PC program package which measures IPC" - sorry, there's no such thing. It doesn't exist. SPEC is the closest you'll get, as measuring IPC (and not just absolute performance) requires a controlled test environment which most users won't be able to provide. Of course you can roughly measure IPC on your own by having two computers you want to compare, manually running them at the same clocks (how fast or slow is irrelevant, as you're not measuring absolute performance and IPC-measuring workloads should scale linearly with frequency, so something reasonable like 3GHz all-core is perfectly fine), and then running your desired benchmarks on each. Of course you also need to stick to single-core workloads unless you want multi-core scaling, the Windows Scheduler, SMT, and other uncontrollable variables to foul up your results. Ideally you would also need to compile your own applications to negate any compiler-based advantages (such as benchmarking an AMD chip with an application complied with an Intel-optimized compiler, or the other way around). Another possible method is letting frequencies vary, but logging them continuously across the run and then normalizing the score for the average frequency over each benchmark run - though this would require advanced logging software and some time-consuming excel work to produce anything that can be called comparable IPC numbers.