Quote from: william blake on April 18, 2020, 16:29:18
Quote from: Valantar on April 18, 2020, 11:52:26
That's true, there are very few competitors trying to offer a service of entirely uncomparable benchmarks.
there are no other places on the web that offer such a service and interface. not even close. this is the best place for comprehensive testing and comparison of components. the only one.
And? The service is useless because the data is of too low quality to have any real value, and the interface mostly serves to hide this, with its focus on averaged scores and undefined, meaningless metrics like "effective speed" presented as some sort of performance summary with zero explanation. Do you actually think the average user can glean more meaningful information from that site than by reading the ranking and average bench numbers? 'Cause they won't.
As for the issue with averaged scores, it's the same problem all crowd-sourced data has: you get access to lots of potentially useful data, but there is zero quality control in the creation or gathering of that data, which leads to GIGO (garbage in, garbage out). Sure, you can look up aberrant scores, and they tend to carry annotations about high background loads or other issues, but they still count towards the average, which dramatically degrades the quality of the test data and of any conclusions, summaries, or averages built on it, making them fundamentally unreliable.
If you want any kind of comparable benchmark, it needs to be run in a controlled manner in a controlled setting; otherwise the gathered data will vary due to outside variables, which invalidates any comparison.
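To make the GIGO point concrete, here's a toy sketch (not UB's actual pipeline, and the scores are made up) of how a few throttled or background-loaded submissions drag a naive average down, while a robust statistic like the median barely moves:

```python
import statistics

# Hypothetical single-core scores for one CPU model (arbitrary units).
# Most runs are clean; a few come from throttled or background-loaded systems.
clean_runs = [152, 150, 151, 149, 153, 150, 152, 151]
degraded_runs = [98, 87, 104]  # e.g. thermal throttling, heavy background load

all_runs = clean_runs + degraded_runs

print(f"clean mean: {statistics.mean(clean_runs):.1f}")   # the "true" figure
print(f"naive mean: {statistics.mean(all_runs):.1f}")     # dragged down by bad runs
print(f"median:     {statistics.median(all_runs):.1f}")   # barely affected
```

With just three bad runs out of eleven, the naive mean falls roughly 10% below the clean figure, which is exactly the kind of silent distortion an uncontrolled crowd-sourced average bakes into its rankings.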
Beyond that, there are fundamental issues such as not publishing the testing methodology for their own performance reviews (at least for game benchmarks, they generally just publish "CPU name" + "GPU name" with a memory speed tacked on and no further explanation), which again leaves massive potential for variance from other factors. There might of course be none, but by not publishing its testing methodology the data becomes automatically suspect. Publishing your methodology in enough detail that it can be scrutinized is another fundamental requirement of proper benchmarking, and one UB fails to meet.
This is also why no other sites provide the same type of data: doing so reliably is impossible due to the sheer scale of such an undertaking. That is why reviews of pre-built systems or individual components are a far better measure of performance. Sure, you don't get easily parsed "average bench" numbers, but you get numbers that are actually comparable in a reliable manner.
Quote from: william blake on April 18, 2020, 16:29:18
Quote from: Valantar on April 18, 2020, 11:52:26
Remember, UB aggregates scores while entirely disregarding the configuration of the PCs in question (cooling, chassis, RAM, etc.),
what the heck is that? name one who are not "disregarding". i'll wait..
p.s
Sure. UL/3DMark. Anyone doing testing of individual components. Anyone doing system testing with SPEC, SysMark or other broad test suites while publishing detailed system specs and methodologies. The TechPowerUp GPU Database. AnandTech Bench. Do you need more?
To clarify:
- 3DMark shows a ton of data, but doesn't aggregate or average scores for individual components, thus bypassing the issue of needing to factor in the rest of the system in calculating this. The only aggregate score it shows is how your specific system (or the test result you are viewing) stacks up against all other systems benchmarked and a selection of fictitious "systems" representing generalized performance levels/categories without representing any one component or system configuration.
- Individual component testing or system testing with a complex suite gives detailed and multifaceted representations of performance either of that specific component (which is thus comparable to all other reviews on the same site using the same test system) or of the system in question (which is then comparable to all other systems tested by the same tester in the same manner). Most importantly, none of this testing claims to present universally comparable numbers across different systems with uncontrolled variables.
- The TPU GPU database has a ranking of GPU performance, but rather than being based on an unfiltered and uncontrolled heap of data from random users this consists of reliably produced test data in a single test system (as far back as is applicable, including re-testing of old GPUs every time the test system is updated) done by professional testers in a test suite of 20+ games. Numbers in the graphs are also always based on reference or reference-spec GPUs at the same settings, and external variables like CPU performance etc. are controlled for.
- AnandTech Bench is comparable to the TPU GPU database, just covering more product categories and presented in a more granular fashion rather than as pages for individual GPUs (and of course it doesn't double as a specification database). Again, comparable products are tested on the same system using the same test methodologies, ensuring comparable results. Those methodologies are also published in (almost too much) detail, with high-level yet understandable explanations.
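The 3DMark-style presentation described above can be sketched as a percentile standing: your result is ranked against the population of submitted results rather than being averaged into a per-component "effective speed" number. A minimal illustration (the scores are invented, not real 3DMark data):

```python
from bisect import bisect_left

# Toy population of submitted overall scores, sorted ascending.
population = sorted([4200, 5100, 5600, 6100, 6400, 6900, 7300, 7800, 8500, 9200])

def percentile_rank(score, scores):
    """Percentage of submitted results this score beats (scores must be sorted)."""
    return 100.0 * bisect_left(scores, score) / len(scores)

print(f"Your system beats {percentile_rank(7000, population):.0f}% of results")
```

The point of the design is that nothing here claims to be a universal per-component number; it only tells you where one specific, fully configured system lands among other fully configured systems.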
While none of these present the broad, sweeping overviews UB does, that is precisely why they are infinitely more reliable: UB's crowdsourcing of data is exactly what makes its numbers unreliable.
Quote from: william blake on April 18, 2020, 16:29:18
just a reminder for you
single core/all core userbenchmark cpu numbers are very similar to cinebench r15. and its a tiny fraction of userbenchmark can do.
Yes, and? Have I presented Cinebench as some sort of alternative? That's one single CPU performance metric. It's somewhat useful for quick off-the-cuff CPU comparisons (at least within heavily threaded and/or rendering-like workloads) but other than that it's not the best benchmark. Nor does it have any sort of performance comparison function beyond the few "canned" results included in the install.
Quote from: william blake on April 18, 2020, 16:29:18..
you have no reasons against userbenchmark test numbers, only emotions.
Wow, good one! Please see above. I'd also love to see an example of where my previous arguments have been based on emotions. Please?
Quote from: william blake on April 18, 2020, 16:29:18
yes, userbenchmark owner is an intel fan and their "bench" score is made up bs, but thats it.
Oh, that's it? The owner of a benchmarking service (a type of service with exactly one purpose: providing reliable comparisons) is biased against one of the vendors tested by their product (and towards the other), and they present an overall performance metric (which is what >99% of people refer to on their site) that even you admit is BS. Yeah, no, you're right, that's nothing at all, of course. How silly of me.
Quote from: william blake on April 18, 2020, 16:29:18
Of course how much is that, 2% of total info provided? i can live with it, i have no choice. till the birth of similar place to compare pc and components.
You really ought to find some hardware review sites you like and start reading reviews by professional reviewers. An initial recommendation: stay away from YouTube (GamersNexus being an exception), and look to sites like AnandTech, TPU, TechSpot, Hexus, etc. Databases provide too little detail to represent performance well enough for proper comparisons.
Quote from: william blake on April 18, 2020, 16:29:18
or you want me watch a fanatic battle between you and userbenchmark? ok then.
I have absolutely no idea what you're rambling about at this point. Feel free to clarify.