Considering that the Pixel 4 also has a higher score in this benchmark compared to the Pixel 6... I wouldn't take this benchmark too seriously as a true representation of Tensor's machine learning capabilities.
The butthurt is strong in these comments. Just because Apple invested in custom ML hardware earlier than everyone else, is it really surprising they come out on top? Apple truly does have a world-class CPU design team, and that simply isn't refutable. They've mercilessly crushed Intel, literally Chipzilla, in both absolute performance and performance per watt - with phone SoCs! This isn't surprising in the least.
Interesting. Google makes something specifically to work with its very own phone, and it works very, very well, and we're already putting it up against Apple and their more expensive phone, which doesn't matter in the slightest, except to get clicks for your advertisements. Nobody that uses a Pixel gives a crap about Apple or the iPhone, or their ecosystem. Android 12 is NOTHING like iOS, and after using both, iOS sucks. Honestly. It is terrible in my opinion. Cannot stand it. Oh goody, it benchmarks fast. Whooptie doo.
Google uses a custom NPU architecture that is not open sourced to the world, while Apple uses a more conventional NPU. That makes it easy to get numbers for the A15's NPU but hard for Tensor. As a result, Geekbench falls back on a more generic path on Android: the Android Neural Networks API (NNAPI; rough sketch below). So even though Tensor's TPU is NPU-like, it doesn't share the same architecture, which is why it produces a score through the NNAPI path that does not represent the TPU's actual capability. In reality, it is hard to measure the performance of an ML chip at all, though people have settled on quoting FLOPS or tera operations per second (TOPS). At the very least, I'm fairly certain the Geekbench ML app doesn't measure either of those.
This may be hard to understand for someone who knows very little about ML.
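To make it concrete, here is a minimal sketch of how an Android app would typically opt into NNAPI acceleration through the TensorFlow Lite delegate (class names come from the standard tensorflow-lite library; the model buffer is just a placeholder for whatever TFLite model is being benchmarked):

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.nio.MappedByteBuffer

// Build a TFLite interpreter that runs through NNAPI. NNAPI decides which
// accelerator actually executes the graph (Tensor's TPU, a DSP, a GPU, or
// the CPU as a fallback), so the benchmark never talks to the TPU directly.
fun buildNnapiInterpreter(modelBuffer: MappedByteBuffer): Interpreter {
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)
    return Interpreter(modelBuffer, options)
}
```

The point being: whatever number comes out of that path reflects the NNAPI driver and how well the model maps onto it, not the raw capability of the accelerator underneath.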
2 or 3 years ago, some co-workers of mine were talking about how smart Alexa is. I was like, "Whaaaa? My wife and I have a love/hate relationship with Alexa, because of how frustrating it can be to get her to do things." It turned out they were comparing Alexa to Siri on their iPhones. I said, "I must be spoiled by the Google Assistant on my Pixel 1." I later googled it and found that people said the Google Assistant was the smartest and Siri was the dumbest. I don't care about how fast numbers are crunched. I care about whether people's experiences with the AI have changed. Is Siri no longer the dumbest of the big three? That's what matters.
"Of course, it is a synthetic benchmark and the real world experience will be where it matters for people."
This sentence from the last paragraph contradicts the entire article. Numbers like Geekbench results don't tell you anything about real AI performance. If you think they do, please explain why.
Nah, I have both and just sold my iPhone. The Pixel 6 is the better device and this article is flawed. There's no true comparison in this article, only a bunch of words, and machine learning has been around on the Pixel platform for a while.
The author is clearly an Apple Fanboy. You can tell by the 'giddy excitement' in his writing.
Geekbench scores, or any 'scores' of the sort, mean nothing to real-world users. A benchmark would need to understand how every single phone out there processes different workloads and tune its tests accordingly to make a real comparison. Geekbench clearly doesn't do this.
Even if Apple's chip is better, there shouldn't be that huge a difference between the two. Yet there is with this app.
These comparisons should be done by giving a person both phones and letting them get familiar with each, then giving them a list of things to do. The results would be based on their actual experience.
Who cares about these stupid tests? I can get a developer to make an app that is biased toward Android and publish the results...