Google's Tensor uses a custom NPU architecture (the TPU) that is not open to the world, while Apple ships a more conventional NPU. That makes it easy to get numbers for the A15's NPU but hard for Tensor. As a result, Geekbench ML falls back on a more generic test path, NNAPI (the Android Neural Networks API). So even though Tensor's TPU is based on an NPU design, it doesn't share the same architecture; the app will still produce a score under "NPU," but that score won't represent the TPU's actual capability. In truth, it is hard to measure the performance of an ML chip at all, though some have settled on measuring it in FLOPS or tera operations per second (TOPS). At the very least, I can be certain that the Geekbench ML app doesn't measure either of those.
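For context on what a TOPS number actually means: spec-sheet TOPS figures are usually a theoretical peak derived from the chip's MAC count and clock, not a measurement. A back-of-the-envelope sketch (the MAC count and clock below are hypothetical placeholders, not real Tensor or A15 specs):

```python
def peak_tops(mac_units: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Theoretical peak tera-operations per second.

    Each MAC unit performs a multiply and an add (2 ops) per clock
    cycle, so peak ops/s = MACs * clock * 2, then scale to tera.
    """
    return mac_units * clock_hz * ops_per_mac / 1e12

# Hypothetical NPU: 4096 MAC units at 1 GHz.
print(peak_tops(4096, 1.0e9))  # 8.192 TOPS
```

A benchmark like Geekbench ML measures something quite different (end-to-end inference latency on real models through NNAPI), which is one reason its scores and vendor TOPS claims rarely line up.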
All of this may be hard to follow for someone who knows very little about ML.