This is misleading.
Presenting results in performance per watt suggests a linear correlation between power and performance, as if 100 perf at 100 W were equivalent to 50 perf at 50 W.
We all know this is completely untrue. To begin with, a system that reaches 50 perf at 50 W may not even be able to consume 100 W, let alone produce 100 perf, so it's not "the same". More importantly, the performance-versus-power curve is asymptotic, not linear. Over the first few watts it may follow a roughly linear trend, but it quickly turns into "consume a lot more power for a tiny bit more performance".
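To make that concrete, here is a purely illustrative sketch: a toy saturating performance curve with made-up numbers (not measurements of any real CPU), showing how the perf/W figure for the very same chip swings wildly depending on where on the curve you measure it.

```python
import math

def perf(watts, p_max=100.0, k=25.0):
    """Toy saturating performance model: roughly linear at low power,
    flattening out as power increases. p_max and k are hypothetical."""
    return p_max * (1.0 - math.exp(-watts / k))

for w in (10, 25, 50, 100):
    p = perf(w)
    print(f"{w:>3} W -> {p:5.1f} perf, {p / w:4.2f} perf/W")
```

With these toy parameters the same chip scores roughly 3.3 perf/W at 10 W but only about 1.0 perf/W at 100 W, purely because of the operating point.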
Showing AMD with a hugely better perf per watt suggests that, at the same wattage, the AMD chip is hugely more powerful. But this is untrue, because the two systems were NOT measured at the same wattage. That's the misleading part.
What we know of AMD Zen3 is that it offers slightly less performance than Tiger Lake for a lot less energy consumption, hence the better "efficiency". But that's mostly because they occupy different positions on the curve. When Tiger Lake's power consumption is capped to match Zen3's, the differences become much smaller. At Zen3's ideal wattage, Zen3 tends to be a little more powerful than Tiger Lake, not vastly more. And at high wattage, well, Zen3 is unable to take advantage of the extra power, so Tiger Lake pulls ahead while Zen3's performance curve stays essentially flat.
This suggests that a single "number" is a misleading representation of reality, and that a more thorough evaluation should present a curve showing the different power/performance trade-offs.
Now, I understand that's a lot more work, so it's easier to produce a single number. In that case, at least ensure that all systems are measured at the same wattage (except, of course, those that are unable to consume that much). This would be a "fairer" representation of efficiency, even though the selection of the reference wattage is itself a criterion that makes it easy to hand one system or another the "winner" title in the efficiency race.
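A minimal sketch of that last point, again with two entirely hypothetical saturating curves (chip A and chip B here are invented, not Zen3 or Tiger Lake data): depending on which reference wattage you pick, either chip can come out as the more "efficient" one.

```python
import math

def perf_a(watts):
    """Hypothetical chip A: very efficient at low power, saturates early."""
    return 90.0 * (1.0 - math.exp(-watts / 15.0))

def perf_b(watts):
    """Hypothetical chip B: less efficient at low power, keeps scaling longer."""
    return 130.0 * (1.0 - math.exp(-watts / 40.0))

for ref_watts in (15, 45, 70):
    a, b = perf_a(ref_watts), perf_b(ref_watts)
    winner = "A" if a > b else "B"
    print(f"at {ref_watts:>2} W: A = {a:5.1f}, B = {b:5.1f} -> chip {winner} 'wins'")
```

With these toy curves, chip A wins at a low reference wattage and chip B wins at a high one, which is exactly why the chosen reference wattage still decides the outcome.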