Quote from: _MT_ on April 16, 2021, 19:11:32
Another thing you need to realize is that timing is expressed in clock cycles. As frequency increases, the duration of a single cycle decreases; one is the inverse of the other. While the numbers might be getting bigger and bigger, the actual latency (expressed in time) doesn't change much at all. Some combinations are better than others, since you're dealing with whole numbers, but on the whole it's not getting worse. It's not really getting any better either; we hit a wall some time ago. 40 for DDR5-6400 (3.2 GHz) is like 20 for DDR4-3200 (1.6 GHz), which is like 10 for DDR3-1600 (800 MHz) and 5 for DDR2-800 (400 MHz).
This is something I often wonder about. As we move from DDR2 to 3 to 4 and now to 5, it's getting faster, sure, but the latencies (in cycles) are also climbing. Granted, some applications benefit more from the higher transfer rate and aren't hurt much by the increased latency, giving a net gain in performance, but others, if I'm not mistaken, are the opposite. Supposedly the overall result is that more is gained from the higher speeds than is lost to the increased latencies, and memory has also become more efficient and dense, which I see as the biggest improvements (better battery life and larger capacity). Still, I wonder just how much actual performance has increased because of this. I could be totally wrong, but I suspect much more of the speed gain we've seen comes from wider buses and things like dual/triple-channel.
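Just to put numbers on the point in the quote, here's a quick Python sketch that converts CAS latency from cycles to nanoseconds (the CL/clock pairs are the ones quoted above; latency in time is simply cycles divided by the memory clock):

```python
# Sanity-check the quoted figures: CAS latency in nanoseconds = CL cycles / memory clock.
# (For DDR, the memory clock is half the transfer rate, e.g. DDR5-6400 -> 3.2 GHz.)
examples = [
    ("DDR2-800",  5,  400e6),   # name, CL in cycles, memory clock in Hz
    ("DDR3-1600", 10, 800e6),
    ("DDR4-3200", 20, 1.6e9),
    ("DDR5-6400", 40, 3.2e9),
]

for name, cl, clock_hz in examples:
    latency_ns = cl / clock_hz * 1e9
    print(f"{name}: CL{cl} -> {latency_ns:.1f} ns")
# Every line comes out to 12.5 ns: the cycle count doubles each generation,
# but the absolute CAS latency stays essentially flat.
```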