Quote from: anan on September 13, 2023, 09:34:48
Likely because copper cables were just "good enough" for the features that were implemented. And optics would likely require expensive additional hardware/connectors.
No copper cable is "good enough", because they have a shameful maximum length of 1.2 m for TB3-5.
We urgently need optical ports of 500 Gbit/s and up on PCs/laptops, which would let you connect all peripherals to the PC/laptop with a single cable and at a long distance from the main unit.
With the monstrous power consumption and noise of today's extremely backward process nodes (and it will not get better on silicon; this is a technological dead end), there is no place for a system unit or a laptop under heavy load next to an owner who wants to sit in complete peace and quiet. At least one problem can be solved TRIVIALLY with optical cables: moving the noisy hardware outside the living/office rooms.
We have been waiting for 8K monitors (in other words, above 220 ppi on large diagonals) for many years. Filthy Google has deliberately spoiled, and keeps spoiling, the vision of the entire population of the planet, who are forced to use their filthy Chrome with its buggy, blurry black-and-white anti-aliasing that cannot be disabled.
But even at 8K in 2D, for ordinary surfing and various professional work, we need 30-36 bit panels at 120 Hz+ with a real response time below 7 ms B2W/G2G.
Therefore, TB5 is already obsolete (and the developers know perfectly well that it is outdated before it even ships; that is why they created the desperate 120 Gbit/s mode, which is still not enough for 8K@120Hz 30-bit 4:4:4 monitors without lossy DSC compression). What is needed now is a video interface of at least 200 Gbit/s, and taking all other needs into account, it should definitely be 500 Gbit/s optics.
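For reference, a rough back-of-the-envelope check of that claim (a sketch only; the blanking and line-coding overheads below are assumed approximations in the spirit of CVT reduced blanking and DisplayPort 128b/132b coding, not spec-exact figures):

[code]
# Uncompressed bandwidth estimate for 8K (7680x4320) @ 120 Hz, 30-bit 4:4:4.
# Overhead factors are assumptions for illustration, not official spec values.

H_ACTIVE, V_ACTIVE = 7680, 4320
REFRESH_HZ = 120
BITS_PER_PIXEL = 30          # 10 bits per channel, 4:4:4

# Active pixel data alone
active_gbps = H_ACTIVE * V_ACTIVE * REFRESH_HZ * BITS_PER_PIXEL / 1e9
print(f"active pixel data:  {active_gbps:.1f} Gbit/s")   # ~119.4 Gbit/s

# Rough overheads: ~2.5% for reduced-blanking timing, plus 132/128 line coding
BLANKING_OVERHEAD = 1.025
LINE_CODING_OVERHEAD = 132 / 128
link_gbps = active_gbps * BLANKING_OVERHEAD * LINE_CODING_OVERHEAD
print(f"required link rate: {link_gbps:.1f} Gbit/s")      # ~126 Gbit/s > 120
[/code]

Even with conservative assumptions, the required link rate lands above 120 Gbit/s, so such a monitor does not fit in TB5's 120 Gbit/s mode without DSC.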
This brings up another key issue, as I have written many times over the years: RAM bandwidth is shamefully low on the x86 platform. Even ordinary PCs/laptops should already be pushing at least 200 GB/s, not to mention top-end ones, which ought to push 350-400 GB/s, like the Apple M2 Max.
x86 is suffocating on slow RAM, hence the shameful attempts to inflate the L3 cache to enormous sizes (which is still useless) and even attempts to introduce an on-chip L4 cache.
Memory controllers on x86 should long ago have been at least 256-bit, and preferably 512-bit, as in the Apple M2 Max.
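A quick peak-bandwidth calculation shows what controller width does to the numbers (a minimal sketch; the DDR5-5600 and LPDDR5-6400 data rates are illustrative assumptions, and sustained real-world bandwidth is always lower than the theoretical peak):

[code]
# Peak theoretical bandwidth = bus width (bits) / 8 * transfer rate (MT/s).
# Illustrative configurations only.

def peak_gbs(bus_bits: int, mt_per_s: int) -> float:
    return bus_bits / 8 * mt_per_s / 1000  # GB/s

configs = {
    "typical x86 desktop, dual-channel DDR5-5600 (128-bit)": (128, 5600),
    "hypothetical 256-bit DDR5-5600 controller":             (256, 5600),
    "Apple M2 Max, 512-bit LPDDR5-6400":                      (512, 6400),
}

for name, (bus, rate) in configs.items():
    print(f"{name}: {peak_gbs(bus, rate):.1f} GB/s")
# ~89.6, ~179.2 and ~409.6 GB/s respectively
[/code]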
Slow memory severely limits external devices and their interaction with the processor cores and other logic on the chip. This is the Achilles' heel of x86, and it gets worse every year against the backdrop of competitors' successes, above all Apple's.