w.r.t. the article: Sure, an RTX 2080 Ti desktop card with 11GB of its own VRAM, outputting to a monitor through its own video outputs, is not much affected by PCIe3 x8. However, we've already seen that the 4GB 5500 XT is affected by it, sometimes severely. Furthermore, most laptops with Nvidia dGPUs use Optimus (AMD does something similar nowadays), which feeds every single frame back over the PCIe link, further straining the limited bandwidth.
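To put rough numbers on that copy-back cost, here's a quick back-of-the-envelope sketch (theoretical one-direction link rates; the 1080p/144Hz display values are just an assumed example, not measurements from the article):

```python
# Rough numbers to illustrate why Optimus frame copy-back matters.
# Assumed example: 1080p display at 144Hz, 4 bytes per pixel.

GT_PER_LANE = 8e9        # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130     # PCIe 3.0 uses 128b/130b line encoding

def pcie3_bandwidth_gb(lanes):
    """Theoretical one-direction PCIe 3.0 bandwidth in GB/s."""
    return GT_PER_LANE * ENCODING * lanes / 8 / 1e9

def frame_copy_gb(width, height, hz, bytes_per_pixel=4):
    """GB/s consumed by copying every rendered frame back over the link."""
    return width * height * bytes_per_pixel * hz / 1e9

x8 = pcie3_bandwidth_gb(8)              # ~7.88 GB/s
copy = frame_copy_gb(1920, 1080, 144)   # ~1.19 GB/s
print(f"x8 link: {x8:.2f} GB/s, 1080p144 copy-back: {copy:.2f} GB/s "
      f"({copy / x8:.0%} of the link)")
```

So even on a full x8 link, copy-back alone eats roughly 15% of the bandwidth on top of normal asset traffic; halve the lanes and the share doubles.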
@_MT_ it seems the monolithic APU does have a slight performance edge in some cases vs the 3700/3800 systems, in spite of the much smaller L3 cache. For me, it's having the IGP for my VM host - since only Intel supports GPU sharing on consumer HW; AMD and Nvidia only support it on their pro cards. This means one of the two GPUs has to be integrated into the CPU. I could choose a weaker Intel system or a more powerful AMD... hard choice /s. My system is ideally mITX, so I don't quite have the space for 2 GPUs.
Of course, neither company's approach is ideal, IMO.
AMD gives their desktop platform 20 lanes from the CPU (more from the chipset), but starves their upper-range APUs (the lower range gets the same layout, but fares better against Intel's U series).
Intel gives all of their CPUs an IGP and 16 lanes, but what's good for an H series laptop is not always the greatest for a desktop. That's not even counting Intel's cheapness: their U/Y series laptops have always gotten just 4 lanes from the CPU (the 10, 12, and 16 lane configurations all come from the chipset - NO EXCEPTIONS). Intel makes an even greater mess by allowing OEMs to permanently downclock the 4 PCIe3 lanes between the CPU and chipset to PCIe2 speeds, further starving what little bandwidth there is.
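For a sense of what that downclock costs, here's the same kind of back-of-the-envelope comparison (theoretical one-direction rates; PCIe 2.0 uses 8b/10b encoding vs 128b/130b on 3.0):

```python
# Chipset uplink comparison: 4 lanes at PCIe 3.0 vs the same 4 lanes
# permanently downclocked to PCIe 2.0. Theoretical one-direction GB/s.

def link_gb(gt_per_lane, encoding, lanes):
    """GB/s for a link: (GT/s per lane) * encoding efficiency * lanes / 8 bits."""
    return gt_per_lane * encoding * lanes / 8

pcie3_x4 = link_gb(8, 128 / 130, 4)   # ~3.94 GB/s
pcie2_x4 = link_gb(5, 8 / 10, 4)      # 2.00 GB/s
print(f"PCIe3 x4: {pcie3_x4:.2f} GB/s, PCIe2 x4: {pcie2_x4:.2f} GB/s")
```

The downclock roughly halves the uplink - and remember everything hanging off the chipset (NVMe, USB, LAN) shares that one link.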
In the end, no decent full-stack approach from either, IMO.