As always when chasing ever higher resolutions and frame rates, there's little to no mention of how high (objectively) the hardware requirements are for native support.
As an example: HDMI 2.1, heralded as the future for GPUs over the last few weeks, has barely caught up with displays like 4K OLED TVs. 4K at 120 fps is the top for HDMI 2.1, and 8K is native only at 30 fps.
If you go higher than 4K/144, or above roughly 40 fps at 8K (assuming VRR works that way; otherwise anything over 30 fps), compression gets applied. With this kind of content (fast-paced, leaving no time for more complex algorithms to fix artifacts without increasing latency in games) that may result in a worse experience.
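To make the bandwidth argument concrete, here's a rough back-of-envelope sketch (my own numbers, not an official calculator: it ignores blanking intervals and assumes 8-bit 4:4:4; the ~42.6 Gbit/s figure is HDMI 2.1's effective rate after FRL encoding overhead, so real limits kick in a bit earlier):

```python
# Rough HDMI 2.1 bandwidth check (ignores blanking, audio, and HDR bit depths).
HDMI_2_1_EFFECTIVE_GBPS = 42.6  # ~48 Gbit/s raw FRL minus 16b/18b encoding overhead

def raw_video_gbps(width, height, fps, bits_per_channel=8, channels=3):
    """Uncompressed pixel data rate in Gbit/s."""
    return width * height * fps * bits_per_channel * channels / 1e9

for label, w, h, fps in [("4K @ 120", 3840, 2160, 120),
                         ("4K @ 144", 3840, 2160, 144),
                         ("8K @ 30",  7680, 4320, 30),
                         ("8K @ 60",  7680, 4320, 60)]:
    gbps = raw_video_gbps(w, h, fps)
    verdict = "fits uncompressed" if gbps < HDMI_2_1_EFFECTIVE_GBPS else "needs DSC"
    print(f"{label}: {gbps:.1f} Gbit/s -> {verdict}")
```

Running it shows 4K/120-144 and 8K/30 fit uncompressed at 8-bit, while 8K/60 (~48 Gbit/s of pixel data alone) has to fall back to DSC; add 10-bit HDR and blanking and the headroom shrinks further.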
NVIDIA DLSS is upscaling, and it shows its flaws (and always will; there's no magic algorithm that can recover exact detail from blurry pixels. If there were, NASA wouldn't spend a fortune on the Hubble telescope, they'd just snap photos with an iPhone and upscale indefinitely). In Linus's clip there was visible (though not mentioned by him) light artifacting/flickering in the DLSS example that wasn't there in the native renders.
When it comes to graphics RAM: a single final 8K frame is ~33 MPix. At 8-bit RGB colour (so not even HDR) that's already a ~100 MB still. To produce such a frame, the GPU needs far more memory to store textures for many different objects, often duplicated for more streamlined processing. If a single face texture is, say, a measly 1024x1024 pixels, that's already ~3 MB; with ever higher draw distances (and, say, 50 characters in the FOV) you're up to ~150 MB in GDDR for faces alone.
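Quick sanity check of those numbers (my own rough math; real engines add mipmaps, block compression like BC7, G-buffers and so on, so treat it as an order-of-magnitude estimate):

```python
# Back-of-envelope VRAM math for the figures quoted above.
def framebuffer_mb(width, height, bytes_per_pixel=3):
    """Size of one uncompressed frame in MB (8-bit RGB by default)."""
    return width * height * bytes_per_pixel / 1e6

def texture_budget_mb(tex_width, tex_height, count, bytes_per_pixel=3):
    """Uncompressed memory for `count` textures of the given size."""
    return tex_width * tex_height * bytes_per_pixel * count / 1e6

print(f"8K frame, 8-bit RGB:  {framebuffer_mb(7680, 4320):.0f} MB")       # ~100 MB
print(f"One 1024x1024 face:   {texture_budget_mb(1024, 1024, 1):.1f} MB") # ~3 MB
print(f"50 faces in the FOV:  {texture_budget_mb(1024, 1024, 50):.0f} MB")# ~157 MB
```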
If current games (which target 4K at most) can already eat up 11 GB of GDDR (in my experience), a sensible budget for 8K is roughly 4 times as much, since 8K has four times the pixels of 4K, even once ingesting textures directly from the SSD is implemented.
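For completeness, the 4x figure is just the pixel-count ratio (a deliberately crude scaling assumption on my part: render targets scale with resolution, many textures don't):

```python
# Crude assumption: VRAM use scales with pixel count.
pixels_4k = 3840 * 2160
pixels_8k = 7680 * 4320
scale = pixels_8k / pixels_4k
print(f"8K/4K pixel ratio: {scale:.0f}x")            # 4x
print(f"11 GB at 4K -> ~{11 * scale:.0f} GB at 8K")  # ~44 GB
```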