Recent posts
#71
Last post by MrPotatoHead - Today at 13:03:21
I'm no expert on robotics, but isn't 325k per dog a little steep? Most of these are around 100k per dog.
#72
Last post by B - Today at 13:02:26
Which coolant is supposed to be used, then?
#73
Last post by No DLSS-like ML upscaling - Today at 13:02:24
AAA gaming
Know that Strix Halo has an RDNA3.5 iGPU and as such does not support hardware-based, DLSS-like machine learning (ML) upscaling. Only RDNA4 does that, and it's currently only available in dedicated desktop GPU cards.
AI/LLM token generation (tg)
But Strix Halo's RAM is connected to a 256-bit memory bus, equivalent to quad-channel (4 * 64-bit) in a desktop or SODIMM laptop PC. That gives it double the memory bandwidth of normal desktop and mini PCs, which are all 128-bit (2 * 64-bit, aka dual-channel), and actually more than double, since it's also 8000 MT/s memory vs 5600 to 6200 MT/s on a desktop. That bandwidth is what determines AI/LLM token generation (tg) speed.
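Rough math, if you want it (a quick Python sketch; peak bandwidth is just transfer rate times bus width in bytes, and real-world numbers come in lower):
```python
# Back-of-the-envelope peak bandwidth: MT/s * bus width in bytes.
# Figures are the ones from the post; real-world bandwidth is lower.

def bandwidth_gbs(mts: int, bus_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return mts * (bus_bits / 8) / 1000

strix_halo = bandwidth_gbs(8000, 256)  # 256-bit bus at 8000 MT/s
desktop = bandwidth_gbs(6000, 128)     # dual-channel desktop, midpoint of 5600-6200 MT/s

print(f"Strix Halo: {strix_halo:.0f} GB/s")        # ~256 GB/s
print(f"Desktop:    {desktop:.0f} GB/s")           # ~96 GB/s
print(f"Ratio:      {strix_halo / desktop:.2f}x")  # ~2.67x, i.e. more than double
```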
AI/LLM prompt processing (pp)
Strix Halo's 8060S iGPU is roughly equivalent to an RTX 4060 desktop GPU:
3dmark.com/search:
Radeon 8060S: Average score: 10022
RTX 4060 (desktop): Average score: 10398
so the pp speed will be roughly equivalent to that of an RTX 4060 desktop.
Man, if only this had 192 GB RAM, because then it could fit the very usable 3-bit quants of huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF or other big LLMs.
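For scale, a rough size check (a sketch; the ~3.5 effective bits per weight for 3-bit K-quants is my assumption, the real file size depends on the exact quant mix):
```python
# Approximate GGUF size: parameter count * effective bits per weight / 8.

def gguf_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Rough quantized model file size in GB."""
    return params_b * bits_per_weight / 8

# 397B params at an assumed ~3.5 effective bpw (typical-ish for 3-bit K-quants)
print(f"{gguf_size_gb(397, 3.5):.0f} GB")  # ~174 GB -> would fit in 192 GB, with room left for KV cache
```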
#76
And phones still won't last more than a day or two 😁
#79
Last post by Swizzy - Today at 12:32:29
Who would've thought?
#80
Last post by Will MBAir fit 27B quants - Today at 12:23:33
Will the 24 GB RAM option fit Qwen3.5-27B-UD-Q4_K_XL.gguf (17.6 GB) or Qwen3.5-27B-UD-Q5_K_XL.gguf (20.2 GB)? (huggingface.co/unsloth/Qwen3.5-27B-GGUF) I know there's mlx-community/Qwen3.5-27B-4bit (16.1 GB) too, but I don't know whether its perplexity is as good.
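Rough headroom math, for what it's worth (a sketch; the 70% GPU-visible share is my assumption, Apple Silicon reportedly defaults to roughly two thirds to three quarters of unified memory for the GPU, raisable via the iogpu.wired_limit_mb sysctl):
```python
# Quick fit check: model file size vs the GPU-visible slice of unified memory.
# GPU_SHARE = 0.70 is an assumed middle value, not an official Apple figure.

TOTAL_GB = 24
GPU_SHARE = 0.70
budget = TOTAL_GB * GPU_SHARE  # ~16.8 GB

models = [
    ("Qwen3.5-27B-UD-Q4_K_XL.gguf", 17.6),
    ("Qwen3.5-27B-UD-Q5_K_XL.gguf", 20.2),
    ("mlx-community/Qwen3.5-27B-4bit", 16.1),
]

for name, size_gb in models:
    verdict = "fits" if size_gb <= budget else "needs a raised limit"
    print(f"{name}: {size_gb} GB vs {budget:.1f} GB default budget -> {verdict} (before KV cache)")
```
If those assumptions hold, only the MLX 4-bit squeaks under the default limit; the two GGUFs would need the wired limit bumped, and the Q5 would be very tight even then.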