If you plan to run agentic workflows on this machine with the state-of-the-art (SOTA) LLM Qwen3.6-35B-A3B-UD-Q4_K_M, be aware that 32 GB of RAM won't be enough:
Quote from reddit.com/r/LocalLLaMA/comments/1sq94qx/is_anyone_getting_real_coding_work_done_with..: "I've come to the conclusion that (1) 32768 is the biggest context I can get away with in an adequately smart model, and (2) it just ain't enough."
(Here is a good (V)RAM requirement calculator: huggingface.co/spaces/oobabooga/accurate-gguf-vram-calculator. Paste, for example, huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF/blob/main/Qwen3.6-35B-A3B-UD-Q4_K_M.gguf into its "GGUF Model URL" field.)
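If you just want a back-of-envelope sanity check before reaching for the calculator, the dominant terms are the GGUF file size plus the KV cache, which grows linearly with context length. A minimal sketch follows; every architecture number in it (layer count, KV heads, head dimension, file size) is an illustrative assumption, not the actual Qwen3.6-35B-A3B spec, so use the calculator above for real figures.

```python
# Rough (V)RAM estimate for a local GGUF model: weights + KV cache + overhead.
# All architecture numbers here are illustrative assumptions, NOT the real
# Qwen3.6-35B-A3B specs -- the linked calculator reads them from the file.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: int = 2) -> int:
    """KV cache: 2 (K and V) * layers * KV heads * head dim * context * dtype size."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

def total_ram_gib(model_file_gib: float, kv_bytes: int,
                  overhead_gib: float = 1.5) -> float:
    """Weights + KV cache + a rough allowance for activations and runtime overhead."""
    return model_file_gib + kv_bytes / 2**30 + overhead_gib

# Hypothetical GQA config (48 layers, 4 KV heads, head dim 128) at the
# 32768-token context from the quote, with an fp16 KV cache:
kv = kv_cache_bytes(n_layers=48, n_kv_heads=4, head_dim=128, ctx_len=32768)
print(f"KV cache: {kv / 2**30:.1f} GiB")                          # 3.0 GiB
print(f"Total for a ~20 GiB Q4_K_M file: {total_ram_gib(20.0, kv):.1f} GiB")
```

Note that this only covers the model itself; the OS, the agent runtime, and anything else resident in the same 32 GB all compete for what remains, which is why the quoted poster found 32768 tokens to be the practical ceiling.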