r/LocalLLaMA · 1 min read

Qwen 3.6 27B: IQ3_XXS + KV Q8 vs Q4_K_XL + KV Q4 (262K context)

Mirrored from r/LocalLLaMA for archival readability. Support the source by reading on the original site.

Hey y'all. I have a 24 GB GPU. Which setup do you think is better? I'm using Unsloth quants; both are UD quants. I need 262K context for my Hermes agent and use case. Both setups fit perfectly in VRAM.

I've heard that Qwen 3.6 27B holds up quite well even with the KV cache at Q4.

I'm using LM Studio, so I need to keep K and V at the same quantization value; otherwise CPU usage climbs much higher.
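For context on why the KV setting matters so much at 262K, here is a back-of-the-envelope sketch of KV-cache size. The layer count, KV-head count, and head dimension below are illustrative assumptions, not the real Qwen 3.6 27B config, and the byte counts ignore the small per-block scale overhead that GGUF cache quant formats add:

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   ctx: int, bytes_per_elem: float) -> float:
    """Rough KV-cache size: 2 tensors (K and V) per layer, each holding
    ctx * kv_heads * head_dim elements."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem

CTX = 262_144                              # the 262K context from the post
LAYERS, KV_HEADS, HEAD_DIM = 48, 8, 128    # ASSUMED GQA shape, for illustration

for name, bpe in [("Q8 (~1 byte/elem)", 1.0), ("Q4 (~0.5 byte/elem)", 0.5)]:
    gib = kv_cache_bytes(LAYERS, KV_HEADS, HEAD_DIM, CTX, bpe) / 2**30
    print(f"KV cache at {name}: {gib:.1f} GiB")
```

With these made-up dimensions, Q4 halves the cache relative to Q8 (24.0 GiB vs 12.0 GiB here), which is why the KV quant choice can dominate the VRAM budget at very long contexts even though the real model's GQA shape will give different absolute numbers.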

submitted by /u/My_Unbiased_Opinion

Discussion (0)

No comments yet.