club-5060ti: practical RTX 5060 Ti local LLM notes and configs
Mirrored from r/LocalLLaMA.
I put together a small public repo for RTX 5060 Ti 16GB local LLM setups. It takes inspiration from the club-3090 repo, but this one focuses on documenting what we've actually tested on 5060 Ti hardware, so the setup details are easier to share and reproduce.

The current seed setup is 2x RTX 5060 Ti 16GB on Linux, with notes for:

- vLLM serving of Qwen3.6 27B NVFP4/MTP (a serve sketch follows this list)
- llama.cpp MTP GGUF serving for Qwen3.6 27B Q4/Q6
- Q6 long-context fit checks, including a 204800-token direct long-context preset
- a safer 65536-token llama.cpp router preset for extra headroom (launch sketch below)
- initial Qwen3.6 35B A3B checks on llama.cpp and vLLM
- sanitized launch examples
- model download and llama.cpp update helper scripts
- simple OpenAI-compatible smoke/bench scripts (minimal example below)
- CSV seed results and report templates

The aim is to keep it practical: exact configs, versions, context lengths, KV settings, and caveats rather than vague tokens/sec claims. If anyone else is testing similar 5060 Ti setups, feel free to open an issue or PR with enough detail to reproduce the result.
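For context, here is a minimal sketch of what the vLLM side of a setup like this typically looks like. The model path, context length, and memory fraction are placeholders, not values from the repo; NVFP4 quantization is assumed to be auto-detected from the checkpoint config rather than set by a flag.

```bash
#!/usr/bin/env bash
# Hypothetical vLLM serve sketch for a 2x 16GB tensor-parallel setup.
# Model path and numbers below are illustrative placeholders.
vllm serve ./models/Qwen3.6-27B-NVFP4 \
  --tensor-parallel-size 2 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.92 \
  --port 8000
```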
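And a sketch of what a "safer 65536 router preset" on llama.cpp could look like. Model filename, tensor split, and port are assumptions, not the repo's exact config; the quantized KV cache is one common way to buy the extra headroom the preset is after.

```bash
#!/usr/bin/env bash
# Hypothetical llama-server launch for a capped-context router preset.
# -c 65536       : reduced context for extra VRAM headroom
# -ngl 99        : offload all layers to GPU
# --tensor-split : even split across the two 5060 Ti cards
# --cache-type-k : quantized K cache to shrink long-context memory
#                  (quantizing the V cache too usually also requires
#                  flash attention enabled, depending on the build)
llama-server \
  -m ./models/Qwen3.6-27B-Q6_K.gguf \
  -c 65536 \
  -ngl 99 \
  --tensor-split 1,1 \
  --cache-type-k q8_0 \
  --host 127.0.0.1 \
  --port 8080
```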
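In the spirit of the repo's smoke/bench scripts, a minimal OpenAI-compatible smoke check against a server like the one above can be a single curl call. Endpoint, port, and model name are assumptions here; llama-server in particular accepts any model string.

```bash
#!/usr/bin/env bash
# Minimal smoke check: one short chat completion against a local
# OpenAI-compatible endpoint. Port and model name are placeholders.
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen3.6-27b",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 32
      }'
```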