r/LocalLLaMA

club-5060ti: practical RTX 5060 Ti local LLM notes and configs



I put together a small public repo of RTX 5060 Ti 16GB local LLM setups.

I took inspiration from the club-3090 repo, but this one is focused on documenting what we’ve actually tested on 5060 Ti hardware, so the setup details are easier to share and reproduce.

Current seed setup is 2x RTX 5060 Ti 16GB on Linux, with notes for:

- vLLM serving Qwen3.6 27B NVFP4/MTP

- llama.cpp MTP GGUF serving for Qwen3.6 27B Q4/Q6

- Q6 long-context fit checks, including a 204800-token direct long-context preset

- a safer 65536-token llama.cpp router preset for extra headroom

- initial Qwen3.6 35B A3B checks on llama.cpp and vLLM

- sanitized launch examples

- model download and llama.cpp update helper scripts

- simple OpenAI-compatible smoke/bench scripts

- CSV seed results and report templates
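The OpenAI-compatible smoke scripts in the list above can be sketched like this. This is a minimal illustration, not the repo's actual script: the base URL, port, and served model name are assumptions, and it only checks that the server answers a trivial chat completion.

```python
# Minimal smoke test against an OpenAI-compatible endpoint (llama.cpp server
# or vLLM). BASE_URL and the model name are hypothetical; adjust to your setup.
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8080/v1"  # assumed local server address


def build_chat_request(model: str, prompt: str,
                       max_tokens: int = 32) -> urllib.request.Request:
    """Build a /v1/chat/completions request without sending it."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.0,  # keep smoke-test output as deterministic as possible
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )


def smoke_test(model: str) -> str:
    """Send one tiny completion and return the reply text, or raise on failure."""
    req = build_chat_request(model, "Reply with the single word: pong")
    with urllib.request.urlopen(req, timeout=60) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


# Example (requires a running server):
#   print(smoke_test("qwen-27b"))  # hypothetical served model name
```

A stdlib-only script like this keeps the smoke check dependency-free, so it runs on a fresh box before any Python environment for benchmarking is set up.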

The aim is to keep it practical: exact configs, versions, context lengths, KV-cache settings, and caveats, rather than vague tokens/sec claims.

If anyone else is testing similar 5060 Ti setups, feel free to open an issue or PR with enough detail to reproduce the result.

submitted by /u/do_u_think_im_spooky
