r/LocalLLaMA · 1 min read

Multi-Token Prediction (MTP) for Qwen on LLaMA.cpp + TurboQuant

Mirrored from r/LocalLLaMA for archival readability. Support the source by reading on the original site.

Implemented Multi-Token Prediction for Qwen on LLaMA.cpp with TurboQuant.

+62% throughput (21 → 34 tokens/s)! 90% draft acceptance rate.

Running locally on a MacBook Pro M5 Max 64GB RAM.

Outputs:
LLaMA.cpp + TurboQuant: 21 tokens/s
LLaMA.cpp + TurboQuant + MTP: 34 tokens/s
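
The acceptance rate above maps to throughput roughly as the standard speculative-decoding model predicts. A minimal sketch, assuming a per-token acceptance probability `alpha` and `k` drafted tokens per verification step — only the 90% aggregate rate comes from the post; `k = 1` (a single MTP draft head) is an assumption:

```python
def expected_tokens_per_step(alpha: float, k: int) -> float:
    """Expected tokens emitted per verification step.

    Geometric-series result from the speculative decoding literature:
    E = (1 - alpha**(k + 1)) / (1 - alpha), approaching k + 1 as alpha -> 1.
    """
    if alpha >= 1.0:
        return k + 1.0
    return (1.0 - alpha ** (k + 1)) / (1.0 - alpha)

# With alpha = 0.9 and a single draft token (k = 1, an assumption),
# each verification step yields (1 - 0.81) / 0.1 = 1.9 tokens on average,
# i.e. up to ~1.9x throughput before the draft head's own cost — consistent
# with the observed 34/21 ≈ 1.62x once that overhead is paid.
print(expected_tokens_per_step(0.9, 1))
```

The gap between the theoretical ~1.9× ceiling and the measured ~1.62× is plausibly the cost of running the MTP head itself plus verification overhead.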

Patched LLaMA.cpp with MTP and TurboQuant: https://github.com/AtomicBot-ai/atomic-llama-cpp-turboquant

Quantized Qwen 3.6 27B (and 35B) into GGUF with MTP: https://huggingface.co/collections/AtomicChat/qwen-36-udt-mtp

Local AI models app: Atomic.Chat

submitted by /u/gladkos
