Multi-Token Prediction (MTP) for Qwen on LLaMA.cpp + TurboQuant
Mirrored from r/LocalLLaMA for archival readability. Support the source by reading on the original site.
Implemented Multi-Token Prediction for Qwen on llama.cpp with TurboQuant: +40% decode performance at a 90% draft acceptance rate, running locally on a MacBook Pro M5 Max with 64 GB RAM.

Outputs:

- Patched llama.cpp with MTP and TurboQuant: https://github.com/AtomicBot-ai/atomic-llama-cpp-turboquant
- Qwen 3.6 27B (and 35B) quantized to GGUF with MTP: https://huggingface.co/collections/AtomicChat/qwen-36-udt-mtp
- Local AI models app: Atomic.Chat
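For intuition on how a 90% acceptance rate can translate into a speedup like the one reported, here is a rough back-of-envelope model. It is not taken from the linked patch: the per-token draft-cost factor and the assumption that drafted tokens are accepted independently are mine, and `expected_tokens_per_step` is a hypothetical helper, not a function from the repo.

```cpp
// Back-of-envelope estimate (not from the linked patch): expected speedup of
// MTP / speculative decoding as a function of the per-token acceptance rate.
#include <cstdio>
#include <cmath>

// Expected tokens emitted per verification step when the draft head proposes
// `k` tokens and each is accepted independently with probability `alpha`
// (the target model contributes one token on the first rejection, so the
// count ranges from 1 to k + 1). Geometric series: (1 - alpha^(k+1)) / (1 - alpha).
double expected_tokens_per_step(double alpha, int k) {
    if (alpha >= 1.0) return k + 1.0;
    return (1.0 - std::pow(alpha, k + 1)) / (1.0 - alpha);
}

int main() {
    const double alpha      = 0.90;  // acceptance rate quoted in the post
    const double draft_cost = 0.15;  // assumed relative cost of the MTP head per drafted token
    for (int k = 1; k <= 4; ++k) {
        double tokens  = expected_tokens_per_step(alpha, k);
        double speedup = tokens / (1.0 + draft_cost * k);  // one full forward pass + k cheap drafts
        std::printf("k=%d  E[tokens/step]=%.2f  est. speedup=%.2fx\n", k, tokens, speedup);
    }
    return 0;
}
```

The estimate is quite sensitive to how expensive the MTP head is relative to a full forward pass and to how many tokens it drafts per step; the actual gain depends on the patch's verification scheme, so treat this only as a sanity check on the order of magnitude.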