Orthrus: Memory-Efficient Parallel Token Generation via Dual-View Diffusion [R]
Idea: inject a trainable diffusion attention module into each layer of a frozen AR Transformer, with both heads sharing one KV cache. The diffusion head drafts K=32 tokens in parallel; the AR head verifies them in a second pass and accepts the longest matching prefix (a sketch of this loop follows the limitations below). The output distribution is provably identical to the base model's. Results:
Limitations: strictly bounded by the frozen base model (it inherits that model's biases, hallucinations, and knowledge gaps); evaluated on Qwen3 only; greedy + rejection sampling only.
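The draft-and-verify loop described above is greedy speculative decoding with the diffusion head as the drafter. Below is a minimal sketch of that loop under stated assumptions: `ar_logits`, `diffusion_draft`, and `generate` are illustrative names, both heads are toy stand-ins rather than the paper's modules, and the verification is shown as a per-token loop for clarity, whereas the real method runs it as one batched forward pass over the shared KV cache.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50  # toy vocabulary size (assumption for the sketch)


def ar_logits(ctx: list[int]) -> np.ndarray:
    """Stand-in for the frozen AR head: deterministic toy logits
    over the vocabulary given the context (hypothetical model)."""
    h = int(np.sum(ctx[-4:])) if ctx else 0
    out = np.zeros(VOCAB)
    out[(h * 31 + 7) % VOCAB] = 1.0
    return out


def diffusion_draft(ctx: list[int], k: int) -> list[int]:
    """Stand-in for the trainable diffusion head: proposes k tokens
    at once. Here it follows the AR greedy path but corrupts ~20% of
    positions, so some drafts are accepted and some rejected."""
    draft, c = [], list(ctx)
    for _ in range(k):
        t = int(np.argmax(ar_logits(c)))
        if rng.random() < 0.2:  # simulate drafting errors
            t = int(rng.integers(VOCAB))
        draft.append(t)
        c.append(t)
    return draft


def generate(prompt: list[int], n_new: int, k: int = 32) -> list[int]:
    """Draft-and-verify loop. Greedy verification accepts the longest
    prefix of the draft that matches the AR head's own argmax choices
    and keeps the AR token at the first mismatch, so the output is
    token-for-token identical to plain AR greedy decoding."""
    out = list(prompt)
    while len(out) < len(prompt) + n_new:
        draft = diffusion_draft(out, k)
        ctx = list(out)
        for t in draft:
            ar_t = int(np.argmax(ar_logits(ctx)))  # verify one position
            ctx.append(ar_t)       # matching draft token, or the
            if ar_t != t:          # AR correction at the mismatch
                break
        out = ctx
    return out[: len(prompt) + n_new]


print(generate([1, 2, 3], n_new=16, k=8))
```

Because every accepted token equals what the AR head would have emitted on its own, the identical-output guarantee holds regardless of how good the drafter is; draft quality only changes how many tokens are accepted per verify pass, i.e. the speedup.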