r/LocalLLaMA · 1 min read

ByteDance-Seed/Cola-DLM · Hugging Face

Mirrored from r/LocalLLaMA for archival readability. Support the source by reading on the original site.


Cola-DLM (Continuous Latent Diffusion Language Model) is a hierarchical diffusion language model that operates in a continuous latent space. It combines a Text VAE with a block-causal Diffusion Transformer (DiT) prior: the VAE maps text into continuous latent sequences and decodes latents back to tokens, while the DiT models the prior over those latents via Flow Matching.
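As a rough illustration of the inference flow this implies, here is a minimal PyTorch sketch: encode context with the VAE, integrate the DiT's Flow Matching velocity field from noise with a plain Euler sampler, then decode. The `TextVAE`/`BlockCausalDiT` interfaces, the step count, and the Euler schedule are all assumptions for illustration, not the repo's actual API.

```python
import torch

@torch.no_grad()
def generate_block(vae, dit, prompt_ids, latent_shape, steps=32):
    # 1. Encode the prompt into continuous latents (hypothetical VAE encoder API).
    context = vae.encode(prompt_ids)

    # 2. Start the next latent block from Gaussian noise and transport it
    #    toward the latent data distribution by integrating the DiT's
    #    predicted velocity field dx/dt = v(x_t, t, context).
    x = torch.randn(latent_shape)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((latent_shape[0],), i * dt)
        v = dit(x, t, context)  # block-causal conditioning on earlier latents
        x = x + dt * v          # explicit Euler step

    # 3. Decode the transported latents back into tokens (hypothetical decoder API).
    return vae.decode(x)
```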

This model repository contains the HuggingFace-format checkpoint for the paper Continuous Latent Diffusion Language Model.
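If the checkpoint follows the usual Transformers custom-code pattern, loading might look like the sketch below; whether the repo actually supports the Auto classes with `trust_remote_code` is an assumption to verify against the model card.

```python
from transformers import AutoModel, AutoTokenizer

repo = "ByteDance-Seed/Cola-DLM"

# Assumption: the repo ships custom modeling code loadable via the Auto
# classes; check the model card for the intended entry point.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)

print(tokenizer.pad_token_id)  # expected 100277 per the Model Details below
print(tokenizer.eos_token_id)  # expected 100257
```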

Model Details

  • Architecture: Text VAE + block-causal DiT latent prior.
  • Training objective: two-stage training, with Text VAE pretraining followed by joint Text VAE + DiT training under a Flow Matching objective (a generic loss sketch follows this list).
  • Training-compute checkpoint: the released weights correspond to the 2000 EFLOPs checkpoint reported in the paper's RQ4 scaling curve.
  • Tokenizer: OLMo 2 tokenizer with a 100,278-entry vocabulary.
  • Special token ids: pad_token_id=100277, eos_token_id=100257, im_end_token_id=100265.
  • Framework: PyTorch 2.1+ and HuggingFace Transformers 4.40+.
  • License: Apache License 2.0.
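
To make the Flow Matching objective from the training bullet concrete, here is a minimal sketch of the standard conditional flow-matching loss on VAE latents. The linear interpolation path, the uniform time sampling, and the `dit(xt, t, context)` signature are generic-recipe assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(dit, z1, context):
    """Generic conditional Flow Matching loss on clean latents z1.

    Uses the common linear path x_t = (1 - t) * z0 + t * z1 with
    z0 ~ N(0, I), whose velocity target is z1 - z0. This is the
    textbook recipe, not necessarily the paper's exact schedule.
    """
    z0 = torch.randn_like(z1)                      # noise endpoint
    t = torch.rand(z1.shape[0], device=z1.device)  # uniform time in [0, 1)
    t_ = t.view(-1, *([1] * (z1.dim() - 1)))       # broadcast over latent dims
    xt = (1 - t_) * z0 + t_ * z1                   # point on the linear path
    v_target = z1 - z0                             # constant path velocity
    v_pred = dit(xt, t, context)                   # DiT predicts the velocity
    return F.mse_loss(v_pred, v_target)
```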
submitted by /u/pmttyji
