Smol AI News · 1 min read

not much happened today

Mirrored from Smol AI News for archival readability. Support the source by reading on the original site.

- **MiniMax M2.1** launches as an **open-source** agentic coding Mixture-of-Experts (MoE) model with **~10B active / ~230B total parameters**, claiming to outperform **Gemini 3 Pro** and **Claude Sonnet 4.5**; it supports local inference, including on **Apple Silicon M3 Ultra** with quantization.
- **GLM 4.7** demonstrates local scaling on **Mac Studios** with **2× 512GB M3 Ultra** hardware, highlighting system-level challenges such as memory bandwidth and parallelism.
- **Inference quality** is emphasized as a key factor behind output variance when the same model is served across different deployments.
- Yann LeCun's **VL-JEPA** proposes a **non-generative, non-autoregressive** multimodal model that operates in latent space, enabling efficient real-time video processing with fewer parameters and fewer decoding operations.
- Advances in agentic reinforcement learning for coding include self-play methods in which agents inject and then fix bugs autonomously, enabling self-improvement without human labeling, alongside large-scale RL infrastructure built on massively parallel code generation and execution sandboxes.
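The "~10B active / ~230B total" split above is the defining property of MoE models: every expert must sit in memory, but each token only routes through a top-k subset, so per-token compute scales with the active slice. A minimal sketch of that routing idea, with entirely invented sizes (this is not M2.1's real layout, and real models also share non-expert parameters such as attention and embeddings):

```python
import random

N_EXPERTS = 23          # hypothetical expert count, chosen so the toy numbers echo the article
EXPERT_PARAMS = 10      # hypothetical "billions of parameters" per expert

def route(gate_scores, k=1):
    """Return indices of the k highest-scoring experts for one token."""
    return sorted(range(len(gate_scores)), key=lambda i: -gate_scores[i])[:k]

# Memory footprint tracks ALL experts; per-token compute tracks only top-k.
total_params = N_EXPERTS * EXPERT_PARAMS
active_params = 1 * EXPERT_PARAMS   # k = 1 expert activated per token

gate = [random.random() for _ in range(N_EXPERTS)]
chosen = route(gate)
print(f"~{active_params}B active / ~{total_params}B total, expert {chosen[0]} chosen")
```

This asymmetry is why a quantized ~230B-total model can be practical on a single high-memory machine: the weights must fit in RAM, but each forward pass touches only a fraction of them.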
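The self-play setup in the last bullet can be sketched as a tiny loop: one role corrupts a working program, the other must repair it, and the test suite supplies the reward with no human labels involved. Everything here (the sample function, the single mutation, the trivial "fixer") is invented for illustration; real systems use LLM agents over large codebases inside execution sandboxes:

```python
# Toy sketch of the inject-a-bug / fix-the-bug self-play loop (illustrative only).
SOLUTION = "def add(a, b):\n    return a + b\n"

def inject_bug(src):
    # "breaker" agent: corrupt the program so its tests fail
    return src.replace("a + b", "a - b")

def passes_tests(src):
    # execution sandbox stand-in: run the code, check behavior
    ns = {}
    exec(src, ns)
    try:
        return ns["add"](2, 3) == 5
    except Exception:
        return False

def fix_bug(src):
    # "fixer" agent: an LLM in a real system; a hard-coded repair here
    return src.replace("a - b", "a + b")

buggy = inject_bug(SOLUTION)
repaired = fix_bug(buggy)
# Reward signal for RL: did the fixer restore passing tests?
reward = 1.0 if passes_tests(repaired) else 0.0
```

The appeal is that the breaker generates unlimited training problems and the test suite grades them automatically, which is what makes the massively parallel sandbox infrastructure the bottleneck rather than labeled data.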
