I taught my 1B to follow instructions. It got worse at following instructions...
Mirrored from r/LocalLLaMA for archival readability. Support the source by reading on the original site.
Same SFT recipe (SlimOrca 50K, LoRA r=16, 1 epoch). Three models trained from scratch at 1B, 2B, and 3B parameters. IFEval before and after:
| Model | Base | After SFT | Delta |
|---|---|---|---|
| 1B | 20.50 | 14.75 | -5.75 |
| 2B | 21.94 | 17.03 | -4.91 |
| 3B | 23.14 | 25.18 | +2.04 |
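For anyone double-checking the table, the deltas are just after-minus-base. A quick sanity-check script (scores copied from the table above):

```python
# Recompute the IFEval deltas from the table as a sanity check.
# Values are (base, after-SFT) scores from the post.
ifeval = {
    "1B": (20.50, 14.75),
    "2B": (21.94, 17.03),
    "3B": (23.14, 25.18),
}

def delta(base: float, after: float) -> float:
    """Signed change in IFEval score after SFT (negative = regression)."""
    return round(after - base, 2)

for size, (base, after) in ifeval.items():
    d = delta(base, after)
    tag = "regressed" if d < 0 else "improved"
    print(f"{size}: {base} -> {after} ({d:+.2f}, {tag})")
```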
OK, so SFT is supposed to teach instruction-following. Thing is, the 1B actually unlearned it. The 2B was slightly less bad. The 3B finally read the room.
Setups were slightly different: 3B used lr=5e-5, the others used 2e-4. So maybe it's capacity, maybe it's the gentler LR. I'll re-run the 2B at 5e-5 to find out.
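For reference, the recipe above maps roughly onto a standard trl/peft config. This is a sketch, not the OP's actual script: only SlimOrca 50K, LoRA r=16, 1 epoch, and the two learning rates are stated in the post; the alpha, dropout, target modules, and batch sizes below are my own assumed defaults.

```python
# Config fragment approximating the stated SFT recipe (Hugging Face trl + peft).
# Stated in the post: SlimOrca 50K, LoRA r=16, 1 epoch, lr=2e-4 (1B/2B)
# vs lr=5e-5 (3B). Everything else here is an assumed placeholder.
from peft import LoraConfig
from trl import SFTConfig

peft_config = LoraConfig(
    r=16,                     # stated in the post
    lora_alpha=32,            # assumed (common 2*r convention)
    lora_dropout=0.05,        # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

train_args = SFTConfig(
    output_dir="sft-slimorca-50k",
    num_train_epochs=1,              # stated
    learning_rate=2e-4,              # stated: 2e-4 for 1B/2B; 5e-5 for 3B
    per_device_train_batch_size=8,   # assumed
    gradient_accumulation_steps=4,   # assumed
)
```

The planned 2B re-run would change only `learning_rate=5e-5`, which is what makes it a clean capacity-vs-LR ablation.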
Before I burn the compute:
- Anyone else seen IFEval regress after SFT on small models?
- Is this a known thing I missed?
- Best guess on mechanism?
Receipts available if anyone wants to dig in.