r/LocalLLaMA

I taught my 1B to follow instructions. It got worse at following instructions...


Same SFT recipe for all three: SlimOrca 50K, LoRA r=16, 1 epoch. The models were trained from scratch at 1B, 2B, and 3B parameters. IFEval before and after:

Model   Base    After SFT   Delta
1B      20.50   14.75       -5.75
2B      21.94   17.03       -4.91
3B      23.14   25.18       +2.04

OK, so SFT is supposed to teach instruction-following. Thing is, the 1B actually unlearned it. The 2B was slightly less bad. The 3B finally read the room.

One confound: the setups were slightly different. The 3B used lr=5e-5, the others used 2e-4. So maybe it's capacity, maybe it's just the gentler LR. I'll re-run the 2B at 5e-5 to find out.
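
For concreteness, here's roughly what the recipe looks like in TRL + PEFT. It's a sketch, not my exact script: the base checkpoint name, lora_alpha, dropout, and batch size are placeholders, and SlimOrca's ShareGPT-style rows get remapped into the messages format SFTTrainer expects.

```python
# Rough approximation of the recipe with TRL + PEFT -- a sketch, not my exact
# script. MODEL_ID, lora_alpha, dropout, and batch size are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

MODEL_ID = "my-base-1b"  # placeholder; swap in the 1B/2B/3B base checkpoint

# SlimOrca ships ShareGPT-style "conversations"; SFTTrainer wants a "messages"
# column of {"role", "content"} dicts, so remap the keys first.
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def to_messages(example):
    return {"messages": [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in example["conversations"]
    ]}

dataset = (
    load_dataset("Open-Orca/SlimOrca", split="train")
    .shuffle(seed=42)
    .select(range(50_000))          # the 50K subset
    .map(to_messages, remove_columns=["conversations"])
)

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         task_type="CAUSAL_LM")  # alpha/dropout assumed

args = SFTConfig(
    output_dir="sft-out",
    num_train_epochs=1,
    learning_rate=2e-4,             # 1B/2B runs; the 3B run used 5e-5
    per_device_train_batch_size=4,  # assumed, not stated above
)

trainer = SFTTrainer(model=MODEL_ID, args=args,
                     train_dataset=dataset, peft_config=peft_config)
trainer.train()
```

Note that SFTTrainer applies the tokenizer's chat template to the messages column, so a from-scratch base model needs a chat template configured on its tokenizer.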

Before I burn the compute:

  1. Anyone else seen IFEval regress after SFT on small models?
  2. Is this a known thing I missed?
  3. Best guess on mechanism?

Receipts available if anyone wants to dig in.
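
If you want comparable numbers from your own checkpoints, here's a sketch of the scoring side using EleutherAI's lm-evaluation-harness. The checkpoint names are placeholders, and this may not match my exact eval invocation, but ifeval is the standard task name in the harness.

```python
# Sketch: scoring base vs. SFT checkpoints on IFEval via lm-evaluation-harness.
# Checkpoint names below are placeholders, not my actual paths.
import lm_eval

for ckpt in ["base-1b", "sft-1b"]:
    out = lm_eval.simple_evaluate(
        model="hf",
        model_args=f"pretrained={ckpt}",
        tasks=["ifeval"],
        batch_size=8,
    )
    # The harness reports several IFEval metrics (strict/loose, prompt/inst level).
    print(ckpt, out["results"]["ifeval"])
```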

submitted by /u/GPUburnout
