r/MachineLearning

I Found a Hidden Ratio in Transformers That Predicts Geometric Stability [R]


I analyzed several decoder-only transformer models using Lyapunov spectral analysis and found that the ratio of the spectral norms of the MLP and attention blocks strongly predicts whether a model's representations will collapse to rank 1 by the final layers.
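The "collapse to rank 1" claim can be made concrete by measuring the effective rank of a layer's token activations. This is not the repo's code; it is a minimal NumPy sketch that uses the common spectral-entropy definition of effective rank (the exponential of the entropy of the normalized singular-value spectrum) as a stand-in diagnostic:

```python
import numpy as np

def effective_rank(h, eps=1e-12):
    # Effective rank = exp(entropy of the normalized singular values).
    # A rank-1 (collapsed) activation matrix scores close to 1.
    s = np.linalg.svd(h - h.mean(axis=0), compute_uv=False)
    p = s / (s.sum() + eps)
    p = p[p > eps]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
# Healthy activations: 128 tokens in a 32-dim space, near full rank.
full = rng.normal(size=(128, 32))
# Collapsed activations: every token is a scalar multiple of one direction.
collapsed = np.outer(rng.normal(size=128), rng.normal(size=32))
print(effective_rank(full), effective_rank(collapsed))
```

On the healthy matrix the score is close to the ambient dimension; on the rank-1 matrix it stays near 1, which is the failure mode the post describes.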

Keeping this spectral ratio roughly in the 0.5–2 range appears to keep the model stable through the final layers.
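The ratio itself is cheap to compute per layer. The sketch below is an illustration under my own assumptions, not the repo's implementation: it takes one representative weight matrix per block (an attention output projection and an MLP up-projection, both hypothetical stand-ins) and compares their spectral norms, i.e. largest singular values:

```python
import numpy as np

def spectral_norm(w):
    # Spectral norm = largest singular value of the weight matrix.
    return float(np.linalg.svd(w, compute_uv=False)[0])

def mlp_attn_ratio(w_mlp, w_attn):
    # The post suggests keeping this ratio in roughly [0.5, 2].
    return spectral_norm(w_mlp) / spectral_norm(w_attn)

rng = np.random.default_rng(0)
d = 64
# Stand-in weights with standard 1/sqrt(d) initialization scaling;
# in practice these would come from a trained checkpoint.
w_attn = rng.normal(size=(d, d)) / np.sqrt(d)      # attention projection
w_mlp = rng.normal(size=(4 * d, d)) / np.sqrt(d)   # MLP up-projection
r = mlp_attn_ratio(w_mlp, w_attn)
print(f"spectral ratio: {r:.2f}")
```

A per-layer sweep of this ratio over a checkpoint would show whether any layer drifts outside the suggested 0.5–2 band.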

Paper/Github repo: https://github.com/yousef-rafat/the-1-1-rule

submitted by /u/Otaku_7nfy

