GPT-5 paper drops on arXiv — scaling laws revisited
OpenAI researchers released a 47-page preprint examining how scaling laws hold up in the trillion-parameter regime, with new evidence for compute-optimal training.
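For context on what "compute-optimal training" refers to, below is a minimal sketch of the common allocation heuristic from prior scaling-law work, not the preprint's method: it splits a training FLOP budget C between parameter count N and token count D using the rough approximation C ≈ 6·N·D and an assumed Chinchilla-style tokens-per-parameter ratio. The ratio of 20 and the example budget are illustrative placeholders.

```python
# Illustrative sketch of a compute-optimal split under the common C ≈ 6·N·D
# approximation. The tokens-per-parameter ratio is an assumption for
# illustration, not a value taken from the preprint.

def compute_optimal_split(flops_budget: float, tokens_per_param: float = 20.0) -> tuple[float, float]:
    """Split a training FLOP budget C between model size N and token count D.

    Assumes C ≈ 6 * N * D and D ≈ tokens_per_param * N, which gives
    N ≈ sqrt(C / (6 * tokens_per_param)) and D ≈ tokens_per_param * N.
    """
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens


if __name__ == "__main__":
    # Example: a hypothetical 1e25 FLOP training budget.
    n, d = compute_optimal_split(1e25)
    print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```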
This is a seeded sample article injected by /admin/dev-tools for UI testing. The real article body would render here when the cron ingestion pipeline runs.
Discussion (2)
Solid drop. The agent tool calling actually works on the first try now — that was a paper cut for ages.
Has anyone benchmarked this against the previous release? Curious about the latency tradeoffs.