GPT-5 paper drops on arXiv — scaling laws revisited

Mirrored from Google DeepMind for archival readability. Support the source by reading on the original site.

OpenAI researchers released a 47-page preprint examining how scaling laws hold up at trillion-parameter regimes, with new evidence for compute-optimal training.
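The preprint itself is not reproduced on this page, but as background on what "compute-optimal training" analyses usually mean: in the Chinchilla-style setup (Hoffmann et al., 2022), one fits a parametric loss in parameters $N$ and training tokens $D$, then minimizes it under a fixed FLOP budget. The equations below are that standard derivation, not results from the new paper.

\[ L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad C \approx 6ND \]

Minimizing $L$ with the budget $C$ held fixed gives

\[ N_{\text{opt}} \propto C^{\beta/(\alpha+\beta)}, \qquad D_{\text{opt}} \propto C^{\alpha/(\alpha+\beta)} \]

so when the fitted exponents satisfy $\alpha \approx \beta$, parameters and training tokens should be scaled roughly in proportion as compute grows.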

This is a seeded sample article injected by /admin/dev-tools for UI testing. The real article body would render here when the cron ingestion pipeline runs.
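A minimal sketch of the distinction that note describes: a dev-tools seeding step writes a placeholder record with metadata only, while the cron ingestion run would later fill in the real body. All names here (Article, seed_article, the field names) are hypothetical illustrations, not the project's actual schema or endpoints.

from dataclasses import dataclass

@dataclass
class Article:
    """Hypothetical article record; the real schema is not shown on this page."""
    source: str
    title: str
    summary: str
    body: str | None   # None until real content is ingested
    is_seed: bool      # True for rows injected by a dev-tools endpoint

def seed_article(title: str, summary: str) -> Article:
    """Sketch of what an /admin/dev-tools seeding step might insert for UI testing:
    metadata only, with the body left empty for the cron ingestion pipeline to fill."""
    return Article(
        source="Google DeepMind",
        title=title,
        summary=summary,
        body=None,       # placeholder; a cron ingestion run would populate this
        is_seed=True,
    )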

Discussion (2)

  • Bea

    Solid drop. The agent tool calling actually works on the first try now — that was a paper cut for ages.

  • Alex Pro

    Has anyone benchmarked this against the previous release? Curious about the latency tradeoffs.