Ars Technica — AI · · 1 min read

Spooked by Mythos, Trump suddenly realized AI safety testing might be good

This week, the Trump administration backpedaled, signing agreements with Google DeepMind, Microsoft, and xAI to run government safety checks on the firms' frontier AI models both before and after release.

Previously, Donald Trump had cast aside the Biden-era policy, dismissing voluntary safety checks as overregulation that blocked unbridled innovation. Soon after taking office, he went a step further, rebranding the US AI Safety Institute as the Center for AI Standards and Innovation (CAISI) and dropping "safety" from the name in a pointed jab at Joe Biden.

But after Anthropic announced that it would be too risky to release its latest Claude Mythos model—fearing that bad actors might exploit its advanced cybersecurity capabilities—Trump is suddenly concerned about AI safety. According to White House National Economic Council Director Kevin Hassett, Trump may soon issue an executive order mandating government testing of advanced AI systems before release, Fortune reported.
