Human-level performance via ML was *not* proven impossible with complexity theory [D]
Van Rooij, Guest, Adolfi, Kolokolova, and Rich claimed, in a 2024 paper in Computational Brain & Behavior, to have proven that AGI via ML is impossible. The basic idea was to reduce a known NP-hard problem to the problem of learning a human-level classifier from data, so that a tractable learner would yield a tractable algorithm for the NP-hard problem. The purported result, which the authors call the "Ingenia Theorem", made some noise on the internet, including here.
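For anyone who doesn't live in complexity theory, the intended shape of the argument is the standard hardness-by-reduction schema sketched below. HARD, LEARN, and f are my placeholder names for illustration, not the paper's notation.

```latex
% Hardness-by-reduction schema (placeholder names, not the paper's notation).
% HARD  : a problem already known to be NP-hard.
% LEARN : the problem of producing a human-level classifier from data.
% A polynomial-time many-one reduction f maps instances of HARD to
% instances of LEARN while preserving yes/no answers:
\[
  x \in \textsc{Hard} \iff f(x) \in \textsc{Learn},
  \qquad f \in \mathrm{poly\text{-}time}.
\]
% Consequence: a polynomial-time solver for LEARN would solve HARD in
% polynomial time, so (assuming P != NP) no such solver exists:
\[
  \textsc{Hard} \le_p \textsc{Learn}
  \;\Longrightarrow\;
  \bigl(\textsc{Learn} \in \mathrm{P} \Rightarrow \textsc{Hard} \in \mathrm{P}\bigr).
\]
```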
My paper showing that the proof is irreparably broken is now also out in CBB (ungated preprint here).
The basic issue is that "human-level classifier" is not mathematically defined, which the authors solve by ... never defining it. They have a construct that corresponds to "the distribution of human situation-behavior tuples" when they introduce the problem, but that construct gets swapped out for "all polytime-sampleable distributions" when it comes time to do the formal proof. This means that if you find-and-replace "human situation-behavior tuples" with "ImageNet inputs/labels", the paper also proves that learning to classify ImageNet is intractable.
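Schematically, the quantifier swap looks like the sketch below. The symbols D_human, D_ImageNet, Delta, and Learn are my notation for illustration, not either paper's.

```latex
% The quantifier swap, in my notation (not either paper's).
% D_human    : the specific distribution of human situation-behavior tuples.
% D_ImageNet : the specific distribution of ImageNet inputs/labels.
% \Delta     : the class of all polytime-sampleable distributions.
% What the informal claim needs: learning under D_human itself is hard.
\[
  \textsc{Learn}(D_{\mathrm{human}}) \text{ is intractable}.
\]
% What the worst-case reduction actually delivers: some adversarial member
% of \Delta encodes an NP-hard problem.
\[
  \exists D \in \Delta :\; \textsc{Learn}(D) \text{ is intractable}
  \quad (\text{assuming } \mathrm{P} \neq \mathrm{NP}).
\]
% Since D_ImageNet sits inside \Delta on exactly the same grounds as
% D_human, the identical argument "proves" ImageNet learning intractable.
```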
Blogpost discussing similar attempts, from Penrose to Chomsky, here.