Hugging Face Daily Papers · 4 min read

WildTableBench: Benchmarking Multimodal Foundation Models on Table Understanding In the Wild

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

arxiv:2605.01018


Published on May 1 · Submitted by HJZ on May 15
Authors: Junzhe Huang, Xiaoxiao Sun, Yan Yang, Yuxuan Hou, Ruotian Zhang, Sirui Li, Hehe Fan, Serena Yeung-Levy, Xin Yu

Abstract

WildTableBench is introduced as the first question-answering benchmark for real-world table images, revealing significant challenges in structural perception and numerical reasoning for existing multimodal models.

AI-generated summary

Using multimodal foundation models to analyze table images is a high-value yet challenging application in consumer and enterprise scenarios. Despite its importance, current evaluations rely largely on structured-text tables or clean rendered images, leaving the visual complexity of in-the-wild table images underexplored. Such images feature varied layouts and diverse domains that demand sophisticated structural perception and numerical reasoning. To bridge this gap, we introduce WildTableBench, the first question-answering benchmark for naturally occurring table images from real-world settings. WildTableBench comprises 402 high-information-density table images collected from online forums and websites across diverse domains, together with 928 manually annotated and verified questions spanning 17 subtypes across five categories. We evaluate 21 frontier proprietary and open-source multimodal foundation models on this benchmark. Only one model exceeds 50% accuracy, while all remaining models range from 4.1% to 49.9%. We further conduct diagnostic analyses to characterize model failures and reveal persistent weaknesses in structural perception and reasoning. These results and analyses provide useful insights into current model capabilities and establish WildTableBench as a valuable diagnostic benchmark for table image understanding.
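
For readers who want to try the benchmark themselves, the sketch below loads the released dataset from the Hub and scores a set of answer strings. This is a minimal illustration, not the paper's official evaluation code: the column names ("question", "answer"), the "test" split name, and the normalized exact-match metric are assumptions about the dataset layout and scoring protocol.

```python
# Minimal sketch: load WildTableBench from the Hugging Face Hub and score
# predictions with normalized exact match.
# Assumptions (not confirmed by this page): the dataset exposes "question" and
# "answer" columns, ships a "test" split, and accuracy is exact match after
# light normalization.
from datasets import load_dataset


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting differences are not penalized."""
    return " ".join(str(text).lower().split())


def accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that exactly match their reference answer."""
    matches = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return matches / len(references)


if __name__ == "__main__":
    # Dataset repo taken from the paper's project page on the Hub.
    ds = load_dataset("jzhuang/WildTableBench", split="test")
    references = [ex["answer"] for ex in ds]
    predictions = ["" for _ in ds]  # replace with real model outputs
    print(f"Accuracy: {accuracy(predictions, references):.1%}")
```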

Community

Paper author · Paper submitter · about 5 hours ago

We introduce WildTableBench, the first QA benchmark for evaluating multimodal foundation models on naturally occurring table images collected from real-world web sources (Reddit, Pinterest, etc.). Unlike prior benchmarks built on structured text or clean rendered images, WildTableBench features 402 real-world table images (screenshots, scans, and photos) with 928 manually annotated questions across 17 subtypes in 5 categories — covering numerical reasoning, fact verification, cell locating, hypothetical reasoning, and color-based reasoning.
We evaluate 21 frontier models (GPT-5.2, Gemini-3-Pro, Claude Sonnet 4.6, Qwen3-VL, Kimi K2.5, GLM-4.6V, etc.). The best model (Gemini-3-Pro) achieves only 67.9% accuracy; all others score below 50%. WildTableBench reveals persistent gaps in structural perception and reasoning that existing benchmarks miss, establishing it as a rigorous diagnostic tool for real-world table understanding.
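
As a rough illustration of how a proprietary model would be queried in such an evaluation, the sketch below sends one table image plus one question to an OpenAI-compatible chat-completions endpoint. The model name is a placeholder and the prompt wording is an assumption; the authors' exact prompts and decoding settings are not described on this page.

```python
# Sketch of a single table-image QA call, as used when sweeping a benchmark
# over a multimodal model served behind an OpenAI-compatible API.
# Assumptions: the endpoint accepts base64 image inputs; the default model name
# is a placeholder, not the exact checkpoint evaluated in the paper.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_table_question(image_path: str, question: str, model: str = "gpt-4o") -> str:
    """Send one table image and one question; return the model's answer text."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": f"{question}\nAnswer concisely."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()
```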


Models citing this paper 0

No model linking this paper


Datasets citing this paper 1

jzhuang/WildTableBench

Spaces citing this paper 0

No Space linking this paper


Collections including this paper 0

No Collection including this paper


