RealICU: Do LLM Agents Understand Long-Context ICU Data? A Benchmark Beyond Behavior Imitation
Authors: Chengzhi Shen, Weixiang Shen, Tobias Susetzky, Chen Chen, Jun Li, Yuyuan Liu, Xuepeng Zhang, Zhenyu Gong, Daniel Rueckert, Jiazhen Pan (Technical University of Munich)
Abstract
Intensive care units (ICUs) generate long, dense, and evolving streams of clinical information, in which physicians must repeatedly reassess patient states under time pressure, underscoring a clear need for reliable AI decision support. Existing ICU benchmarks typically treat historical clinician actions as ground truth. However, these actions are made under incomplete information and limited temporal context of the underlying patient state, and may therefore be suboptimal, making it difficult to assess the true reasoning capabilities of AI systems. We introduce RealICU, a hindsight-annotated benchmark for evaluating large language models (LLMs) under realistic ICU conditions, where labels are created after senior physicians review the full patient trajectory. We formulate four physician-motivated tasks: assessing Patient Status, Acute Problems, Recommended Actions, and Red Flag actions that risk unsafe outcomes. We partition each trajectory into 30-minute windows and release two datasets: RealICU-Gold, with 930 annotated windows from 94 MIMIC-IV patients, and RealICU-Scale, with 11,862 windows extended by Oracle, a physician-validated LLM hindsight labeler. Existing LLMs, including memory-augmented ones, perform poorly on RealICU, exposing two failure modes: a recall-safety tradeoff for clinical recommendations, and an anchoring bias toward early interpretations of the patient. We further introduce ICU-Evo, a structured-memory agent that improves long-horizon reasoning but does not fully eliminate safety failures. Together, RealICU provides a clinically grounded testbed for measuring and improving AI sequential decision support in high-stakes care. Project page: https://chengzhi-leo.github.io/RealICU-Bench/
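The 30-minute windowing described in the abstract can be sketched as follows. This is a minimal illustration of partitioning a timestamped ICU event stream into fixed 30-minute windows, not the benchmark's actual preprocessing code; the `partition_into_windows` helper and the sample events are hypothetical.

```python
from datetime import datetime, timedelta

def partition_into_windows(events, window_minutes=30):
    """Group timestamped events into fixed-size windows.

    `events` is a list of (datetime, payload) tuples sorted by time.
    Returns a list of event lists, one per window, starting from the
    timestamp of the first event.
    """
    if not events:
        return []
    start = events[0][0]
    width = timedelta(minutes=window_minutes)
    windows = []
    for ts, payload in events:
        idx = int((ts - start) // width)  # which window this event falls in
        while len(windows) <= idx:       # pad empty windows if there is a gap
            windows.append([])
        windows[idx].append(payload)
    return windows

# Hypothetical sample trajectory: two vitals in the first half hour,
# one lab value in the next.
events = [
    (datetime(2026, 1, 1, 0, 5), "HR 92"),
    (datetime(2026, 1, 1, 0, 25), "MAP 64"),
    (datetime(2026, 1, 1, 0, 40), "lactate 2.1"),
]
print(partition_into_windows(events))  # [['HR 92', 'MAP 64'], ['lactate 2.1']]
```

In this framing, each window would then be presented to the model as one reassessment step, with the hindsight labels attached per window.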