Hugging Face Daily Papers · 4 min read

PersonalAI 2.0: Enhancing knowledge graph traversal/retrieval with planning mechanism for Personalized LLM Agents

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

arxiv:2605.13481

PersonalAI 2.0: Enhancing knowledge graph traversal/retrieval with planning mechanism for Personalized LLM Agents

Published on May 13 · Submitted by Mikhail Menschikov on May 14
Authors: Mikhail Menschikov, Matvey Iskornev, Alexander Kharitonov, Alina Bogdanova, Mikhail Belkin, Ekaterina Lisitsyna, Artyom Sosedka, Victoria Dochkina, Ruslan Kostoev, Ilia Perepechkin, Evgeny Burnaev
Abstract

PersonalAI 2.0 enhances LLM-based systems through external knowledge graph integration with dynamic multistage query processing and adaptive information search mechanisms.

AI-generated summary

We introduce PersonalAI 2.0 (PAI-2), a novel framework designed to enhance large language model (LLM) based systems through the integration of external knowledge graphs (KG). The proposed approach addresses key limitations of existing Graph Retrieval-Augmented Generation (GraphRAG) methods by incorporating a dynamic, multistage query-processing pipeline. Central to the PAI-2 design is its ability to perform adaptive, iterative information search guided by extracted entities, matched graph vertices, and generated clue-queries. Evaluation over six benchmarks (Natural Questions, TriviaQA, HotpotQA, 2WikiMultihopQA, MuSiQue, and DiaASQ) demonstrates improved factual correctness of generated answers compared to analogous methods (LightRAG, RAPTOR, and HippoRAG 2). PAI-2 achieves a 4% average gain by LLM-as-a-Judge across four benchmarks, reflecting its effectiveness in reducing hallucination rates and increasing precision. We show that graph traversal algorithms (e.g., BeamSearch, WaterCircles) yield results superior to a standard flat retriever by 6% on average, while enabling the search-plan enhancement mechanism yields an 18% boost over disabling it, as measured by LLM-as-a-Judge across six datasets. In addition, an ablation study reveals that PAI-2 achieves the SOTA result on the MINE-1 benchmark, reaching an 89% information-retention score using LLMs from the 7-14B tier. Collectively, these findings underscore the potential of PAI-2 to serve as a foundation for next-generation personalized AI applications requiring scalable, context-aware knowledge representation and reasoning capabilities.
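The abstract's retrieval loop (match query entities to graph vertices, then expand outward with a beam search that keeps only the best-scoring partial paths) can be sketched in miniature. Everything below is a hypothetical illustration, not the paper's implementation: the toy graph, the scoring, and the function names are all assumptions.

```python
# Hypothetical sketch of entity-guided KG retrieval via beam search,
# in the spirit of the pipeline the abstract describes. The graph,
# scores, and API are illustrative only.
from typing import Dict, List, Tuple

# Toy knowledge graph: vertex -> list of (neighbor, edge_fact, relevance).
KG: Dict[str, List[Tuple[str, str, float]]] = {
    "Alice":  [("Paris", "Alice lives in Paris", 0.9),
               ("Bob", "Alice works with Bob", 0.4)],
    "Paris":  [("France", "Paris is the capital of France", 0.8)],
    "Bob":    [("Berlin", "Bob lives in Berlin", 0.7)],
    "France": [],
    "Berlin": [],
}

def beam_search_retrieve(seeds: List[str], beam_width: int = 2,
                         max_hops: int = 2) -> List[str]:
    """Expand from matched seed vertices, keeping the top-scoring
    partial paths at each hop; return the facts along surviving paths."""
    # Each beam entry: (negated cumulative score, current vertex, facts so far).
    beam = [(0.0, seed, []) for seed in seeds]
    collected: List[str] = []
    for _ in range(max_hops):
        candidates = []
        for score, vertex, facts in beam:
            for nbr, fact, s in KG.get(vertex, []):
                candidates.append((score - s, nbr, facts + [fact]))
        if not candidates:
            break
        candidates.sort(key=lambda c: c[0])  # most relevant first
        beam = candidates[:beam_width]       # prune to the beam width
        for _, _, facts in beam:
            collected.extend(f for f in facts if f not in collected)
    return collected

facts = beam_search_retrieve(["Alice"])
print(facts)
```

In the real system each hop would also generate clue-queries to rescore edges with an LLM; here the static relevance weights stand in for that step.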

Community

Paper author and submitter posted a figure:

[Figure: PersonalAI 2.0 Medium QA pipeline]


Get this paper in your agent:

hf papers read 2605.13481
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2605.13481 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2605.13481 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2605.13481 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.

Discussion (0)

