SafeHarbor: Hierarchical Memory-Augmented Guardrail for LLM Agent Safety
Published on May 7
· Submitted by ZheLiu on May 14
AI-generated summary

SafeHarbor is a novel framework for LLM agents that establishes precise decision boundaries through context-aware defense rules, featuring a hierarchical memory system and a self-evolution mechanism to balance safety and utility.

Abstract
With the rapid evolution of foundation models, Large Language Model (LLM) agents have demonstrated increasingly powerful tool-use capabilities. However, this proficiency introduces significant security risks, as malicious actors can manipulate agents into executing tools to generate harmful content. While existing defensive mechanisms are effective, they frequently suffer from the over-refusal problem, where increased safety strictness compromises the agent's utility on benign tasks. To mitigate this trade-off, we propose SafeHarbor, a novel framework designed to establish precise decision boundaries for LLM agents. Unlike static guidelines, SafeHarbor extracts context-aware defense rules through enhanced adversarial generation. We design a local hierarchical memory system for dynamic rule injection, offering a training-free, efficient, and plug-and-play solution. Furthermore, we introduce an information entropy-based self-evolution mechanism that continuously optimizes the memory structure through dynamic node splitting and merging. Extensive experiments demonstrate that SafeHarbor achieves state-of-the-art performance on both ambiguous benign tasks and explicit malicious attacks, notably attaining a peak benign utility of 63.6% on GPT-4o while maintaining a robust refusal rate exceeding 93% against harmful requests. The source code is publicly available at https://github.com/ljj-cyber/SafeHarbor.
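The abstract names two concrete mechanisms: a hierarchical memory that injects defense rules at inference time, and an information entropy criterion that splits and merges memory nodes as rules accumulate. As a rough, non-authoritative illustration of how such a criterion could work, the Python sketch below splits a node whose rule categories are too mixed and merges children whose combined distribution is nearly pure. Every name and threshold here (`MemoryNode`, `SPLIT_THRESHOLD`, `MERGE_THRESHOLD`, `retrieve_rules`) is a hypothetical stand-in for exposition; the authors' actual implementation is in the linked repository.

```python
import math
from dataclasses import dataclass, field

# Illustrative sketch only: a minimal entropy-guided hierarchical rule memory.
# All names and thresholds are assumptions, not the paper's implementation.

SPLIT_THRESHOLD = 1.5  # assumed: split a node whose category mix is this diverse
MERGE_THRESHOLD = 0.5  # assumed: merge children whose combined mix is this pure


@dataclass
class MemoryNode:
    rules: list = field(default_factory=list)     # (category, rule_text) pairs
    children: list = field(default_factory=list)  # child MemoryNodes

    def entropy(self) -> float:
        """Shannon entropy of the rule-category distribution stored here."""
        if not self.rules:
            return 0.0
        counts: dict = {}
        for category, _ in self.rules:
            counts[category] = counts.get(category, 0) + 1
        total = len(self.rules)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def evolve(self) -> None:
        """One self-evolution pass: split diverse leaves, merge near-pure children."""
        if not self.children and self.entropy() > SPLIT_THRESHOLD:
            by_category: dict = {}
            for category, rule in self.rules:
                by_category.setdefault(category, []).append((category, rule))
            self.children = [MemoryNode(rules=grp) for grp in by_category.values()]
            self.rules = []
        for child in self.children:
            child.evolve()
        # Merge only one level of leaf children, to keep the sketch simple.
        if self.children and all(not c.children for c in self.children):
            combined = [r for child in self.children for r in child.rules]
            if MemoryNode(rules=combined).entropy() < MERGE_THRESHOLD:
                self.rules, self.children = combined, []

    def retrieve_rules(self, category: str) -> list:
        """Collect rules matching a category, for injection into the agent prompt."""
        hits = [text for cat, text in self.rules if cat == category]
        for child in self.children:
            hits.extend(child.retrieve_rules(category))
        return hits
```

In this toy version, adversarially generated rules would be appended as (category, text) pairs, `evolve()` would run periodically, and `retrieve_rules()` would feed the matched rules into the agent's system prompt. How SafeHarbor actually extracts, indexes, and injects rules is described in the paper and repository.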
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:

* [TwinGate: Stateful Defense against Decompositional Jailbreaks in Untraceable Traffic via Asymmetric Contrastive Learning](https://huggingface.co/papers/2604.27861) (2026)
* [Invisible Threats from Model Context Protocol: Generating Stealthy Injection Payload via Tree-based Adaptive Search](https://huggingface.co/papers/2603.24203) (2026)
* [Your Agent is More Brittle Than You Think: Uncovering Indirect Injection Vulnerabilities in Agentic LLMs](https://huggingface.co/papers/2604.03870) (2026)
* [Disentangling Intent from Role: Adversarial Self-Play for Persona-Invariant Safety Alignment](https://huggingface.co/papers/2605.01899) (2026)
* [PlanGuard: Defending Agents against Indirect Prompt Injection via Planning-based Consistency Verification](https://huggingface.co/papers/2604.10134) (2026)
* [SafeSeek: Universal Attribution of Safety Circuits in Language Models](https://huggingface.co/papers/2603.23268) (2026)
* [Transient Turn Injection: Exposing Stateless Multi-Turn Vulnerabilities in Large Language Models](https://huggingface.co/papers/2604.21860) (2026)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend