BAPO: Boundary-Aware Policy Optimization for Reliable Agentic Search

Authors: Shiyu Liu, Yongjing Yin, Jianhao Yan, Yunbo Tang, Qinggang Zhang, Bei Li, Xin Chen, Jingang Wang, Xunliang Cai, Jinsong Su

arXiv ID: 2601.11037
Published: January 16, 2026
Category: AI Safety & Alignment
Hugging Face Likes: 13
Comments: 3

Abstract

RL-based agentic search enables LLMs to solve complex questions through dynamic planning and external search. While agent policies optimized via large-scale reinforcement learning significantly improve accuracy, we identify a critical gap in reliability: these agents fail to recognize their reasoning boundaries and rarely admit "I DON'T KNOW" (IDK), even when evidence is insufficient or reasoning has reached its limit. This lack of reliability often produces plausible but untrustworthy answers, introducing significant risks in many real-world scenarios. To address this, we propose Boundary-Aware Policy Optimization (BAPO), a novel RL framework designed to cultivate reliable boundary awareness without compromising accuracy. BAPO introduces two key components: (i) a group-based boundary-aware reward that encourages an IDK response only when reasoning reaches its limit, and (ii) an adaptive reward modulator that strategically suspends this reward during early exploration, preventing the model from exploiting IDK as a shortcut. Extensive experiments on four benchmarks demonstrate that BAPO substantially enhances the overall reliability of agentic search.
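
To make the two components concrete, here is a minimal Python sketch of how a group-based boundary-aware reward with an adaptive modulator could look, assuming a GRPO-style setup in which multiple rollouts are sampled per question. All names (Rollout, boundary_aware_rewards, warmup_steps) and the specific reward values are illustrative assumptions based on the abstract, not the paper's actual formulation.

```python
# Hypothetical sketch of the reward logic described in the abstract.
# Exact reward values, gating schedule, and names are assumptions.

from dataclasses import dataclass

@dataclass
class Rollout:
    answer: str          # final answer text produced by the agent
    is_correct: bool     # whether the answer matches the gold answer
    is_idk: bool         # whether the agent answered "I DON'T KNOW"

def boundary_aware_rewards(group: list[Rollout], step: int,
                           warmup_steps: int = 1000) -> list[float]:
    """Group-based boundary-aware reward with an adaptive modulator.

    For a group of rollouts sampled for the same question, IDK is
    rewarded only when no rollout in the group answers correctly,
    i.e., the question plausibly lies beyond the model's current
    reasoning boundary. During early exploration (step < warmup_steps),
    the IDK reward is suspended so the policy cannot exploit IDK as a
    shortcut instead of learning to search and reason.
    """
    group_solved = any(r.is_correct for r in group)
    idk_reward_active = step >= warmup_steps  # adaptive modulator gate

    rewards = []
    for r in group:
        if r.is_correct:
            rewards.append(1.0)   # correct answer: full reward
        elif r.is_idk and not group_solved and idk_reward_active:
            rewards.append(0.5)   # honest abstention at the boundary
        else:
            rewards.append(0.0)   # wrong answer or premature IDK
    return rewards
```

The key design point the abstract emphasizes is the interaction between the two parts: the group statistic (no rollout solved the question) is what licenses the IDK reward, while the warm-up gate keeps that reward switched off until the policy has first been pushed to explore.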

Keywords

reinforcement learning, agentic search, large language models, boundary-aware policy optimization, group-based boundary-aware reward, adaptive reward modulator, I DON'T KNOW response, reasoning limits, reliability enhancement
