AI Agents

Scaling Agentic Capabilities, Not Context: Efficient Reinforcement Finetuning for Large Toolspaces

Karan Gupta, Pranav Vajreshwari, Yash Pandya, Raghav Magazine, Akshay Nambi, Ahmed Awadallah
Published: March 5, 2026
Authors: 6
Word Count: 14,003
Code: Includes code

Small models match frontier agents by learning smart context control and structured execution instead of pure scale.

Abstract

Agentic systems operating over large tool ecosystems must plan and execute long-horizon workflows under weak or non-verifiable supervision. While frontier models mitigate these challenges through scale and large context budgets, small language models (SLMs) remain brittle: eager tool loading saturates context, execution errors compound over time, and sparse rewards limit learning. We introduce ATLAS, a reinforcement finetuning framework that enables SLMs to operate effectively in large-scale toolspace environments by learning how to acquire context and how to execute actions. Our approach makes two key contributions. First, we treat context control and execution structure as learnable decisions, combining iterative tool loading with programmatic tool orchestration to bound context growth and stabilize long-horizon trajectories. Second, we propose rubric-based reinforcement finetuning, which decomposes task success into structured, task-aligned criteria and enables scalable training using small judge models. Across MCP benchmarks, these design choices yield large and consistent gains over generic RL baselines, allowing a 4B SLM to approach frontier-agent performance under far tighter parameter and context budgets.
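The abstract's first contribution, bounding context growth through iterative tool loading, can be illustrated with a minimal sketch. All names here (`ToolRegistry`, `search`, `load`, `context_size`) are hypothetical and not from the paper; the point is only that schemas from a large toolspace enter the working context one at a time, on demand, rather than being injected eagerly.

```python
# Hypothetical sketch of iterative tool loading: the agent discovers tools by
# name (cheap), then loads only the schemas it needs, so context cost is
# bounded by the loaded subset rather than the full toolspace.

class ToolRegistry:
    """Holds schemas for a large toolspace; only loaded schemas enter context."""

    def __init__(self, schemas):
        self._schemas = schemas  # full toolspace, kept out of the prompt
        self.loaded = {}         # schemas the agent has chosen to load

    def search(self, query):
        # Name-only discovery: returns candidates without pulling schemas in.
        return [name for name in self._schemas if query in name]

    def load(self, name):
        # Pull one schema into the working context on demand.
        self.loaded[name] = self._schemas[name]
        return self._schemas[name]

    def context_size(self):
        # Context cost reflects loaded schemas, not the 1000-tool registry.
        return sum(len(s) for s in self.loaded.values())


schemas = {f"crm.tool_{i}": f"schema for tool {i}" for i in range(1000)}
reg = ToolRegistry(schemas)
hits = reg.search("tool_42")  # discover candidates by name
reg.load(hits[0])             # load only what the task needs
```

Under this discipline, an SLM's context budget scales with the tools a task actually touches, which is the property the abstract attributes to combining iterative loading with programmatic orchestration.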

Key Takeaways

  1. Small language models can match frontier agent performance by learning context acquisition and execution structure rather than relying on scale.

  2. Rubric-based reinforcement finetuning provides richer learning signals than outcome-only rewards for tasks with non-verifiable objectives.

  3. Programmatic tool orchestration combined with iterative tool loading reduces context growth while improving long-horizon task execution stability.
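Takeaway 2 can be made concrete with a small sketch of a rubric-based reward. Everything here is illustrative rather than the paper's implementation: the rubric entries, the `judge(criterion, trajectory)` interface (standing in for a small judge model), and the toy keyword judge are all assumptions.

```python
# Hypothetical sketch of a rubric-based reward: task success is decomposed
# into weighted criteria, and a judge scores each criterion separately,
# yielding a denser signal than a single pass/fail outcome reward.

def rubric_reward(trajectory, rubric, judge):
    """Weighted fraction of rubric criteria the trajectory satisfies."""
    total = sum(weight for _, weight in rubric)
    earned = sum(weight for criterion, weight in rubric
                 if judge(criterion, trajectory))
    return earned / total


rubric = [
    ("loaded only the tools needed for the task", 1.0),
    ("called the flight-search tool with the correct dates", 2.0),
    ("final answer cites the cheapest returned fare", 2.0),
]


def keyword_judge(criterion, trajectory):
    # Toy stand-in for a small judge model: checks for keyword evidence.
    return criterion.split()[0] in trajectory


trajectory = "loaded the planner tool, then called flight search"
reward = rubric_reward(trajectory, rubric, keyword_judge)  # partial credit
```

Because partial progress earns partial reward, a trajectory that loads the right tools and issues the right calls but flubs the final answer still produces a useful gradient, which is what makes sparse, non-verifiable objectives trainable.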

Limitations

  • Evaluation limited to synthetic MCP tasks; real-world deployment performance on diverse production environments remains untested.

  • Rubric generation requires initial frontier model investment; scalability gains depend on rubric reusability across similar task distributions.

Keywords

reinforcement fine-tuning, context control, execution structure, tool orchestration, programmatic tool orchestration, rubric-based reinforcement fine-tuning, task-aligned criteria, agent performance, parameter efficiency, context budget
