
Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents

Wenxuan Ding, Nicholas Tomlin, Greg Durrett
Published: February 18, 2026
Authors: 3
Word Count: 13,602
Code: Included

The Calibrate-Then-Act framework helps LLM agents make better exploration decisions by providing them with explicit cost and probability information.

Abstract

LLMs are increasingly being used for complex problems that are not necessarily resolved in a single response, but require interacting with an environment to acquire information. In these scenarios, LLMs must reason about inherent cost-uncertainty tradeoffs in deciding when to stop exploring and commit to an answer. For instance, on a programming task, an LLM should test a generated code snippet if it is uncertain about the correctness of that code; the cost of writing a test is nonzero, but typically lower than the cost of making a mistake. In this work, we show that we can induce LLMs to explicitly reason about balancing these cost-uncertainty tradeoffs, and thereby explore the environment more optimally. We formalize multiple tasks, including information retrieval and coding, as sequential decision-making problems under uncertainty. Each problem has latent environment state that can be reasoned about via a prior which is passed to the LLM agent. We introduce a framework called Calibrate-Then-Act (CTA), where we feed the LLM this additional context to enable it to act more optimally. This improvement is preserved even under RL training of both the baseline and CTA. Our results on information-seeking QA and on a simplified coding task show that making cost-benefit tradeoffs explicit with CTA can help agents discover more optimal decision-making strategies.
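The tradeoff the abstract describes can be sketched as an expected-cost comparison: pay the exploration cost (e.g., run a test) only when committing under the current belief is expected to cost more. This is a minimal illustration of that decision rule, not the paper's implementation; the function name and the specific costs are hypothetical.

```python
def should_explore(p_correct: float, cost_explore: float, cost_mistake: float) -> bool:
    """Return True if exploring (e.g., writing a test) has lower expected
    cost than committing to an answer with belief p_correct that it is right.

    Hypothetical sketch of the cost-uncertainty tradeoff from the abstract:
    expected cost of committing blindly = (1 - p_correct) * cost_mistake.
    """
    expected_cost_of_committing = (1.0 - p_correct) * cost_mistake
    return cost_explore < expected_cost_of_committing

# A confident agent commits immediately; an uncertain one pays to test first.
print(should_explore(p_correct=0.95, cost_explore=1.0, cost_mistake=10.0))  # False
print(should_explore(p_correct=0.50, cost_explore=1.0, cost_mistake=10.0))  # True
```

Under this view, CTA's contribution is supplying the agent with calibrated values for the prior (`p_correct`) and the costs, so the comparison can actually be made rather than relying on a static exploration habit.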

Key Takeaways

  1. LLM agents cannot naturally learn optimal exploration-exploitation tradeoffs through training alone.

  2. Providing models with explicit probability and cost information enables genuinely optimal decision-making.

  3. Current agentic systems use static exploration patterns regardless of uncertainty or contextual costs.

Limitations

  • LLMs struggle with end-to-end learning of cost-aware exploration without explicit guidance.

  • Static exploration strategies waste resources in some scenarios and gather insufficient information in others.

Keywords

large language models, sequential decision-making, cost-uncertainty tradeoffs, environment exploration, Calibrate-Then-Act framework, information retrieval, coding tasks, latent environment state, prior knowledge, reinforcement learning
