
Code2Math: Can Your Code Agent Effectively Evolve Math Problems Through Exploration?

Dadi Guo, Yuejin Xie, Qingyu Liu, Jiayu Liu, Zhiyuan Fan, Qihan Ren, Shuai Shao, Tianyi Zhou, Dongrui Liu, Yi R. Fung
Published: March 3, 2026
Authors: 10

Abstract

As large language models (LLMs) advance their mathematical capabilities toward the IMO level, the scarcity of challenging, high-quality problems for training and evaluation has become a significant bottleneck. Simultaneously, recent code agents have demonstrated sophisticated skills in agentic coding and reasoning, suggesting that code execution can serve as a scalable environment for mathematical experimentation. In this paper, we investigate the potential of code agents to autonomously evolve existing math problems into more complex variations. We introduce a multi-agent framework designed to perform problem evolution while validating the solvability and increased difficulty of the generated problems. Our experiments demonstrate that, given sufficient test-time exploration, code agents can synthesize new, solvable problems that are structurally distinct from and more challenging than the originals. This work provides empirical evidence that code-driven agents can serve as a viable mechanism for synthesizing high-difficulty mathematical reasoning problems within scalable computational environments. Our data is available at https://github.com/TarferSoul/Code2Math.
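To make the evolve-and-validate idea concrete, the sketch below shows one way such a loop could be organized: an evolver agent proposes a harder variant of a seed problem, a solver agent writes a standalone script for it, and the script's execution serves as a computational check of solvability. This is a hypothetical illustration under assumed names, not the paper's implementation; `call_llm`, `run_python`, `evolve_problem`, the prompts, and the attempt budget are all placeholders.

```python
# Hypothetical evolve-then-validate loop; not the authors' implementation.
# `call_llm` stands in for any chat-completion client; prompts and limits are illustrative.

import subprocess
import tempfile
import textwrap


def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to the underlying LLM."""
    raise NotImplementedError


def run_python(code: str, timeout: int = 30) -> str:
    """Execute generated solver code in a subprocess and return its stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=timeout
    )
    return result.stdout.strip()


def evolve_problem(seed_problem: str, max_attempts: int = 8) -> str | None:
    """Ask an evolver agent for a harder variant, then ask a solver agent to
    verify it by code execution; return the first variant that checks out."""
    for _ in range(max_attempts):
        variant = call_llm(
            "Rewrite this math problem into a strictly harder but still "
            f"solvable variant. Return only the new problem.\n\n{seed_problem}"
        )
        solver_code = call_llm(
            textwrap.dedent(f"""\
                Write a standalone Python script that solves the problem below
                and prints only the final answer.

                {variant}""")
        )
        try:
            answer = run_python(solver_code)
        except subprocess.TimeoutExpired:
            continue  # solver did not terminate; treat the variant as unverified
        if answer:  # a non-empty, reproducible answer counts as "solvable" here
            return variant
    return None  # exploration budget exhausted without a validated variant
```

In this reading, "test-time exploration" corresponds to the attempt budget: more attempts give the agents more chances to land on a variant that is both harder and computationally verifiable.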

Keywords

large language models, mathematical capabilities, code agents, agentic coding, reasoning, problem evolution, solvability, computational environments
