
OdysseyArena: Benchmarking Large Language Models For Long-Horizon, Active and Inductive Interactions

Fangzhi Xu, Hang Yan, Qiushi Sun, Jinyang Wu, Zixian Huang, Muye Huang, Jingyang Gong, Zichen Ding, Kanzhi Cheng, Yian Wang, Xinyu Che, Zeyi Sun, Jian Zhang, Zhangyue Yin, Haoran Luo, Xuanjing Huang, Ben Kao, Jun Liu, Qika Lin
Published: February 5, 2026
Authors: 19
Word count: 16,587
Code: included

Benchmarking LLMs for inductive, long-horizon interactions.

Abstract

The rapid advancement of Large Language Models (LLMs) has catalyzed the development of autonomous agents capable of navigating complex environments. However, existing evaluations primarily adopt a deductive paradigm, where agents execute tasks based on explicitly provided rules and static goals, often within limited planning horizons. Crucially, this neglects the inductive necessity for agents to autonomously discover latent transition laws from experience, which is the cornerstone for enabling agentic foresight and sustaining strategic coherence. To bridge this gap, we introduce OdysseyArena, which re-centers agent evaluation on long-horizon, active, and inductive interactions. We formalize and instantiate four primitives, translating abstract transition dynamics into concrete interactive environments. Building upon this, we establish OdysseyArena-Lite for standardized benchmarking, providing a set of 120 tasks to measure an agent's inductive efficiency and long-horizon discovery. Pushing further, we introduce OdysseyArena-Challenge to stress-test agent stability across extreme interaction horizons (e.g., > 200 steps). Extensive experiments on 15+ leading LLMs reveal that even frontier models exhibit deficiencies in inductive scenarios, revealing a critical bottleneck in the pursuit of autonomous discovery in complex environments. Our code and data are available at https://github.com/xufangzhi/Odyssey-Arena
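The inductive setting the abstract describes can be illustrated with a toy sketch. This is a hypothetical example, not the benchmark's actual API or any of its four primitives: a minimal environment hides a transition law, and a naive agent must induce it purely from interaction by eliminating candidate rules that contradict observed transitions.

```python
# Hypothetical sketch (not OdysseyArena's actual interface): an environment
# with a latent transition law that an agent must induce from interaction.
import random

class LatentRuleEnv:
    """State evolves by a hidden multiplier; the agent only observes outcomes."""
    def __init__(self, seed=0):
        rng = random.Random(seed)
        self.multiplier = rng.choice([2, 3])  # latent transition law
        self.state = 1

    def step(self, action):
        # The agent proposes the next state; reward 1 iff it predicted
        # the hidden dynamics correctly.
        next_state = (self.state * self.multiplier) % 97
        reward = 1 if action == next_state else 0
        self.state = next_state
        return next_state, reward

def induce_rule(env, horizon=20):
    """Naive inductive agent: keep a set of candidate rules and discard
    any rule inconsistent with an observed transition."""
    candidates = {2, 3}
    state = env.state
    steps_to_identify = None
    for t in range(horizon):
        guess = next(iter(candidates))          # act under current belief
        obs, _ = env.step((state * guess) % 97)
        candidates = {m for m in candidates if (state * m) % 97 == obs}
        state = obs
        if len(candidates) == 1 and steps_to_identify is None:
            steps_to_identify = t + 1
    return candidates, steps_to_identify

env = LatentRuleEnv(seed=1)
rules, steps = induce_rule(env)
print(rules, steps)
```

Here `steps_to_identify` plays the role of an inductive-efficiency measure: how many interactions the agent needs before a single transition law remains consistent with its experience. The benchmark's environments are of course far richer, but the loop structure (act, observe, prune hypotheses) is the same.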

Key Takeaways

  1. New benchmark evaluates LLMs on inductive reasoning.
  2. Focuses on long-horizon, active interactions.
  3. Introduces diverse environments for comprehensive evaluation.

Limitations

  • Limited to four specific environments.

  • May not cover all possible real-world scenarios.

Keywords

Large Language Models, autonomous agents, inductive reasoning, long-horizon planning, agent evaluation, transition laws, OdysseyArena, OdysseyArena-Lite, OdysseyArena-Challenge
