
Large Language Model Reasoning Failures

Peiyang Song, Pengrui Han, Noah Goodman

Published: February 5, 2026
Authors: 3
Word Count: 31,420

LLMs face significant, yet overlooked, reasoning failures.

Abstract

Large Language Models (LLMs) have exhibited remarkable reasoning capabilities, achieving impressive results across a wide range of tasks. Despite these advances, significant reasoning failures persist, occurring even in seemingly simple scenarios. To systematically understand and address these shortcomings, we present the first comprehensive survey dedicated to reasoning failures in LLMs. We introduce a novel categorization framework that distinguishes reasoning into embodied and non-embodied types, with the latter further subdivided into informal (intuitive) and formal (logical) reasoning. In parallel, we classify reasoning failures along a complementary axis into three types: fundamental failures intrinsic to LLM architectures that broadly affect downstream tasks; application-specific limitations that manifest in particular domains; and robustness issues characterized by inconsistent performance across minor variations. For each reasoning failure, we provide a clear definition, analyze existing studies, explore root causes, and present mitigation strategies. By unifying fragmented research efforts, our survey provides a structured perspective on systemic weaknesses in LLM reasoning, offering valuable insights and guiding future research towards building stronger, more reliable, and robust reasoning capabilities. We additionally release a comprehensive collection of research works on LLM reasoning failures, as a GitHub repository at https://github.com/Peiyang-Song/Awesome-LLM-Reasoning-Failures, to provide an easy entry point to this area.

Key Takeaways

  1. LLMs struggle with simple reasoning despite high performance.

  2. Informal reasoning failures mirror human cognitive biases.

  3. LLMs exhibit unstable Theory of Mind reasoning.

Limitations

  • Fundamental failures intrinsic to LLM architectures.

  • Inconsistent performance across minor variations.

Keywords

large language models, reasoning capabilities, reasoning failures, embodied reasoning, non-embodied reasoning, informal reasoning, formal reasoning, fundamental failures, application-specific limitations, robustness issues
