
Humans and LLMs Diverge on Probabilistic Inferences

Gaurav Kamath, Sreenath Madathil, Sebastian Schuster, Marie-Catherine de Marneffe, Siva Reddy
Published: February 26, 2026
Authors: 5
Word Count: 9,180

LLMs diverge from humans on probabilistic reasoning despite excelling at deterministic tasks.

Abstract

Human reasoning often involves working over limited information to arrive at probabilistic conclusions. In its simplest form, this involves making an inference that is not strictly entailed by a premise, but rather only likely given the premise. While reasoning LLMs have demonstrated strong performance on logical and mathematical tasks, their behavior on such open-ended, non-deterministic inferences remains largely unexplored. We introduce ProbCOPA, a dataset of 210 handcrafted probabilistic inferences in English, each annotated for inference likelihood by 25--30 human participants. We find that human responses are graded and varied, revealing probabilistic judgments of the inferences in our dataset. Comparing these judgments with responses from eight state-of-the-art reasoning LLMs, we show that models consistently fail to produce human-like distributions. Finally, analyzing LLM reasoning chains, we find evidence of a common reasoning pattern used to evaluate such inferences. Our findings reveal persistent differences between humans and LLMs, and underscore the need to evaluate reasoning beyond deterministic settings.
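The abstract's central claim is that models fail to reproduce human-like judgment distributions. A minimal sketch of how such a comparison could be quantified, using Jensen-Shannon divergence between a graded human distribution and an extreme-clustered model distribution over likelihood bins (the data and bin scheme here are hypothetical, not taken from the paper):

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical data: fraction of annotators choosing each likelihood bin
# (very unlikely ... very likely) for one inference, versus a model that
# concentrates its probability mass at the extremes.
human = [0.05, 0.20, 0.35, 0.30, 0.10]   # graded, centered judgments
model = [0.45, 0.02, 0.01, 0.02, 0.50]   # bimodal, clustered at extremes

print(round(js_divergence(human, model), 3))
```

A divergence near 0 would indicate human-like behavior; values closer to 1 (the base-2 maximum) indicate the extreme-clustering mismatch the paper describes.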

Key Takeaways

  1. Humans make graded probabilistic judgments across a full scale, while LLMs cluster at extremes.

  2. State-of-the-art reasoning LLMs fail to match human judgment distributions on uncertain inferences.

  3. Current LLM evaluation focuses on deterministic tasks, missing everyday probabilistic reasoning.

Limitations

  • The dataset contains only 210 handcrafted inferences, limiting generalizability to broader reasoning domains.

  • The study analyzes only English-language probabilistic inferences; it is unclear whether the findings apply cross-linguistically.

Keywords

probabilistic inferences, reasoning LLMs, logical reasoning, mathematical tasks, human-like distributions, reasoning chains
