A study of 20 LLMs finds they cluster into two groups: rational reasoning models (RMs), which are insensitive to prospect framing and order, and less rational conversational models (CMs), which show human-like biases and a large description-history gap. RMs match a rational-agent baseline on both explicitly described and experience-based prospects. Training for mathematical reasoning is what differentiates RMs from CMs.
Key Points
1. LLMs cluster into reasoning models (RMs) and conversational models (CMs)
2. RMs are rational and insensitive to prospect order, framing, and explanation prompts
3. CMs show human-like sensitivity and a large description-history gap
4. Study spans 20 LLMs, a human experiment, and a rational-agent baseline
5. Training for mathematical reasoning is the key RM-CM differentiator
Impact Analysis
Highlights the need for reasoning-focused training to improve LLM reliability in decisions under uncertainty, and helps practitioners select models for agentic workflows that avoid CM-style biases.
Technical Details
Compares prospect representation (explicit description vs. experience-based sampling) and decision rationale (with vs. without explanation), using paired comparisons of open LLMs alongside human benchmarks.
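To illustrate the description-experience distinction, the minimal sketch below shows one way the same risky prospect could be posed to a model in the two formats. The prompt wording, payoffs, probability, and sample size are illustrative assumptions, not the study's actual materials.

```python
import random

def description_prompt(p: float, win: float, sure: float) -> str:
    """Explicit (description) format: the probability is stated outright."""
    return (
        f"Option A: ${win:.2f} with probability {p:.0%}, otherwise $0.\n"
        f"Option B: ${sure:.2f} for sure.\n"
        "Which option do you choose? Answer A or B."
    )

def experience_prompt(p: float, win: float, sure: float, n_samples: int = 20) -> str:
    """Experience format: the risky option is conveyed only through sampled outcomes."""
    draws = [win if random.random() < p else 0.0 for _ in range(n_samples)]
    history = ", ".join(f"${d:.2f}" for d in draws)
    return (
        f"You sampled Option A {n_samples} times and observed: {history}.\n"
        f"Option B: ${sure:.2f} for sure.\n"
        "Which option do you choose? Answer A or B."
    )

if __name__ == "__main__":
    # Hypothetical prospect: 80% chance of $4 vs. a sure $3.
    print(description_prompt(0.8, 4.0, 3.0))
    print()
    print(experience_prompt(0.8, 4.0, 3.0))
```

A description-history gap would then show up as the same model choosing differently across these two framings of an identical prospect.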