Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Solving Inequality Proofs with Large Language Models

Jiayi Sheng, Luna Lyu, Jikai Jin, Tony Xia, Alex Gu, James Zou, Pan Lu

2025-06-09 · Mathematical Problem-Solving · Relation Prediction

Paper · PDF · Code (official)

Abstract

Inequality proving, crucial across diverse scientific and mathematical fields, tests advanced reasoning skills such as discovering tight bounds and strategic theorem application. This makes it a distinct, demanding frontier for large language models (LLMs), offering insights beyond general mathematical problem-solving. Progress in this area is hampered by existing datasets that are often scarce, synthetic, or rigidly formal. We address this by proposing an informal yet verifiable task formulation, recasting inequality proving into two automatically checkable subtasks: bound estimation and relation prediction. Building on this, we release IneqMath, an expert-curated dataset of Olympiad-level inequalities, including a test set and training corpus enriched with step-wise solutions and theorem annotations. We also develop a novel LLM-as-judge evaluation framework, combining a final-answer judge with four step-wise judges designed to detect common reasoning flaws. A systematic evaluation of 29 leading LLMs on IneqMath reveals a surprising reality: even top models like o1 achieve less than 10% overall accuracy under step-wise scrutiny; this is a drop of up to 65.5% from their accuracy considering only final answer equivalence. This discrepancy exposes fragile deductive chains and a critical gap for current LLMs between merely finding an answer and constructing a rigorous proof. Scaling model size and increasing test-time computation yield limited gains in overall proof correctness. Instead, our findings highlight promising research directions such as theorem-guided reasoning and self-refinement. Code and data are available at https://ineqmath.github.io/.
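The abstract's key move is recasting inequality proving into two automatically checkable subtasks: bound estimation (is the predicted constant the tight bound?) and relation prediction (is the predicted relation symbol correct?). A minimal sketch of what such a final-answer check could look like is below; the function names, answer formats, and relation vocabulary are illustrative assumptions, not the actual IneqMath evaluation code (see https://ineqmath.github.io/ for the official implementation).

```python
from fractions import Fraction

# Hypothetical final-answer checks for the two subtasks described in the
# abstract. Formats and names are assumptions for illustration only.

RELATIONS = {"<", "<=", "=", ">=", ">"}

def check_relation(predicted: str, gold: str) -> bool:
    """Relation prediction: exact match against the gold relation symbol."""
    p = predicted.strip()
    return p in RELATIONS and p == gold.strip()

def check_bound(predicted: str, gold: str) -> bool:
    """Bound estimation: numeric equivalence of the predicted constant.

    Fraction accepts both rational ("1/2") and decimal ("0.5") strings,
    so equivalent answers in different forms compare equal exactly.
    """
    try:
        return Fraction(predicted.strip()) == Fraction(gold.strip())
    except (ValueError, ZeroDivisionError):
        return False

# Usage: a model's answer is graded without any formal proof checker.
assert check_relation(">=", ">=")
assert not check_relation("<", ">=")
assert check_bound("1/2", "0.5")
assert not check_bound("2", "0.5")
```

Note that a check like this only judges final-answer equivalence; the paper's central finding is precisely that such checks overstate ability, which is why its evaluation framework adds four step-wise judges on top of the final-answer judge.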

Related Papers

SPADE: Spatial-Aware Denoising Network for Open-vocabulary Panoptic Scene Graph Generation with Long- and Local-range Context Reasoning (2025-07-08)
EvoAgentX: An Automated Framework for Evolving Agentic Workflows (2025-07-04)
LocationReasoner: Evaluating LLMs on Real-World Site Selection Reasoning (2025-06-16)
TeleMath: A Benchmark for Large Language Models in Telecom Mathematical Problem Solving (2025-06-12)
SwS: Self-aware Weakness-driven Problem Synthesis in Reinforcement Learning for LLM Reasoning (2025-06-10)
Chain-of-Code Collapse: Reasoning Failures in LLMs via Adversarial Prompting in Code Generation (2025-06-08)
MORSE-500: A Programmatically Controllable Video Benchmark to Stress-Test Multimodal Reasoning (2025-06-05)
PoLAR: Polar-Decomposed Low-Rank Adapter Representation (2025-06-03)