Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning

Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, Yongbin Li

Published: 2023-01-31
Tasks: Semantic Parsing · Table-based Fact Verification · Hallucination

Abstract

Table-based reasoning has shown remarkable progress in combining deep models with discrete reasoning, which requires reasoning over both free-form natural language (NL) questions and structured tabular data. However, previous table-based reasoning solutions usually suffer from significant performance degradation on huge evidence (tables). In addition, most existing methods struggle to reason over complex questions since the required information is scattered in different places. To alleviate the above challenges, we exploit large language models (LLMs) as decomposers for effective table-based reasoning, which (i) decompose huge evidence (a huge table) into sub-evidence (a small table) to mitigate the interference of useless information for table reasoning; and (ii) decompose complex questions into simpler sub-questions for text reasoning. Specifically, we first use the LLMs to break down the evidence (tables) involved in the current question, retaining the relevant evidence and excluding the remaining irrelevant evidence from the huge table. In addition, we propose a "parsing-execution-filling" strategy to alleviate the hallucination dilemma of the chain of thought by decoupling logic and numerical computation in each step. Extensive experiments show that our method can effectively leverage decomposed evidence and questions and outperforms the strong baselines on the TabFact, WikiTableQuestions, and FetaQA datasets. Notably, our model outperforms human performance for the first time on the TabFact dataset.
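The core idea of the "parsing-execution-filling" strategy is that the model's reasoning text carries placeholders for numeric values instead of the numbers themselves, and a deterministic executor computes those values over the table. The following is a minimal illustrative sketch of that decoupling, not the paper's implementation: the table, the placeholder syntax (`{{...}}`), and the tiny expression whitelist are all invented here for demonstration.

```python
import re

# Toy sub-table, standing in for the small table produced by evidence
# decomposition from a huge source table (structure assumed for this sketch).
table = {"points": [24, 31, 18]}

# Simulated LLM reasoning step: the logic is stated in text, but the
# arithmetic result is left as an executable placeholder, so the model
# never has to produce the number itself (the source of hallucinations).
llm_step = "The total points are {{sum(points)}}, so the claim is refuted."

def fill(step: str, table: dict) -> str:
    """Execute each {{...}} placeholder over the table and splice the
    computed value back into the reasoning text (the 'filling' phase)."""
    def execute(match: re.Match) -> str:
        expr = match.group(1)
        # A deliberately tiny whitelist of aggregations for the sketch.
        fn, col = re.match(r"(sum|max)\((\w+)\)", expr).groups()
        value = {"sum": sum, "max": max}[fn](table[col])
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", execute, step)

print(fill(llm_step, table))
# -> The total points are 73, so the claim is refuted.
```

Because the number 73 comes from executing `sum` over the table rather than from the language model, the chain of thought stays faithful to the evidence even when the arithmetic would be error-prone for the model.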

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Parsing | WikiTableQuestions | Accuracy (Dev) | 64.8 | Dater |
| Semantic Parsing | WikiTableQuestions | Accuracy (Test) | 65.9 | Dater |
| Table-based Fact Verification | TabFact | Accuracy (Test) | 93 | Dater |

Related Papers

- Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)
- ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way (2025-07-11)
- UQLM: A Python Package for Uncertainty Quantification in Large Language Models (2025-07-08)
- DeepRetro: Retrosynthetic Pathway Discovery using Iterative LLM Reasoning (2025-07-07)
- ReLoop: "Seeing Twice and Thinking Backwards" via Closed-loop Training to Mitigate Hallucinations in Multimodal understanding (2025-07-07)
- The Future is Agentic: Definitions, Perspectives, and Open Challenges of Multi-Agent Recommender Systems (2025-07-02)
- GAF-Guard: An Agentic Framework for Risk Management and Governance in Large Language Models (2025-07-01)
- Where, What, Why: Towards Explainable Driver Attention Prediction (2025-06-29)