Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Generate & Rank: A Multi-task Framework for Math Word Problems

Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, Qun Liu

2021-09-07 · Findings (EMNLP) 2021 · Tasks: Math, Math Word Problem Solving, Language Modelling
Paper · PDF

Abstract

Math word problem (MWP) solving is a challenging and critical task in natural language processing. Many recent studies formalize MWP solving as a generation task and adopt sequence-to-sequence models to transform problem descriptions into mathematical expressions. However, mathematical expressions are sensitive to minor mistakes, while the generation objective does not explicitly penalize such mistakes. To address this limitation, we devise a new ranking task for MWP and propose Generate & Rank, a multi-task framework based on a generative pre-trained language model. Through joint training on generation and ranking, the model learns from its own mistakes and is able to distinguish correct expressions from incorrect ones. Meanwhile, we apply tree-based disturbance specially designed for MWP and an online update to boost the ranker. We demonstrate the effectiveness of the proposed method on benchmark datasets, where it consistently outperforms baselines on all of them. In particular, on the classical Math23k, our method is 7 percentage points (78.4% $\rightarrow$ 85.4%) higher than the state-of-the-art.
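The abstract's pipeline can be sketched in a few lines: a generator proposes candidate expressions, a ranker scores each (problem, expression) pair, and the highest-scoring candidate is returned; "disturbance" of a correct expression yields negative examples for ranker training. Everything below is a stand-in for illustration only — the paper uses a pre-trained seq2seq generator and a jointly trained ranker, and its disturbance operates on expression trees rather than flat token lists; all function names, scores, and the example problem are hypothetical.

```python
import random

OPS = ["+", "-", "*", "/"]

def disturb(expression: str, rng: random.Random) -> str:
    """Create an incorrect variant of a correct expression (a negative
    example for ranker training) by swapping one operator. The paper's
    version perturbs the expression tree; this flat version is a sketch."""
    tokens = expression.split()
    op_positions = [i for i, t in enumerate(tokens) if t in OPS]
    i = rng.choice(op_positions)
    tokens[i] = rng.choice([op for op in OPS if op != tokens[i]])
    return " ".join(tokens)

def generate_candidates(problem: str) -> list[str]:
    """Stand-in for beam search over a seq2seq generator: returns several
    candidate expressions for the problem (hardcoded here)."""
    return ["3 * 4 + 2", "3 * ( 4 + 2 )", "3 + 4 * 2"]

def rank_score(problem: str, expression: str) -> float:
    """Stand-in for the learned ranker's correctness score for a
    (problem, expression) pair (fixed toy scores here)."""
    toy_scores = {"3 * 4 + 2": 0.9, "3 * ( 4 + 2 )": 0.4, "3 + 4 * 2": 0.2}
    return toy_scores.get(expression, 0.0)

def generate_and_rank(problem: str) -> str:
    """Generate candidates, then return the one the ranker scores highest."""
    candidates = generate_candidates(problem)
    return max(candidates, key=lambda e: rank_score(problem, e))

print(generate_and_rank("Tom buys 3 packs of 4 pens plus 2 loose pens."))
# -> 3 * 4 + 2
```

The key design point is that ranking is a discriminative task layered on top of generation: the generator's own beam candidates (plus disturbed variants during training) give the ranker realistic near-miss negatives, which is exactly the error mode the generation objective alone does not penalize.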

Results

Task                            | Dataset | Metric                   | Value | Model
Question Answering              | Math23K | Accuracy (5-fold)        | 84.3  | Generate and Rank
Question Answering              | Math23K | Accuracy (training-test) | 85.4  | Generate and Rank
Math Word Problem Solving       | Math23K | Accuracy (5-fold)        | 84.3  | Generate and Rank
Math Word Problem Solving       | Math23K | Accuracy (training-test) | 85.4  | Generate and Rank
Mathematical Question Answering | Math23K | Accuracy (5-fold)        | 84.3  | Generate and Rank
Mathematical Question Answering | Math23K | Accuracy (training-test) | 85.4  | Generate and Rank
Mathematical Reasoning          | Math23K | Accuracy (5-fold)        | 84.3  | Generate and Rank
Mathematical Reasoning          | Math23K | Accuracy (training-test) | 85.4  | Generate and Rank

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Scaling Up RL: Unlocking Diverse Reasoning in LLMs via Prolonged Training (2025-07-16)