Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Classification-Regression for Chart Comprehension

Matan Levy, Rami Ben-Ari, Dani Lischinski

Published: 2021-11-29
Tasks: Question Answering · Data Visualization · Regression · Chart Question Answering · Classification · Visual Question Answering (VQA)
Links: Paper · PDF · Code (official)

Abstract

Chart question answering (CQA) is a task used for assessing chart comprehension, which is fundamentally different from understanding natural images. CQA requires analyzing the relationships between the textual and the visual components of a chart, in order to answer general questions or infer numerical values. Most existing CQA datasets and models are based on simplifying assumptions that often enable surpassing human performance. In this work, we address this outcome and propose a new model that jointly learns classification and regression. Our language-vision setup uses co-attention transformers to capture the complex real-world interactions between the question and the textual elements. We validate our design with extensive experiments on the realistic PlotQA dataset, outperforming previous approaches by a large margin, while showing competitive performance on FigureQA. Our model is particularly well suited for realistic questions with out-of-vocabulary answers that require regression.
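The abstract describes a model that jointly learns classification (for answers drawn from a fixed vocabulary) and regression (for out-of-vocabulary numerical answers). A minimal sketch of such a hybrid prediction head is shown below; the answer vocabulary, the special `[NUMERIC]` token, and the routing rule are hypothetical illustrations, not the paper's actual implementation.

```python
import math

# Hypothetical answer vocabulary; "[NUMERIC]" marks answers that
# must be produced by the regression head instead of the classifier.
VOCAB = ["yes", "no", "[NUMERIC]"]

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(cls_logits, reg_value):
    """Route between the classification and regression heads.

    If the classifier's top answer is the special [NUMERIC] token,
    return the regression head's scalar output; otherwise return
    the predicted vocabulary answer. (Illustrative routing rule.)
    """
    probs = softmax(cls_logits)
    top = max(range(len(probs)), key=probs.__getitem__)
    if VOCAB[top] == "[NUMERIC]":
        return reg_value
    return VOCAB[top]

print(predict([0.1, 2.0, 0.3], 42.5))  # classifier answers "no"
print(predict([0.1, 0.3, 2.0], 42.5))  # numeric token -> 42.5
```

In this setup, in-vocabulary answers are handled as ordinary classification, while numerical questions fall through to a scalar regressor, which is what allows the model to answer out-of-vocabulary values.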

Results

Task | Dataset | Metric | Value | Model
Visual Question Answering (VQA) | PlotQA-D1 | 1:1 Accuracy | 76.94 | CRCT
Visual Question Answering (VQA) | PlotQA-D2 | 1:1 Accuracy | 34.44 | CRCT
Visual Question Answering (VQA) | FigureQA - test 1 | 1:1 Accuracy | 94.23 | CRCT
Visual Question Answering (VQA) | PlotQA | 1:1 Accuracy | 55.7 | CRCT
Chart Question Answering | PlotQA | 1:1 Accuracy | 55.7 | CRCT

Related Papers

Language Integration in Fine-Tuning Multimodal Large Language Models for Image-Based Regression (2025-07-20)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)