Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LGPMA: Complicated Table Structure Recognition with Local and Global Pyramid Mask Alignment

Liang Qiao, Zaisheng Li, Zhanzhan Cheng, Peng Zhang, ShiLiang Pu, Yi Niu, Wenqi Ren, Wenming Tan, Fei Wu

2021-05-13 · Table Recognition
Paper · PDF · Code · Code (official)

Abstract

Table structure recognition is a challenging task due to the variety of table structures and complicated cell-spanning relations. Previous methods approached the problem from elements at different granularities (rows/columns, text regions) and consequently suffered from issues such as lossy heuristic rules or the neglect of empty-cell division. Based on the characteristics of table structure, we find that obtaining aligned bounding boxes of text regions can effectively preserve the entire relevant range of different cells. However, aligned bounding boxes are hard to predict accurately because of visual ambiguities. In this paper, we aim to obtain more reliable aligned bounding boxes by fully utilizing the visual information from both text regions (via local features) and cell relations (via global features). Specifically, we propose the Local and Global Pyramid Mask Alignment framework, which adopts a soft pyramid mask learning mechanism on both the local and global feature maps. This allows the predicted boundaries of bounding boxes to break through the limitation of the original proposals. A pyramid mask re-scoring module is then integrated to reconcile the local and global information and refine the predicted boundaries. Finally, we propose a robust table structure recovery pipeline to obtain the final structure, in which we also effectively solve the problems of locating and dividing empty cells. Experimental results show that the proposed method achieves competitive and even new state-of-the-art performance on several public benchmarks.
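The core idea of the abstract — fusing a local soft mask (predicted inside a proposal) with the matching crop of a global soft mask, then refining the box from the fused mask — can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the simple weighted average, and the hard threshold are all illustrative assumptions (the paper uses a learned re-scoring module rather than a fixed weight).

```python
def refine_box(local_mask, global_mask, proposal, threshold=0.5, w_local=0.5):
    """Illustrative sketch: fuse local/global soft masks and refine a proposal box.

    local_mask  : 2-D list of floats, soft mask predicted inside the proposal crop
    global_mask : 2-D list of floats, the matching crop of the full-image soft mask
    proposal    : (x0, y0, x1, y1) original proposal in image coordinates
    w_local     : fusion weight (assumed fixed here; LGPMA learns a re-scoring)

    Returns a refined (x0, y0, x1, y1); falls back to the original proposal
    if the fused mask has no pixel above the threshold.
    """
    xs, ys = [], []
    for y, (lrow, grow) in enumerate(zip(local_mask, global_mask)):
        for x, (lv, gv) in enumerate(zip(lrow, grow)):
            # Weighted fusion of the two soft scores for this pixel.
            if w_local * lv + (1.0 - w_local) * gv >= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return proposal
    x0, y0, _, _ = proposal
    # Tight bounding box of the fused mask, shifted back to image coordinates.
    return (x0 + min(xs), y0 + min(ys), x0 + max(xs) + 1, y0 + max(ys) + 1)
```

Because the box is recomputed from the fused mask rather than clipped to the proposal, the refined boundary can extend beyond or shrink inside the original proposal, which is the behavior the abstract describes.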

Results

Task               Dataset    Metric              Value  Model
Table Recognition  PubTabNet  TEDS (all samples)  94.6   LGPMA
Table Recognition  PubTabNet  TEDS-Struct         96.7   LGPMA
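The TEDS (Tree-Edit-Distance-based Similarity) metric reported above scores a predicted table against the ground truth as 1 − TED(T_pred, T_gt) / max(|T_pred|, |T_gt|), where TED is the tree edit distance between the two HTML trees. As a rough, hedged sketch, the shape of the score can be shown with a sequence-level edit distance over HTML tokens; real TEDS operates on trees (and, for the "all samples" variant, also compares cell text), so this is a simplification for illustration only.

```python
def edit_distance(a, b):
    """Levenshtein distance between two token sequences (one-row DP)."""
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution / match
        prev = cur
    return prev[len(b)]

def teds_like(pred_tokens, gt_tokens):
    """TEDS-shaped score: 1 - distance / max(len). 1.0 means a perfect match."""
    if not pred_tokens and not gt_tokens:
        return 1.0
    return 1.0 - edit_distance(pred_tokens, gt_tokens) / max(len(pred_tokens),
                                                             len(gt_tokens))
```

For example, a prediction that drops one `<td>` from a four-token ground-truth row scores 0.75 under this simplified variant; true TEDS would instead charge that omission as a node edit in the tree.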

Related Papers

- Benchmarking Multimodal LLMs on Recognition and Understanding over Chemical Tables (2025-06-13)
- OmniParser V2: Structured-Points-of-Thought for Unified Visual Text Parsing and Its Generality to Multimodal Large Language Models (2025-02-22)
- Enhancing Table Recognition with Vision LLMs: A Benchmark and Neighbor-Guided Toolchain Reasoner (2024-12-30)
- Benchmarking Table Comprehension In The Wild (2024-12-13)
- See then Tell: Enhancing Key Information Extraction with Vision Grounding (2024-09-29)
- PdfTable: A Unified Toolkit for Deep Learning-Based Table Extraction (2024-09-08)
- VRDSynth: Synthesizing Programs for Multilingual Visually Rich Document Information Extraction (2024-07-09)
- The Socface Project: Large-Scale Collection, Processing, and Analysis of a Century of French Censuses (2024-04-29)