Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding

Jiapeng Wang, Lianwen Jin, Kai Ding

2022-02-28 · ACL 2022

Tasks: document understanding, semantic entity labeling, document image classification, key information extraction, key-value pair extraction

Abstract

Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure. Code and model are publicly available at https://github.com/jpWang/LiLT.

Results

| Task                       | Dataset  | Metric            | Value | Model                  |
|----------------------------|----------|-------------------|-------|------------------------|
| Semantic entity labeling   | FUNSD    | F1                | 88.41 | LiLT                   |
| Key Information Extraction | CORD     | F1                | 96.07 | LiLT                   |
| Key Information Extraction | RFUND-EN | Key-value pair F1 | 54.33 | LiLT ([EN-R]_base)     |
| Key Information Extraction | RFUND-EN | Key-value pair F1 | 52.18 | LiLT ([InfoXLM]_base)  |
| Key Information Extraction | SIBR     | Key-value pair F1 | 72.76 | LiLT ([InfoXLM]_base)  |

Related Papers

A Survey on MLLM-based Visually Rich Document Understanding: Methods, Challenges, and Emerging Trends (2025-07-14)
PaddleOCR 3.0 Technical Report (2025-07-08)
GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning (2025-07-01)
Class-Agnostic Region-of-Interest Matching in Document Images (2025-06-26)
DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images (2025-06-26)
Seeing is Believing? Mitigating OCR Hallucinations in Multimodal Large Language Models (2025-06-25)
PP-DocBee2: Improved Baselines with Efficient Data for Multimodal Document Understanding (2025-06-22)
WikiMixQA: A Multimodal Benchmark for Question Answering over Tables and Charts (2025-06-18)