Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

XDoc: Unified Pre-training for Cross-Format Document Understanding

Jingye Chen, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei

2022-10-06 · document understanding · Semantic entity labeling
Paper · PDF · Code (official)

Abstract

The recent surge of pre-training has driven rapid progress in document understanding. The pre-training and fine-tuning framework has been used effectively to tackle texts in various formats, including plain texts, document texts, and web texts. Despite achieving promising performance, existing pre-trained models usually target one specific document format at a time, making it difficult to combine knowledge from multiple document formats. To address this, we propose XDoc, a unified pre-trained model that handles different document formats within a single model. For parameter efficiency, we share backbone parameters across formats, such as the word embedding layer and the Transformer layers, while introducing adaptive layers with lightweight parameters to enhance the distinction across different formats. Experimental results demonstrate that with only 36.7% of the parameters, XDoc achieves comparable or even better performance on a variety of downstream tasks compared with individual pre-trained models, which is cost-effective for real-world deployment. The code and pre-trained models will be publicly available at https://aka.ms/xdoc.
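
The core design the abstract describes (one shared backbone, lightweight format-specific adaptive layers) can be illustrated with a short sketch. Everything below is an assumption for illustration only: the bottleneck adapter design, its placement after the shared embedding, and all sizes are hypothetical rather than the paper's actual configuration; the official implementation is at https://aka.ms/xdoc.

```python
# Hypothetical sketch of the XDoc idea: shared backbone + per-format adapters.
# Module names, sizes, and adapter placement are illustrative assumptions.
import torch
import torch.nn as nn


class AdaptiveLayer(nn.Module):
    """Lightweight per-format adapter (assumed residual bottleneck design)."""

    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck keeps the adapter cheap relative to the backbone.
        return x + self.up(self.act(self.down(x)))


class XDocSketch(nn.Module):
    """Shared embedding + Transformer backbone, one adapter per format."""

    FORMATS = ("plain", "document", "web")  # the formats named in the abstract

    def __init__(self, vocab: int = 30522, hidden: int = 768, layers: int = 12):
        super().__init__()
        # Parameters shared across all document formats.
        self.embed = nn.Embedding(vocab, hidden)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=12, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        # Lightweight format-specific parameters.
        self.adapters = nn.ModuleDict(
            {fmt: AdaptiveLayer(hidden) for fmt in self.FORMATS}
        )

    def forward(self, input_ids: torch.Tensor, fmt: str) -> torch.Tensor:
        # Route through the format's adapter, then the shared backbone.
        x = self.adapters[fmt](self.embed(input_ids))
        return self.backbone(x)


model = XDocSketch()
tokens = torch.randint(0, 30522, (1, 16))
out = model(tokens, fmt="web")  # the same weights serve "plain" and "document"
print(out.shape)  # torch.Size([1, 16, 768])
```

In this toy configuration the three adapters together add well under 1% of the backbone's parameters (compare sum(p.numel() for p in model.adapters.parameters()) against the rest of the model), which illustrates the kind of sharing behind the 36.7% figure quoted above.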

Results

Task                       Dataset   Metric   Value   Model
Semantic entity labeling   FUNSD     F1       89.4    XDoc 1M

Related Papers

A Survey on MLLM-based Visually Rich Document Understanding: Methods, Challenges, and Emerging Trends (2025-07-14)
PaddleOCR 3.0 Technical Report (2025-07-08)
GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning (2025-07-01)
Class-Agnostic Region-of-Interest Matching in Document Images (2025-06-26)
DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images (2025-06-26)
Seeing is Believing? Mitigating OCR Hallucinations in Multimodal Large Language Models (2025-06-25)
PP-DocBee2: Improved Baselines with Efficient Data for Multimodal Document Understanding (2025-06-22)
WikiMixQA: A Multimodal Benchmark for Question Answering over Tables and Charts (2025-06-18)