Zecheng Li, Wengang Zhou, Weichao Zhao, Kepeng Wu, Hezhen Hu, Houqiang Li
Sign language pre-training has gained increasing attention for its ability to enhance performance across various sign language understanding (SLU) tasks. However, existing methods often suffer from a gap between pre-training and fine-tuning, leading to suboptimal results. To address this, we propose Uni-Sign, a unified pre-training framework that eliminates this gap through a large-scale generative pre-training strategy and a novel fine-tuning paradigm. First, we introduce CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985 hours of video paired with textual annotations, which enables effective large-scale pre-training. Second, Uni-Sign unifies SLU tasks by treating downstream tasks as a single sign language translation (SLT) task during fine-tuning, ensuring seamless knowledge transfer between pre-training and fine-tuning. Furthermore, we incorporate a prior-guided fusion (PGF) module and a score-aware sampling strategy to efficiently fuse pose and RGB information, mitigating keypoint inaccuracies and improving computational efficiency. Extensive experiments on multiple SLU benchmarks demonstrate that Uni-Sign achieves state-of-the-art performance across downstream tasks. Dataset and code are available at github.com/ZechengLi19/Uni-Sign.
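To make the score-aware sampling idea concrete, below is a minimal sketch of how per-frame keypoint confidence scores could decide which frames are routed through the expensive RGB branch. This is an illustrative assumption rather than the paper's actual implementation: the function name `score_aware_sampling`, the lowest-confidence top-k routing rule, and the 133-keypoint COCO-WholeBody layout are all hypothetical choices made for the example.

```python
import torch

def score_aware_sampling(conf: torch.Tensor, k: int) -> torch.Tensor:
    """Hypothetical sketch: select the k frames whose mean keypoint
    confidence is lowest, so the RGB branch is only run where the
    pose estimator is least reliable.

    conf: (T, J) tensor of per-joint confidence scores for T frames.
    k:    number of frames to route through the RGB branch.
    """
    frame_scores = conf.mean(dim=1)        # (T,) mean confidence per frame
    _, idx = torch.topk(-frame_scores, k)  # k lowest-confidence frames
    return torch.sort(idx).values          # restore temporal order

# Toy usage: 32 frames, 133 keypoints each (COCO-WholeBody layout, assumed).
conf = torch.rand(32, 133)
rgb_frame_ids = score_aware_sampling(conf, k=8)
print(rgb_frame_ids)  # indices of frames to crop and feed to the RGB encoder
```

The intuition mirrors the abstract: pose features are cheap but unreliable where keypoint confidence is low, so routing only those frames to the RGB pathway addresses keypoint inaccuracies while keeping compute bounded.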
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Isolated Sign Language Recognition | WLASL-100 | Top-1 Accuracy (%) | 92.25 | Uni-Sign |
| Continuous Sign Language Recognition | CSL-Daily | Word Error Rate (WER, %) | 26 | Uni-Sign |
| Isolated Sign Language Recognition | MSASL-1000 | Per-class (P-C) Top-1 Accuracy (%) | 76.97 | Uni-Sign |
| Isolated Sign Language Recognition | MSASL-1000 | Per-instance (P-I) Top-1 Accuracy (%) | 78.16 | Uni-Sign |
| Isolated Sign Language Recognition | WLASL-2000 | Top-1 Accuracy (%) | 63.52 | Uni-Sign |