Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Skeleton Aware Multi-modal Sign Language Recognition

Songyao Jiang, Bin Sun, Lichen Wang, Yue Bai, Kunpeng Li, Yun Fu

2021-03-16 · Skeleton Based Action Recognition · Sign Language Recognition

Paper · PDF · Code (official)

Abstract

Sign language is commonly used by people who are deaf or speech-impaired to communicate, but it requires significant effort to master. Sign Language Recognition (SLR) aims to bridge the gap between sign language users and others by recognizing signs from given videos. It is an essential yet challenging task, since sign language is performed with fast and complex movements of hand gestures, body posture, and even facial expressions. Recently, skeleton-based action recognition has attracted increasing attention due to its independence from subject appearance and background variation. However, skeleton-based SLR is still under exploration due to the lack of annotations on hand keypoints. Some efforts have been made to use hand detectors with pose estimators to extract hand keypoints and learn to recognize sign language via neural networks, but none of them outperforms RGB-based methods. To this end, we propose a novel Skeleton Aware Multi-modal SLR framework (SAM-SLR) to take advantage of multi-modal information towards a higher recognition rate. Specifically, we propose a Sign Language Graph Convolution Network (SL-GCN) to model the embedded dynamics and a novel Separable Spatial-Temporal Convolution Network (SSTCN) to exploit skeleton features. RGB and depth modalities are also incorporated and assembled into our framework to provide global information that is complementary to the skeleton-based methods SL-GCN and SSTCN. As a result, SAM-SLR achieves the highest performance in both the RGB (98.42%) and RGB-D (98.53%) tracks of the 2021 Looking at People Large Scale Signer Independent Isolated SLR Challenge. Our code is available at https://github.com/jackyjsy/CVPR21Chal-SLR
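To make the skeleton-modeling idea concrete, here is a minimal sketch of one spatial graph-convolution step over skeleton keypoints, in the spirit of the SL-GCN described above. All names, shapes, and the toy joint graph are illustrative assumptions, not the authors' implementation (which also includes temporal convolutions, multi-stream features, and attention).

```python
import numpy as np

def normalized_adjacency(edges, num_joints):
    """Symmetrically normalized adjacency: D^{-1/2} (A + I) D^{-1/2}."""
    A = np.eye(num_joints)                      # self-loops keep each joint's own feature
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def graph_conv(X, A_hat, W):
    """One spatial graph-conv layer.

    X: (T, V, C_in) keypoint features over T frames and V joints.
    Aggregates features across neighboring joints via A_hat, then
    projects channels with W (C_in, C_out) -- the spatial half of a
    spatial-temporal GCN block.
    """
    return np.einsum("uv,tvc->tuc", A_hat, X) @ W

# Toy skeleton: 5 joints in a chain (e.g. wrist -> finger joints),
# 4 frames, 3 channels per keypoint (x, y, detection confidence).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
A_hat = normalized_adjacency(edges, num_joints=5)
X = np.random.rand(4, 5, 3)                     # (frames, joints, channels)
W = np.random.rand(3, 8)                        # project 3 -> 8 channels
out = graph_conv(X, A_hat, W)
print(out.shape)                                # (4, 5, 8)
```

Stacking such layers (interleaved with temporal convolutions along the frame axis) is the standard spatial-temporal GCN recipe that SL-GCN builds on.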

Results

Task                      | Dataset    | Metric                   | Value  | Model
Sign Language Recognition | AUTSL      | Rank-1 Recognition Rate  | 0.9853 | SAM-SLR (RGB-D)
Sign Language Recognition | WLASL-2000 | Top-1 Accuracy           | 58.73  | SAM-SLR
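The RGB-D result above comes from assembling several modalities. A common way to do this, and a plausible reading of how such ensembles are combined, is weighted late fusion of per-modality class scores; the sketch below is a hedged illustration with made-up modality names, weights, and logits, not the authors' exact fusion scheme.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse(scores, weights):
    """Weighted sum of per-modality softmax scores; returns the predicted class.

    scores:  dict of modality name -> (num_classes,) logits
    weights: dict of modality name -> scalar ensemble weight
    """
    fused = sum(weights[m] * softmax(scores[m]) for m in scores)
    return int(np.argmax(fused))

# Illustrative 3-class logits from three hypothetical streams.
scores = {
    "skeleton": np.array([2.0, 0.5, 0.1]),
    "rgb":      np.array([1.5, 1.8, 0.2]),
    "depth":    np.array([0.9, 0.4, 0.3]),
}
weights = {"skeleton": 1.0, "rgb": 0.9, "depth": 0.4}
print(fuse(scores, weights))  # → 0
```

Here the skeleton stream's confident vote for class 0 outweighs the RGB stream's mild preference for class 1, which is the point of fusing complementary modalities.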

Related Papers

- Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
- Hierarchical Sub-action Tree for Continuous Sign Language Recognition (2025-06-26)
- Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
- SignBart -- New approach with the skeleton sequence for Isolated Sign language Recognition (2025-06-18)
- SLRNet: A Real-Time LSTM-Based Sign Language Recognition System (2025-06-11)
- Fine-Tuning Video Transformers for Word-Level Bangla Sign Language: A Comparative Analysis for Classification Tasks (2025-06-04)
- 3D Skeleton-Based Action Recognition: A Review (2025-06-01)
- Spatio-Temporal Joint Density Driven Learning for Skeleton-Based Action Recognition (2025-05-29)