SLRNet: A Real-Time LSTM-Based Sign Language Recognition System
Sharvari Kamble
Abstract
Sign Language Recognition (SLR) plays a crucial role in bridging the communication gap between the hearing-impaired community and the hearing population. This paper introduces SLRNet, a real-time, webcam-based ASL recognition system built on MediaPipe Holistic keypoint extraction and Long Short-Term Memory (LSTM) networks. The model processes video streams to recognize both ASL alphabet letters and functional words. With a validation accuracy of 86.7%, SLRNet demonstrates the feasibility of inclusive gesture recognition using only a standard webcam, with no specialized hardware.
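The pipeline the abstract describes (Holistic keypoints fed to an LSTM) is typically implemented by flattening each frame's landmarks into a fixed-length feature vector and stacking a window of frames into a sequence. The sketch below is illustrative, not the authors' code: it assumes MediaPipe Holistic's documented landmark counts (33 pose landmarks with x, y, z, visibility; 468 face and 21 per-hand landmarks with x, y, z) and a hypothetical 30-frame window.

```python
import numpy as np

def flatten_keypoints(pose, face, left_hand, right_hand):
    """Concatenate one frame's Holistic landmarks into a single feature vector.

    Parts that MediaPipe fails to detect (e.g. a hand out of frame) are
    passed as None and replaced with zeros, so every frame keeps the same
    dimensionality: 33*4 + 468*3 + 21*3 + 21*3 = 1662 features.
    """
    pose = pose if pose is not None else np.zeros((33, 4))
    face = face if face is not None else np.zeros((468, 3))
    left_hand = left_hand if left_hand is not None else np.zeros((21, 3))
    right_hand = right_hand if right_hand is not None else np.zeros((21, 3))
    return np.concatenate(
        [pose.ravel(), face.ravel(), left_hand.ravel(), right_hand.ravel()]
    )

# One simulated frame with both hands undetected:
frame = flatten_keypoints(
    np.random.rand(33, 4), np.random.rand(468, 3), None, None
)

# A fixed-length window of frames (here an assumed 30) is the sequence
# the LSTM classifier would consume: shape (timesteps, features).
sequence = np.stack([frame] * 30)
```

In this scheme the LSTM's input shape is `(30, 1662)` per sample, and the zero-filling keeps sequences rectangular even when a hand or the face drops out of view for a few frames.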