Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Improving the Performance of Unimodal Dynamic Hand-Gesture Recognition with Multimodal Training

Mahdi Abavisani, Hamid Reza Vaezi Joze, Vishal M. Patel

Published: 2018-12-14 · CVPR 2019
Tasks: Transfer Learning, Gesture Recognition, Hand Gesture Recognition, Action Recognition, Hand-Gesture Recognition
Links: Paper · PDF · Code

Abstract

We present an efficient approach for leveraging knowledge from multiple modalities when training unimodal 3D convolutional neural networks (3D-CNNs) for dynamic hand gesture recognition. Instead of explicitly combining multimodal information, which is commonplace in many state-of-the-art methods, we propose a different framework in which we embed the knowledge of multiple modalities in individual networks so that each unimodal network achieves improved performance. In particular, we dedicate a separate network to each available modality and train the networks to collaborate, developing common semantics and better representations. We introduce a "spatiotemporal semantic alignment" (SSA) loss to align the content of the features from different networks. In addition, we regularize this loss with our proposed "focal regularization parameter" to avoid negative knowledge transfer. Experimental results show that our framework improves the test-time recognition accuracy of unimodal networks and achieves state-of-the-art performance on various dynamic hand gesture recognition datasets.
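The abstract describes two ingredients: an alignment loss between feature maps of two unimodal 3D-CNNs, and a focal weight that suppresses transfer from a worse-performing network. The sketch below is a minimal, hedged illustration of that idea, not the paper's exact formulation: `ssa_loss`, the MSE-style alignment over normalized features, and the exponential focal gating on the classification-loss gap are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def ssa_loss(feat_a, feat_b, loss_a, loss_b, beta=2.0):
    """Sketch of an SSA-style alignment loss with focal gating.

    feat_a, feat_b: (N, C, T, H, W) feature maps from two unimodal 3D-CNNs.
    loss_a, loss_b: scalar classification losses of the two networks.
    beta: assumed sharpness of the focal gate (hypothetical parameter).
    """
    # Normalize flattened features so alignment compares direction
    # (semantics) rather than raw activation magnitudes.
    a = F.normalize(feat_a.flatten(1), dim=1)
    b = F.normalize(feat_b.flatten(1), dim=1)

    # Align network B toward network A; detach A so gradients only
    # flow into the network being improved.
    align = ((b - a.detach()) ** 2).sum(dim=1).mean()

    # Focal-style regularization (assumption): transfer only when A is
    # doing better than B, with weight growing in the performance gap.
    # This avoids negative transfer from a weaker network.
    gap = (loss_b - loss_a).clamp(min=0.0)
    focal = 1.0 - torch.exp(-beta * gap)
    return focal * align
```

With this gating, the alignment term vanishes when the "teacher" network's classification loss is higher than the student's, which is one simple way to realize the "avoid negative knowledge transfer" behavior the abstract claims.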

Results

Task                  Dataset                      Metric    Value   Model
Hand                  NVGesture                    Accuracy  86.93   MTUT
Hand                  EgoGesture                   Accuracy  93.87   MTUT
Hand                  VIVA Hand Gestures Dataset   Accuracy  86.08   MTUT
Gesture Recognition   NVGesture                    Accuracy  86.93   MTUT
Gesture Recognition   EgoGesture                   Accuracy  93.87   MTUT
Gesture Recognition   VIVA Hand Gestures Dataset   Accuracy  86.08   MTUT

Related Papers

Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation (2025-09-04)
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
Robust-Multi-Task Gradient Boosting (2025-07-15)
Calibrated and Robust Foundation Models for Vision-Language and Medical Image Tasks Under Distribution Shift (2025-07-12)
The Bayesian Approach to Continual Learning: An Overview (2025-07-11)