Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


IoT-Based Real-Time Medical-Related Human Activity Recognition Using Skeletons and Multi-Stage Deep Learning for Healthcare

Subrata Kumer Paul, Abu Saleh Musa Miah, Rakhi Rani Paul, Md. Ekramul Hamid, Jungpil Shin, Md Abdur Rahim

2025-01-13 · Skeleton Based Action Recognition · Human Activity Recognition · Activity Recognition

Paper · PDF · Code

Abstract

The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing medical-related human activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions that are critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA, which is essential for effective patient monitoring. This study proposes a novel HMR method for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by ConvLSTM to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.00% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
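The pipeline described above (per-frame spatial features → ConvLSTM over time → global average pooling → dropout → fully connected classifier) can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the per-frame CNN here is a tiny two-layer stand-in for the paper's EfficientNet (seven MBConv blocks), the ConvLSTM cell is hand-rolled (PyTorch has no built-in one), and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: one convolution computes all four gates."""
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)          # cell state update
        h = o * torch.tanh(c)                  # hidden state (spatial map)
        return h, c

class SkeletonHAR(nn.Module):
    """Per-frame CNN -> ConvLSTM over time -> GAP -> dropout -> FC.
    The CNN below is a toy stand-in for EfficientNet's MBConv stack;
    the classification head follows the abstract's description."""
    def __init__(self, num_classes=10, feat_ch=32, hid_ch=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cell = ConvLSTMCell(feat_ch, hid_ch)
        self.drop = nn.Dropout(0.5)
        self.fc = nn.Linear(hid_ch, num_classes)

    def forward(self, clip):                   # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        feats = self.cnn(clip.reshape(B * T, *clip.shape[2:]))
        feats = feats.reshape(B, T, *feats.shape[1:])
        h = feats.new_zeros(B, self.cell.hid_ch, *feats.shape[-2:])
        c = h.clone()
        for t in range(T):                     # spatio-temporal modelling
            h, c = self.cell(feats[:, t], (h, c))
        pooled = h.mean(dim=(2, 3))            # global average pooling
        return self.fc(self.drop(pooled))

model = SkeletonHAR(num_classes=12)
logits = model(torch.randn(2, 8, 3, 64, 64))   # 2 clips of 8 skeleton frames
print(logits.shape)                            # torch.Size([2, 12])
```

The design point the abstract emphasizes is the split of labour: a lightweight spatial extractor runs independently per frame, and only the compact feature maps are fed to the recurrent stage, which keeps the model tractable for edge deployment.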

Results

Task                         | Dataset | Metric                        | Value | Model
-----------------------------|---------|-------------------------------|-------|------------------------
Video                        | HMDB51  | Average accuracy of 3 splits  | 89.22 | EfficientNetB0-ConvLSTM
Temporal Action Localization | HMDB51  | Average accuracy of 3 splits  | 89.22 | EfficientNetB0-ConvLSTM
Zero-Shot Learning           | HMDB51  | Average accuracy of 3 splits  | 89.22 | EfficientNetB0-ConvLSTM
Activity Recognition         | HMDB51  | Average accuracy of 3 splits  | 89.22 | EfficientNetB0-ConvLSTM
Action Localization          | HMDB51  | Average accuracy of 3 splits  | 89.22 | EfficientNetB0-ConvLSTM
Action Detection             | HMDB51  | Average accuracy of 3 splits  | 89.22 | EfficientNetB0-ConvLSTM
3D Action Recognition        | HMDB51  | Average accuracy of 3 splits  | 89.22 | EfficientNetB0-ConvLSTM
Action Recognition           | HMDB51  | Average accuracy of 3 splits  | 89.22 | EfficientNetB0-ConvLSTM
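The abstract's IoT layer (Raspberry Pi + GSM module sending real-time SMS alerts via Twilio) reduces to a small dispatch rule: notify a caregiver only when a critical MRHA label is detected with sufficient confidence. The sketch below is an assumption-laden illustration, not the paper's code: the `CRITICAL` label set, the 0.9 threshold, and the injected `send_sms` callable are all hypothetical, chosen so the same logic can be tested locally with a stub and wired to Twilio's REST client on the device.

```python
# Hypothetical alert hook for the classifier's per-clip predictions.
CRITICAL = {"falling", "sneezing", "staggering"}  # assumed critical MRHA labels

def dispatch_alert(label, confidence, send_sms, threshold=0.9):
    """Send an SMS only for confident detections of critical activities.

    `send_sms` is injected so the same logic works with a stub in tests,
    with Twilio's REST client, or with a raw GSM-modem wrapper.
    """
    if label in CRITICAL and confidence >= threshold:
        send_sms(f"ALERT: '{label}' detected (confidence {confidence:.0%}).")
        return True
    return False

# Stub sender for local testing; on the Raspberry Pi this would instead call
# twilio.rest.Client(sid, token).messages.create(body=..., from_=..., to=...).
sent = []
dispatch_alert("falling", 0.97, sent.append)   # critical + confident: alert
dispatch_alert("walking", 0.99, sent.append)   # non-critical: no alert
dispatch_alert("falling", 0.50, sent.append)   # below threshold: no alert
print(sent)
```

Decoupling detection from delivery is what makes the system "scalable" in the abstract's sense: swapping the GSM module for Wi-Fi, or SMS for push notifications, changes only the injected sender.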

Related Papers

ZKP-FedEval: Verifiable and Privacy-Preserving Federated Evaluation using Zero-Knowledge Proofs (2025-07-15)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
SEZ-HARN: Self-Explainable Zero-shot Human Activity Recognition Network (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Efficient Retail Video Annotation: A Robust Key Frame Generation Approach for Product and Customer Interaction Analysis (2025-06-17)
DeSPITE: Exploring Contrastive Deep Skeleton-Pointcloud-IMU-Text Embeddings for Advanced Point Cloud Human Activity Understanding (2025-06-16)
MORIC: CSI Delay-Doppler Decomposition for Robust Wi-Fi-based Human Activity Recognition (2025-06-15)
AgentSense: Virtual Sensor Data Generation Using LLM Agents in Simulated Home Environments (2025-06-13)