Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Adaptive Feature Processing for Robust Human Activity Recognition on a Novel Multi-Modal Dataset

Mirco Moencks, Varuna De Silva, Jamie Roche, Ahmet Kondoz

2019-01-09 · Autonomous Vehicles · Sports Analytics · Human Activity Recognition · Multimodal Activity Recognition · BIG-bench Machine Learning · Activity Recognition

Paper · PDF

Abstract

Human Activity Recognition (HAR) is a key building block of many emerging applications such as intelligent mobility, sports analytics, ambient-assisted living and human-robot interaction. With robust HAR, systems will become more human-aware, leading to safer and more empathetic autonomous systems. While human pose detection has made significant progress with the dawn of deep convolutional neural networks (CNNs), state-of-the-art research has almost exclusively focused on a single sensing modality, especially video. However, in safety-critical applications it is imperative to utilize multiple sensor modalities for robust operation. To exploit the benefits of state-of-the-art machine learning techniques for HAR, it is extremely important to have multimodal datasets. In this paper, we present a novel, multi-modal sensor dataset that encompasses nine indoor activities, performed by 16 participants, and captured by four types of sensors that are commonly used in indoor applications and autonomous vehicles. This multimodal dataset is the first of its kind to be made openly available and can be exploited for many applications that require HAR, including sports analytics, healthcare assistance and indoor intelligent mobility. We propose a novel data preprocessing algorithm to enable adaptive feature extraction from the dataset to be utilized by different machine learning algorithms. Through rigorous experimental evaluations, this paper reviews the performance of machine learning approaches to posture recognition, and analyses the robustness of the algorithms. When performing HAR with the RGB-Depth data from our new dataset, machine learning algorithms such as a deep neural network reached a mean accuracy of up to 96.8% for classification across all stationary and dynamic activities.

Results

Task                 | Dataset  | Metric   | Value | Model
Activity Recognition | LboroHAR | Accuracy | 97.9  | Cubic SVM
Activity Recognition | LboroHAR | Accuracy | 95.0  | Deep Neural Net
Activity Recognition | LboroHAR | Accuracy | 92.5  | Bagged Trees
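The best result in the table above is a "Cubic SVM", i.e. a support vector machine with a degree-3 polynomial kernel. The LboroHAR dataset is not bundled with this page, so the sketch below uses synthetic, well-separated feature vectors as a stand-in for the extracted multimodal features; the class count (nine activities) follows the paper, while the feature dimensionality and cluster parameters are arbitrary assumptions.

```python
# Minimal sketch of a cubic-SVM activity classifier, assuming synthetic
# stand-in features (the real LboroHAR features are not available here).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 9, 40, 12  # nine activities, as in the paper

# Well-separated Gaussian clusters stand in for per-activity feature vectors.
X = np.vstack([rng.normal(loc=3.0 * c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardize features, then fit a degree-3 polynomial-kernel SVM ("cubic SVM").
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="poly", degree=3, C=1.0).fit(scaler.transform(X_train), y_train)
acc = clf.score(scaler.transform(X_test), y_test)
print(f"test accuracy: {acc:.3f}")
```

On real data the choice between the SVM, a deep network, or bagged trees would follow the same fit/score pattern; only the estimator changes.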

Related Papers

- Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
- ZKP-FedEval: Verifiable and Privacy-Preserving Federated Evaluation using Zero-Knowledge Proofs (2025-07-15)
- Fast and Accurate Collision Probability Estimation for Autonomous Vehicles using Adaptive Sigma-Point Sampling (2025-07-08)
- Robustifying 3D Perception through Least-Squares Multi-Agent Graphs Object Tracking (2025-07-07)
- LLM-based Realistic Safety-Critical Driving Video Generation (2025-07-02)
- A Survey on Vision-Language-Action Models for Autonomous Driving (2025-06-30)
- Where, What, Why: Towards Explainable Driver Attention Prediction (2025-06-29)
- Coordinated Control of Autonomous Vehicles for Traffic Density Reduction at a Signalized Junction: An MPC Approach (2025-06-26)