Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Context-Aware Predictive Coding: A Representation Learning Framework for WiFi Sensing

Borna Barahimi, Hina Tabassum, Mohammad Omer, Omer Waqar

2024-09-20 · Representation Learning · Self-Supervised Learning · Human Activity Recognition · Transfer Learning · Gesture Recognition · Activity Recognition

Paper · PDF · Code

Abstract

WiFi sensing is an emerging technology that utilizes wireless signals for various sensing applications. However, the reliance on supervised learning, the scarcity of labelled data, and the difficulty of interpreting channel state information (CSI) pose significant challenges. These issues affect deep learning models' performance and generalization across different environments. Consequently, self-supervised learning (SSL) is emerging as a promising strategy to extract meaningful data representations with minimal reliance on labelled samples. In this paper, we introduce a novel SSL framework called Context-Aware Predictive Coding (CAPC), which effectively learns from unlabelled data and adapts to diverse environments. CAPC integrates elements of Contrastive Predictive Coding (CPC) and the augmentation-based SSL method Barlow Twins, promoting temporal and contextual consistency in data representations. This hybrid approach captures essential temporal information in CSI, crucial for tasks like human activity recognition (HAR), and ensures robustness against data distortions. Additionally, we propose a unique augmentation, employing both uplink and downlink CSI to isolate free-space propagation effects and minimize the impact of electronic distortions of the transceiver. Our evaluations demonstrate that CAPC not only outperforms other SSL methods and supervised approaches, but also achieves superior generalization capabilities. Specifically, CAPC requires fewer labelled samples while significantly outperforming supervised learning by an average margin of 30.53% and surpassing SSL baselines by 6.5% on average in low-labelled data scenarios. Furthermore, our transfer learning studies on an unseen dataset with a different HAR task and environment show an accuracy improvement of 1.8% over other SSL baselines and 24.7% over supervised learning, emphasizing its exceptional cross-domain adaptability. These results mark a significant breakthrough in SSL applications for WiFi sensing, highlighting CAPC's environmental adaptability and reduced dependency on labelled data.
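The hybrid objective described above, a CPC-style contrastive prediction term combined with a Barlow Twins redundancy-reduction term, can be sketched as follows. This is a minimal illustration of the two standard losses being combined, not the paper's implementation: the embedding shapes, temperature, and the weighting between the two terms are assumptions.

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Redundancy-reduction loss between two views (Barlow Twins)."""
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    n = z1.shape[0]
    c = z1.T @ z2 / n  # d x d cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()          # pull diagonal to 1
    off_diag = ((c - np.diag(np.diag(c))) ** 2).sum()  # push off-diagonal to 0
    return on_diag + lam * off_diag

def info_nce_loss(pred, target, temperature=0.1):
    """CPC-style contrastive loss: each prediction of a future timestep
    should match its own target among all targets in the batch."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    target = target / np.linalg.norm(target, axis=1, keepdims=True)
    logits = pred @ target.T / temperature  # batch x batch similarities
    # Log-softmax over each row; positives sit on the diagonal.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def capc_style_loss(pred, target, z1, z2, alpha=1.0):
    # `alpha` weighting between the two terms is a hypothetical choice.
    return info_nce_loss(pred, target) + alpha * barlow_twins_loss(z1, z2)
```

Here `pred`/`target` stand for predicted and encoded future-timestep embeddings (the temporal CPC branch), while `z1`/`z2` stand for embeddings of two augmented views of the same CSI window (the contextual Barlow Twins branch), such as the uplink/downlink augmentation the paper proposes.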

Related Papers

Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation (2025-09-04)
Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)