Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Few-Shot Classification of Interactive Activities of Daily Living (InteractADL)

Zane Durante, Robathan Harries, Edward Vendrow, Zelun Luo, Yuta Kyuragi, Kazuki Kozuka, Li Fei-Fei, Ehsan Adeli

2024-06-03 · Few-Shot Action Recognition · Video Classification · Fine-Grained Visual Recognition · Fine-Grained Image Classification

Paper · PDF · Code (official)

Abstract

Understanding Activities of Daily Living (ADLs) is a crucial step for different applications including assistive robots, smart homes, and healthcare. However, to date, few benchmarks and methods have focused on complex ADLs, especially those involving multi-person interactions in home environments. In this paper, we propose a new dataset and benchmark, InteractADL, for understanding complex ADLs that involve interaction between humans (and objects). Furthermore, complex ADLs occurring in home environments comprise a challenging long-tailed distribution due to the rarity of multi-person interactions, and pose fine-grained visual recognition tasks due to the presence of semantically and visually similar classes. To address these issues, we propose a novel method for fine-grained few-shot video classification called Name Tuning that enables greater semantic separability by learning optimal class name vectors. We show that Name Tuning can be combined with existing prompt tuning strategies to learn the entire input text (rather than only learning the prompt or class names) and demonstrate improved performance for few-shot classification on InteractADL and 4 other fine-grained visual classification benchmarks. For transparency and reproducibility, we release our code at https://github.com/zanedurante/vlm_benchmark.
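The abstract describes Name Tuning as learning optimal class name vectors against a frozen text encoder, so that semantically similar class names become more separable. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the class names are replaced by learnable embedding sequences, the encoder is frozen, and logits are cosine similarities between video features and the encoded names. `MeanPoolEncoder` is a toy stand-in for a real vision-language text encoder such as CLIP's; all names and shapes here are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NameTunedClassifier(nn.Module):
    """Sketch of Name Tuning: only the class-name embedding vectors are
    learned; the text encoder stays frozen."""

    def __init__(self, num_classes, name_len, embed_dim, text_encoder):
        super().__init__()
        # One learnable token sequence per class, standing in for the fixed
        # embeddings of the original class-name words.
        self.name_vectors = nn.Parameter(
            0.02 * torch.randn(num_classes, name_len, embed_dim))
        self.text_encoder = text_encoder
        for p in self.text_encoder.parameters():
            p.requires_grad = False  # freeze encoder; optimize name vectors only

    def forward(self, video_features):
        class_emb = self.text_encoder(self.name_vectors)   # (num_classes, dim)
        class_emb = F.normalize(class_emb, dim=-1)
        video_features = F.normalize(video_features, dim=-1)
        return video_features @ class_emb.T                # cosine-similarity logits

class MeanPoolEncoder(nn.Module):
    """Toy frozen 'text encoder': mean-pool the token sequence, then project."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens):  # tokens: (num_classes, name_len, dim)
        return self.proj(tokens.mean(dim=1))

model = NameTunedClassifier(num_classes=5, name_len=2, embed_dim=16,
                            text_encoder=MeanPoolEncoder(16))
logits = model(torch.randn(4, 16))  # 4 video clips scored against 5 classes
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```

In this setup a few-shot episode would backpropagate a cross-entropy loss on `logits` into `name_vectors` alone; combining it with prompt tuning, as the paper proposes, would additionally learn the prompt-context tokens surrounding the name vectors.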

Results

Task | Dataset | Metric | Value | Model
Activity Recognition | Kinetics-100 | Accuracy | 94.7 | Name Tuning
Activity Recognition | MOMA-LRG | Activity Classification Accuracy (5-shot 5-way) | 97.9 | Name Tuning
Activity Recognition | MOMA-LRG | Subactivity Classification Accuracy (5-shot 5-way) | 78.2 | Name Tuning
Action Recognition | Kinetics-100 | Accuracy | 94.7 | Name Tuning
Action Recognition | MOMA-LRG | Activity Classification Accuracy (5-shot 5-way) | 97.9 | Name Tuning
Action Recognition | MOMA-LRG | Subactivity Classification Accuracy (5-shot 5-way) | 78.2 | Name Tuning

Related Papers

ActAlign: Zero-Shot Fine-Grained Video Classification via Language-Guided Sequence Alignment (2025-06-28)
Hierarchical Mask-Enhanced Dual Reconstruction Network for Few-Shot Fine-Grained Image Classification (2025-06-25)
Active Multimodal Distillation for Few-shot Action Recognition (2025-06-16)
Exploring Audio Cues for Enhanced Test-Time Video Model Adaptation (2025-06-14)
Structural feature enhanced transformer for fine-grained image recognition (2025-06-14)
GPLQ: A General, Practical, and Lightning QAT Method for Vision Transformers (2025-06-13)
Spatiotemporal Analysis of Forest Machine Operations Using 3D Video Classification (2025-05-30)
Towards Privacy-Preserving Fine-Grained Visual Classification via Hierarchical Learning from Label Proportions (2025-05-29)