Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Can't make an Omelette without Breaking some Eggs: Plausible Action Anticipation using Large Video-Language Models

Himangi Mittal, Nakul Agarwal, Shao-Yuan Lo, Kwonjoon Lee

2024-05-30 · CVPR 2024 · Tasks: Long Term Action Anticipation, Action Anticipation, Language Modelling

Abstract

We introduce PlausiVL, a large video-language model for anticipating action sequences that are plausible in the real world. While significant effort has gone into anticipating future actions, prior approaches do not account for the plausibility of an action sequence. To address this limitation, we explore the generative capability of a large video-language model and develop its understanding of plausibility through two objective functions: a counterfactual-based plausible action sequence learning loss and a long-horizon action repetition loss. We use temporal logical constraints as well as verb-noun action-pair logical constraints to create implausible/counterfactual action sequences, and train the model on them with the plausible action sequence learning loss. This loss helps the model differentiate plausible from implausible action sequences and learn the implicit temporal cues crucial for action anticipation. The long-horizon action repetition loss places a higher penalty on actions that are more prone to repetition over a longer temporal window; with this penalization, the model is able to generate diverse, plausible action sequences. We evaluate our approach on two large-scale datasets, Ego4D and EPIC-Kitchens-100, and show improvements on the task of action anticipation.
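The long-horizon repetition idea in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's actual loss: the function name, the sliding-window formulation, and the distance-proportional weighting are all assumptions made for exposition.

```python
# Illustrative sketch of a long-horizon action repetition penalty.
# NOT the paper's exact objective: the window size, weighting scheme,
# and function signature here are assumptions for exposition only.

def repetition_penalty(actions, window=5, weight=1.0):
    """Penalize actions that repeat within a sliding temporal window.

    Repeats found farther apart (up to `window` steps back) incur a
    larger penalty, mirroring the idea that repetition over a longer
    temporal horizon should be discouraged more strongly.
    """
    penalty = 0.0
    for t, action in enumerate(actions):
        for dt in range(1, window + 1):
            if t - dt >= 0 and actions[t - dt] == action:
                # scale the contribution with the temporal distance dt
                penalty += weight * dt / window
    return penalty

diverse = ["wash", "cut", "fry", "plate", "serve"]
repetitive = ["cut", "cut", "cut", "cut", "cut"]
print(repetition_penalty(diverse))     # 0.0: no repeats in the window
print(repetition_penalty(repetitive))  # > 0: every step repeats
```

A training objective would fold such a term into the language-modeling loss; here it only demonstrates that degenerate, repetitive sequences score strictly worse than diverse ones.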

Results

Task                | Dataset           | Metric     | Value | Model
--------------------|-------------------|------------|-------|---------
Action Anticipation | EPIC-KITCHENS-100 | Recall@5   | 27.6  | PlausiVL
Action Anticipation | EPIC-KITCHENS-100 | Top-5 Noun | 54.23 | PlausiVL
Action Anticipation | EPIC-KITCHENS-100 | Top-5 Verb | 55.62 | PlausiVL
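A minimal sketch of how a Top-5 metric like those in the table can be computed. The exact EPIC-KITCHENS-100 anticipation protocol may differ (e.g. class-mean recall rather than instance-level accuracy), so treat this as illustrative only.

```python
# Hedged sketch of a Top-k hit-rate computation; the benchmark's
# official evaluation (class-mean Top-5 recall) may differ.

def top_k_hit_rate(topk_preds, labels):
    """Percentage of samples whose true label appears in its top-k list.

    topk_preds: list of per-sample lists of k predicted labels.
    labels:     list of ground-truth labels, one per sample.
    """
    hits = sum(1 for preds, y in zip(topk_preds, labels) if y in preds)
    return 100.0 * hits / len(labels)

preds = [["cut", "wash", "fry", "stir", "peel"],
         ["open", "close", "pour", "mix", "take"]]
labels = ["fry", "shake"]
print(top_k_hit_rate(preds, labels))  # 50.0: one hit out of two samples
```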

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)