Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


GEB+: A Benchmark for Generic Event Boundary Captioning, Grounding and Retrieval

Yuxuan Wang, Difei Gao, Licheng Yu, Stan Weixian Lei, Matt Feiszli, Mike Zheng Shou

2022-04-01 · Boundary Captioning · Text to Video Retrieval · Retrieval · Boundary Grounding

Paper · PDF · Code (official)

Abstract

Cognitive science has shown that humans perceive videos in terms of events separated by the state changes of dominant subjects. State changes trigger new events and are among the most useful cues within the large amount of otherwise redundant information perceived. However, previous research focuses on the overall understanding of segments without evaluating the fine-grained status changes inside them. In this paper, we introduce a new dataset called Kinetic-GEB+. The dataset consists of over 170K boundaries associated with captions describing status changes in the generic events of 12K videos. On top of this new dataset, we propose three tasks supporting the development of a more fine-grained, robust, and human-like understanding of videos through status changes. We evaluate many representative baselines on our dataset, where we also design a new TPD (Temporal-based Pairwise Difference) Modeling method for visual difference and achieve significant performance improvements. Moreover, the results show there are still formidable challenges for current methods in the utilization of different granularities, the representation of visual difference, and the accurate localization of status changes. Further analysis shows that our dataset can drive the development of more powerful methods to understand status changes and thus improve video-level comprehension. The dataset is available at https://github.com/showlab/GEB-Plus
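The abstract's TPD (Temporal-based Pairwise Difference) Modeling represents the visual difference around a boundary by comparing frames before and after it. The paper's exact formulation lives in the linked repository; the following is only a minimal illustrative sketch of the pairwise-difference idea, assuming frame features are given as plain NumPy arrays (the function name and pooling choice are hypothetical, not the authors' API):

```python
import numpy as np

def tpd_difference(before_feats: np.ndarray, after_feats: np.ndarray) -> np.ndarray:
    """Illustrative pairwise-difference representation of the visual change
    around a boundary.

    before_feats: (n_before, d) frame features sampled before the boundary.
    after_feats:  (n_after, d)  frame features sampled after the boundary.

    NOTE: hypothetical sketch of the pairwise-difference idea; see the
    official GEB+ code for the actual TPD Modeling formulation.
    """
    # Difference between every (before, after) frame pair: (n_before, n_after, d).
    diffs = after_feats[None, :, :] - before_feats[:, None, :]
    # Pool all pairwise differences into one vector describing the state change.
    return diffs.reshape(-1, diffs.shape[-1]).mean(axis=0)
```

In this sketch the pooled vector is what a downstream captioning or grounding head would consume alongside the raw frame features.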

Results

Task | Dataset | Metric | Value | Model
Boundary Grounding | Kinetics-GEB+ | F1@0.1s | 4.28 | FROZEN-revised
Boundary Grounding | Kinetics-GEB+ | F1@0.2s | 8.54 | FROZEN-revised
Boundary Grounding | Kinetics-GEB+ | F1@0.5s | 18.33 | FROZEN-revised
Boundary Grounding | Kinetics-GEB+ | F1@1.0s | 31.04 | FROZEN-revised
Boundary Grounding | Kinetics-GEB+ | F1@1.5s | 40.48 | FROZEN-revised
Boundary Grounding | Kinetics-GEB+ | F1@2.0s | 47.86 | FROZEN-revised
Boundary Grounding | Kinetics-GEB+ | F1@2.5s | 54.81 | FROZEN-revised
Boundary Grounding | Kinetics-GEB+ | F1@3.0s | 61.45 | FROZEN-revised
Boundary Grounding | Kinetics-GEB+ | F1@Avg | 33.35 | FROZEN-revised
Boundary Captioning | Kinetics-GEB+ | CIDEr | 74.71 | ActBERT-revised
Boundary Captioning | Kinetics-GEB+ | ROUGE-L | 28.15 | ActBERT-revised
Boundary Captioning | Kinetics-GEB+ | SPICE | 19.52 | ActBERT-revised
Text to Video Retrieval | Kinetics-GEB+ | mAP | 23.39 | FROZEN-revised
Text to Video Retrieval | Kinetics-GEB+ | text-to-video R@1 | 12.8 | FROZEN-revised (two-stream)
Text to Video Retrieval | Kinetics-GEB+ | text-to-video R@5 | 34.81 | FROZEN-revised (two-stream)
Text to Video Retrieval | Kinetics-GEB+ | text-to-video R@10 | 45.66 | FROZEN-revised (two-stream)
Text to Video Retrieval | Kinetics-GEB+ | text-to-video R@50 | 68.1 | FROZEN-revised (two-stream)
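The grounding rows above report F1 at a temporal threshold: a predicted boundary counts as correct if it falls within the given number of seconds of a ground-truth boundary, and F1@Avg averages over the thresholds. The exact matching protocol is defined in the official GEB+ evaluation code; the sketch below assumes a simple greedy one-to-one matching, which is a common convention but not guaranteed to be identical:

```python
def boundary_f1(pred_ts, gt_ts, threshold):
    """F1 at a temporal threshold (seconds).

    A prediction matches an as-yet-unmatched ground-truth boundary if their
    timestamps differ by at most `threshold`. Greedy one-to-one matching is
    an assumed protocol; see the official GEB+ code for the exact definition.
    """
    if not pred_ts or not gt_ts:
        return 0.0
    matched_gt = set()
    tp = 0
    for p in sorted(pred_ts):
        # Greedily match to the closest unmatched ground truth within threshold.
        best, best_d = None, threshold
        for i, g in enumerate(gt_ts):
            if i in matched_gt:
                continue
            d = abs(p - g)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            matched_gt.add(best)
            tp += 1
    precision = tp / len(pred_ts)
    recall = tp / len(gt_ts)
    return 2 * precision * recall / (precision + recall) if tp else 0.0
```

For example, with one of two predictions landing within 0.1 s of a ground-truth boundary, precision and recall are both 0.5, giving F1 = 0.5.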

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
Context-Aware Search and Retrieval Over Erasure Channels (2025-07-16)
Seq vs Seq: An Open Suite of Paired Encoders and Decoders (2025-07-15)