Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Object Permanence from Video

Aviv Shamsian, Ofri Kleinfeld, Amir Globerson, Gal Chechik

2020-03-23 · ECCV 2020 · Video Object Tracking

Abstract

Object Permanence allows people to reason about the location of non-visible objects, by understanding that they continue to exist even when not perceived directly. Object Permanence is critical for building a model of the world, since objects in natural visual scenes dynamically occlude and contain each other. Intensive studies in developmental psychology suggest that object permanence is a challenging task that is learned through extensive experience. Here we introduce the setup of learning Object Permanence from data. We explain why this learning problem should be dissected into four components, where objects are (1) visible, (2) occluded, (3) contained by another object, and (4) carried by a containing object. The fourth subtask, where a target object is carried by a containing object, is particularly challenging because it requires a system to reason about the moving location of an invisible object. We then present a unified deep architecture that learns to predict object location under these four scenarios. We evaluate the architecture on a new dataset based on CATER, and find that it outperforms previous localization methods and various baselines.
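The four-way decomposition described in the abstract can be sketched as a per-frame labeling rule. The annotation fields and function below are illustrative assumptions, not the authors' code or the CATER annotation schema.

```python
# Hypothetical sketch: map a frame annotation to one of the four
# object-permanence subtasks named in the abstract. Field names are
# assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class FrameAnnotation:
    target_visible: bool    # target object is directly perceivable
    inside_container: bool  # target is inside another object
    container_moving: bool  # the containing object is moving

def permanence_state(ann: FrameAnnotation) -> str:
    """Return 'visible', 'occluded', 'contained', or 'carried'."""
    if ann.target_visible:
        return "visible"
    if ann.inside_container:
        # 'carried' is the hardest case: the target is invisible
        # and its location changes with the container's motion.
        return "carried" if ann.container_moving else "contained"
    return "occluded"
```

The ordering of the checks reflects the abstract's hierarchy: visibility overrides containment, and containment with motion is the distinct "carried" case.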

Results

Task            | Dataset | Metric          | Value | Model
Video           | CATER   | L1              | 0.54  | OPNet
Video           | CATER   | Top 1 Accuracy  | 74.8  | OPNet
Object Tracking | CATER   | L1              | 0.54  | OPNet
Object Tracking | CATER   | Top 1 Accuracy  | 74.8  | OPNet
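The two metrics in the table can be computed in a few lines. This is a minimal sketch assuming L1 is the mean per-frame L1 distance between predicted and ground-truth target locations and Top-1 accuracy is the fraction of frames where the predicted position (e.g., a grid cell) matches the ground truth; these conventions are assumptions, not confirmed by this page.

```python
def l1_error(pred, gt):
    """Mean per-frame L1 distance between predicted and true 2-D locations.

    pred, gt: equal-length lists of (x, y) tuples.
    Coordinates are assumed to share one normalization convention.
    """
    assert len(pred) == len(gt) and pred
    total = sum(abs(px - gx) + abs(py - gy)
                for (px, py), (gx, gy) in zip(pred, gt))
    return total / len(pred)

def top1_accuracy(pred_cells, gt_cells):
    """Fraction of frames where the predicted cell equals the true cell."""
    correct = sum(p == g for p, g in zip(pred_cells, gt_cells))
    return correct / len(pred_cells)
```

For example, `l1_error([(0, 0), (1, 1)], [(0.5, 0), (1, 1)])` returns 0.25, since only the first frame is off by 0.5 in x.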

Related Papers

- HiM2SAM: Enhancing SAM2 with Hierarchical Motion Estimation and Memory Optimization towards Long-term Tracking (2025-07-10)
- Enhancing Self-Supervised Fine-Grained Video Object Tracking with Dynamic Memory Prediction (2025-04-30)
- Exploiting Multimodal Spatial-temporal Patterns for Video Object Tracking (2024-12-20)
- Exploring Enhanced Contextual Information for Video-Level Object Tracking (2024-12-15)
- Referring Video Object Segmentation via Language-aligned Track Selection (2024-12-02)
- Teaching VLMs to Localize Specific Objects from In-context Examples (2024-11-20)
- NT-VOT211: A Large-Scale Benchmark for Night-time Visual Object Tracking (2024-10-27)
- Depth Attention for Robust RGB Tracking (2024-10-27)