Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Representation Recycling for Streaming Video Analysis

Can Ufuk Ertenli, Ramazan Gokberk Cinbis, Emre Akbas

2022-04-28 · Video Object Detection · Semantic Segmentation · Pose Estimation · Video Semantic Segmentation · Object Detection
Paper · PDF · Code (official)

Abstract

We present StreamDEQ, a method that infers frame-wise representations on videos with minimal per-frame computation. In the absence of ad-hoc solutions, conventional deep networks perform feature extraction from scratch at each frame. We instead aim to build streaming recognition models that natively exploit the temporal smoothness between consecutive video frames. We observe that the recently emerging implicit-layer models provide a convenient foundation for such models, as they define representations as the fixed points of shallow networks, which must be estimated with iterative methods. Our main insight is to distribute the inference iterations over the temporal axis by using the most recent representation as the starting point at each frame. This scheme effectively recycles recent inference computations and greatly reduces the required processing time. Through extensive experimental analysis, we show that StreamDEQ recovers near-optimal representations within a few frames' time and maintains an up-to-date representation throughout the video. Our experiments on video semantic segmentation, video object detection, and human pose estimation in videos show that StreamDEQ achieves on-par accuracy with the baseline while being more than 2-4x faster.
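The core idea in the abstract can be sketched in a few lines. Below is a minimal toy illustration (not the authors' implementation): a contractive implicit layer whose representation is a fixed point found by iteration, where each new frame warm-starts from the previous frame's representation instead of from scratch. The layer definition, weight shapes, and iteration count are illustrative assumptions.

```python
import numpy as np

def deq_layer(z, x, W_z, W_x):
    # One step of a toy implicit layer; its fixed point z* satisfies
    # z* = tanh(W_z @ z* + W_x @ x). (Illustrative stand-in for a DEQ.)
    return np.tanh(W_z @ z + W_x @ x)

def streamdeq(frames, W_z, W_x, iters_per_frame=4):
    """Warm-started fixed-point iteration across a frame stream
    (the StreamDEQ idea in miniature): rather than solving for the
    fixed point from scratch at every frame, reuse the previous
    frame's representation as the initial estimate and run only a
    few iterations, spreading inference over the temporal axis."""
    d = W_z.shape[0]
    z = np.zeros(d)  # cold start only on the very first frame
    representations = []
    for x in frames:
        for _ in range(iters_per_frame):
            z = deq_layer(z, x, W_z, W_x)
        representations.append(z.copy())
    return representations
```

When consecutive frames are similar, the previous fixed-point estimate is already close to the new one, so a handful of iterations per frame suffices; this is the "recycling" that trades redundant computation for temporal smoothness.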

Results

Task: Semantic Segmentation · Dataset: Cityscapes val

Model                      FPS   mIoU
StreamDEQ (8 iterations)   1.1   78.2
StreamDEQ (4 iterations)   1.9   71.5
StreamDEQ (2 iterations)   2.9   57.9
StreamDEQ (1 iteration)    4.3   45.5

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)