Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Scalable Adaptive Computation for Iterative Generation

Allan Jabri, David Fleet, Ting Chen

2022-12-22 · Video Prediction · Image Generation · Video Generation
Paper · PDF · Code (official) · Code

Abstract

Natural data is redundant, yet predominant architectures tile computation uniformly across their input and output space. We propose Recurrent Interface Networks (RINs), an attention-based architecture that decouples its core computation from the dimensionality of the data, enabling adaptive computation for more scalable generation of high-dimensional data. RINs focus the bulk of computation (i.e. global self-attention) on a set of latent tokens, using cross-attention to read and write (i.e. route) information between latent and data tokens. Stacking RIN blocks allows bottom-up (data to latent) and top-down (latent to data) feedback, leading to deeper and more expressive routing. While this routing introduces challenges, it is less problematic in recurrent computation settings where the task (and routing problem) changes gradually, such as iterative generation with diffusion models. We show how to leverage recurrence by conditioning the latent tokens at each forward pass of the reverse diffusion process with those from prior computation, i.e. latent self-conditioning. RINs yield state-of-the-art pixel diffusion models for image and video generation, scaling to 1024×1024 images without cascades or guidance, while being domain-agnostic and up to 10× more efficient than 2D and 3D U-Nets.
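The read/compute/write routing described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration (single-head attention, no learned projections, layer norms, or MLPs, all of which the real architecture uses): latents first cross-attend to the data tokens (read), then self-attend among themselves (the bulk of computation), and finally the data tokens cross-attend back to the latents (write). Note that no data-to-data attention ever occurs, so cost in the data length stays linear per block.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, kv):
    """Single-head dot-product attention: rows of q read from tokens kv."""
    scores = q @ kv.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ kv

def rin_block(latents, data):
    """One simplified RIN block (illustrative only):
    read:    latents cross-attend to data tokens  (data -> latent),
    compute: self-attention among the latents only,
    write:   data tokens cross-attend to latents  (latent -> data)."""
    latents = latents + attend(latents, data)     # read
    latents = latents + attend(latents, latents)  # core computation on latents
    data = data + attend(data, latents)           # write
    return latents, data

rng = np.random.default_rng(0)
data = rng.standard_normal((4096, 64))    # many data tokens (e.g. pixels/patches)
latents = rng.standard_normal((128, 64))  # far fewer latent tokens

latents, data = rin_block(latents, data)
print(latents.shape, data.shape)  # (128, 64) (4096, 64)
```

In the paper's iterative-generation setting, latent self-conditioning would amount to initializing `latents` at each reverse-diffusion step from the latents computed at the previous step, rather than from scratch; that warm start is what lets the learned routing persist across steps.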

Results

Task | Dataset | Metric | Value | Model
Image Generation | ImageNet 64x64 | FID | 1.23 | RIN
Image Generation | ImageNet 128x128 | FID | 2.75 | RIN
Image Generation | ImageNet 128x128 | IS | 144.1 | RIN
Image Generation | ImageNet 256x256 | FID | 4.51 | RIN
Video Generation | Kinetics-600 (12 frames, 64x64) | FVD | 10.8 | RIN (1000 steps)
Video Generation | Kinetics-600 (12 frames, 64x64) | IS | 17.7 | RIN (1000 steps)
Video Generation | Kinetics-600 (12 frames, 64x64) | FVD | 11.5 | RIN (400 steps)
Video Generation | Kinetics-600 (12 frames, 64x64) | IS | 17.7 | RIN (400 steps)
Video Prediction | Kinetics-600 (12 frames, 64x64) | FVD | 10.8 | RIN (1000 steps)
Video Prediction | Kinetics-600 (12 frames, 64x64) | IS | 17.7 | RIN (1000 steps)
Video Prediction | Kinetics-600 (12 frames, 64x64) | FVD | 11.5 | RIN (400 steps)
Video Prediction | Kinetics-600 (12 frames, 64x64) | IS | 17.7 | RIN (400 steps)

Related Papers

- fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
- Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection (2025-07-17)
- FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization (2025-07-17)
- A Distributed Generative AI Approach for Heterogeneous Multi-Domain Environments under Data Sharing constraints (2025-07-17)
- Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
- World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
- Leveraging Pre-Trained Visual Models for AI-Generated Video Detection (2025-07-17)
- Taming Diffusion Transformer for Real-Time Mobile Video Generation (2025-07-17)