Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


RVT: Robotic View Transformer for 3D Object Manipulation

Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, Dieter Fox

2023-06-26 · Robot Manipulation · Robot Manipulation Generalization
Paper (PDF) · Code (official)

Abstract

For 3D object manipulation, methods that build an explicit 3D representation perform better than those relying only on camera images. But using explicit 3D representations like voxels comes at a large computational cost, which hurts scalability. In this work, we propose RVT, a multi-view transformer for 3D manipulation that is both scalable and accurate. Key features of RVT are an attention mechanism that aggregates information across views and re-rendering of the camera input from virtual views around the robot workspace. In simulation, we find that a single RVT model works well across 18 RLBench tasks with 249 task variations, achieving a 26% higher relative success rate than the existing state-of-the-art method (PerAct). It also trains 36X faster than PerAct to reach the same performance and achieves 2.3X the inference speed of PerAct. Further, RVT can perform a variety of manipulation tasks in the real world with just a few ($\sim$10) demonstrations per task. Visual results, code, and the trained model are provided at https://robotic-view-transformer.github.io/.
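The core idea the abstract describes is that per-view features (from images re-rendered at virtual viewpoints around the workspace) are fused with attention rather than an expensive voxel grid. The sketch below is a minimal, hypothetical illustration of that fusion step, not the official RVT implementation: `aggregate_views` and the use of a mean-feature query are assumptions made for brevity.

```python
import numpy as np

def aggregate_views(view_features: np.ndarray) -> np.ndarray:
    """Fuse per-view features with scaled dot-product attention.

    view_features: (num_views, dim) array, one feature vector per
    rendered virtual view. Returns a single (dim,) fused feature.
    Hypothetical sketch; RVT's actual transformer is more elaborate.
    """
    num_views, dim = view_features.shape
    # Use the mean view feature as a single query for simplicity.
    query = view_features.mean(axis=0)                 # (dim,)
    scores = view_features @ query / np.sqrt(dim)      # (num_views,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax over views
    return weights @ view_features                     # (dim,)

# Example: five virtual views around the workspace, 64-dim features each.
fused = aggregate_views(np.random.randn(5, 64))
print(fused.shape)  # (64,)
```

Because the virtual views are rendered from an intermediate 3D reconstruction, the model keeps 3D awareness while its cost scales with the number of views rather than with a dense voxel grid.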

Results

| Task | Dataset | Metric | Value | Model |
|------|---------|--------|-------|-------|
| Robot Manipulation | RLBench | Inference Speed (fps) | 11.6 | RVT |
| Robot Manipulation | RLBench | Input Image Size | 128 | RVT |
| Robot Manipulation | RLBench | Succ. Rate (18 tasks, 100 demos/task) | 62.9 | RVT |
| Robot Manipulation | RLBench | Training Time (V100 x 8 x day) | 1 | RVT |

Related Papers

DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge (2025-07-06)
Geometry-aware 4D Video Generation for Robot Manipulation (2025-07-01)
CapsDT: Diffusion-Transformer for Capsule Robot Manipulation (2025-06-19)
Robust Instant Policy: Leveraging Student's t-Regression Model for Robust In-context Imitation Learning of Robot Manipulation (2025-06-18)
SENIOR: Efficient Query Selection and Preference-Guided Exploration in Preference-based Reinforcement Learning (2025-06-17)
What Matters in Learning from Large-Scale Datasets for Robot Manipulation (2025-06-16)
Demonstrating Multi-Suction Item Picking at Scale via Multi-Modal Learning of Pick Success (2025-06-12)
BridgeVLA: Input-Output Alignment for Efficient 3D Manipulation Learning with Vision-Language Models (2025-06-09)