
SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model

Delin Qu, Haoming Song, Qizhi Chen, Yuanqi Yao, Xinyi Ye, Yan Ding, Zhigang Wang, Jiayuan Gu, Bin Zhao, Dong Wang, Xuelong Li

2025-01-27 · Robot Manipulation
Paper · PDF

Abstract

In this paper, we claim that spatial understanding is the key point in robot manipulation, and propose SpatialVLA to explore effective spatial representations for the robot foundation model. Specifically, we introduce Ego3D Position Encoding to inject 3D information into the input observations of the visual-language-action model, and propose Adaptive Action Grids to represent spatial robot movement actions with adaptive discretized action grids, facilitating the learning of generalizable and transferable spatial action knowledge for cross-robot control. SpatialVLA is first pre-trained on top of a vision-language model with 1.1 million real-world robot episodes, to learn a generalist manipulation policy across multiple robot environments and tasks. After pre-training, SpatialVLA is directly applied to perform numerous tasks in a zero-shot manner. The superior results in both simulation and real-world robots demonstrate its advantage in inferring complex robot motion trajectories and its strong in-domain multi-task generalization ability. We further show that the proposed Adaptive Action Grids offer a new and effective way to fine-tune the pre-trained SpatialVLA model for new simulation and real-world setups, where the pre-learned action grids are re-discretized to capture robot-specific spatial action movements of the new setups. The superior results from extensive evaluations demonstrate exceptional in-distribution generalization and out-of-distribution adaptation capability, highlighting the crucial benefit of the proposed spatial-aware representations for generalist robot policy learning. All details and code will be open-sourced.
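The abstract does not spell out how the adaptive action grids are constructed; the sketch below is an assumption on our part, illustrating one plausible reading: per-dimension bin edges are fit to the empirical distribution of training actions (here via quantiles, so frequent motions get finer resolution than a uniform grid), and can later be re-fit ("re-discretized") on a new robot's data. All function names and the quantile-based rule are illustrative, not the authors' implementation.

```python
# Hedged sketch of adaptive action discretization (assumed quantile-based binning).
import numpy as np

def fit_adaptive_grid(actions: np.ndarray, bins_per_dim: int = 16) -> list[np.ndarray]:
    """Fit per-dimension bin edges from the empirical action distribution."""
    qs = np.linspace(0.0, 1.0, bins_per_dim + 1)
    return [np.quantile(actions[:, d], qs) for d in range(actions.shape[1])]

def discretize(action: np.ndarray, edges: list[np.ndarray]) -> np.ndarray:
    """Map a continuous action to one grid index (token) per dimension."""
    return np.array([
        np.clip(np.searchsorted(e, a, side="right") - 1, 0, len(e) - 2)
        for a, e in zip(action, edges)
    ])

# Example: 7-DoF delta actions (translation, rotation, gripper), dummy data.
rng = np.random.default_rng(0)
demo_actions = rng.normal(size=(10_000, 7)) * 0.05
grid = fit_adaptive_grid(demo_actions, bins_per_dim=16)      # pre-training grid
tokens = discretize(demo_actions[0], grid)                   # action -> grid tokens
new_robot_grid = fit_adaptive_grid(demo_actions[:2_000], 16) # "re-discretize" on new data
print(tokens)
```

Under this reading, fine-tuning for a new setup amounts to recomputing the bin edges on the new robot's action statistics while reusing the pre-trained model, which is consistent with the abstract's description of re-discretizing the pre-learned action grids.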

Results

Task               | Dataset                 | Metric                                        | Value | Model
Robot Manipulation | SimplerEnv-Google Robot | Variant Aggregation                           | 0.688 | SpatialVLA
Robot Manipulation | SimplerEnv-Google Robot | Variant Aggregation - Move Near               | 0.717 | SpatialVLA
Robot Manipulation | SimplerEnv-Google Robot | Variant Aggregation - Open/Close Drawer       | 0.362 | SpatialVLA
Robot Manipulation | SimplerEnv-Google Robot | Variant Aggregation - Pick Coke Can           | 0.895 | SpatialVLA
Robot Manipulation | SimplerEnv-Google Robot | Visual Matching                               | 0.719 | SpatialVLA
Robot Manipulation | SimplerEnv-Google Robot | Visual Matching - Move Near                   | 0.696 | SpatialVLA
Robot Manipulation | SimplerEnv-Google Robot | Visual Matching - Open/Close Drawer           | 0.593 | SpatialVLA
Robot Manipulation | SimplerEnv-Google Robot | Visual Matching - Pick Coke Can               | 0.81  | SpatialVLA
Robot Manipulation | SimplerEnv-Widow X      | Average                                       | 0.344 | SpatialVLA
Robot Manipulation | SimplerEnv-Widow X      | Put Carrot on Plate                           | 0.208 | SpatialVLA
Robot Manipulation | SimplerEnv-Widow X      | Put Spoon on Towel                            | 0.208 | SpatialVLA
Robot Manipulation | SimplerEnv-Widow X      | Stack Green Block on Yellow Block             | 0.25  | SpatialVLA

Related Papers

DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge (2025-07-06)
Geometry-aware 4D Video Generation for Robot Manipulation (2025-07-01)
CapsDT: Diffusion-Transformer for Capsule Robot Manipulation (2025-06-19)
Robust Instant Policy: Leveraging Student's t-Regression Model for Robust In-context Imitation Learning of Robot Manipulation (2025-06-18)
SENIOR: Efficient Query Selection and Preference-Guided Exploration in Preference-based Reinforcement Learning (2025-06-17)
What Matters in Learning from Large-Scale Datasets for Robot Manipulation (2025-06-16)
Demonstrating Multi-Suction Item Picking at Scale via Multi-Modal Learning of Pick Success (2025-06-12)
BridgeVLA: Input-Output Alignment for Efficient 3D Manipulation Learning with Vision-Language Models (2025-06-09)