Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models

Xinghang Li, Peiyan Li, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Tao Kong, Hanbo Zhang, Huaping Liu

Published: 2024-12-18
Tasks: Representation Learning, Robot Manipulation, Vision-Language-Action

Abstract

Foundation Vision-Language Models (VLMs) exhibit strong capabilities in multi-modal representation learning, comprehension, and reasoning. By injecting action components into a VLM, a Vision-Language-Action model (VLA) can be formed naturally, and such models have shown promising performance. Existing work has demonstrated the effectiveness and generalization of VLAs across multiple scenarios and tasks. Nevertheless, the transfer from VLM to VLA is not trivial, since existing VLAs differ in their backbones, action-prediction formulations, data distributions, and training recipes, leaving a gap in the systematic understanding of VLA design choices. In this work, we identify the key factors that significantly influence VLA performance and focus on three essential design questions: which backbone to select, how to formulate the VLA architecture, and when to add cross-embodiment data. The results firmly justify why we need VLAs and inform the development of a new family of VLAs, RoboVLMs, which requires very little manual design and achieves new state-of-the-art performance on three simulation benchmarks and in real-world experiments. Through extensive experiments covering over 8 VLM backbones, 4 policy architectures, and over 600 distinct experiment designs, we provide a detailed guidebook for the future design of VLAs. In addition to the study, we release the highly flexible RoboVLMs framework, which supports easy integration of new VLMs and free combination of design choices, to facilitate future research. All details, including code, models, datasets, and toolkits, along with detailed training and evaluation recipes, are open-sourced at robovlms.github.io.
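The recipe the abstract describes, attaching an action component to a pretrained VLM backbone so that fused vision-language features are decoded into robot actions, can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration and not the RoboVLMs implementation: the toy backbone, all module names, and the dimensions (including the 7-dimensional action) are assumptions chosen for demonstration.

```python
# Minimal sketch (NOT the authors' code) of the VLM -> VLA idea:
# a pretrained VLM backbone produces fused vision-language features,
# and an action head maps them to robot actions.
import torch
import torch.nn as nn

class ToyVLMBackbone(nn.Module):
    """Stand-in for a pretrained VLM; all dimensions are illustrative."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.vision_proj = nn.Linear(768, dim)      # e.g. ViT patch features
        self.text_embed = nn.Embedding(32000, dim)  # e.g. instruction tokens
        self.fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, image_feats, token_ids):
        # Concatenate projected image patches with text token embeddings,
        # then fuse them into one multi-modal feature sequence.
        tokens = torch.cat(
            [self.vision_proj(image_feats), self.text_embed(token_ids)], dim=1
        )
        return self.fusion(tokens)

class ToyVLA(nn.Module):
    """Backbone + action head: one possible action-prediction formulation."""
    def __init__(self, backbone: nn.Module, dim: int = 512, action_dim: int = 7):
        super().__init__()
        self.backbone = backbone
        # Continuous regression head (e.g. 6-DoF end-effector delta + gripper).
        self.action_head = nn.Linear(dim, action_dim)

    def forward(self, image_feats, token_ids):
        feats = self.backbone(image_feats, token_ids)
        # Pool over the sequence and predict one action.
        return self.action_head(feats.mean(dim=1))

vla = ToyVLA(ToyVLMBackbone())
image_feats = torch.randn(1, 196, 768)        # fake ViT features
token_ids = torch.randint(0, 32000, (1, 16))  # fake instruction tokens
print(vla(image_feats, token_ids).shape)      # torch.Size([1, 7])
```

In the paper's terms, the pieces varied in such a sketch are exactly the design axes under study: which backbone fills the `ToyVLMBackbone` slot, how the action head is formulated (for example continuous regression versus discretized action tokens), and what data mix is used for training.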

Results

Task               | Dataset                 | Metric                                   | Value | Model
Robot Manipulation | CALVIN                  | avg. sequence length (D to D)            | 4.25  | RoboVLMs
Robot Manipulation | SimplerEnv-Google Robot | Variant Aggregation                      | 0.463 | RoboVLM
Robot Manipulation | SimplerEnv-Google Robot | Variant Aggregation-Move Near            | 0.56  | RoboVLM
Robot Manipulation | SimplerEnv-Google Robot | Variant Aggregation-Open/Close Drawer    | 0.085 | RoboVLM
Robot Manipulation | SimplerEnv-Google Robot | Variant Aggregation-Pick Coke Can        | 0.683 | RoboVLM
Robot Manipulation | SimplerEnv-Google Robot | Visual Matching                          | 0.563 | RoboVLM
Robot Manipulation | SimplerEnv-Google Robot | Visual Matching-Move Near                | 0.663 | RoboVLM
Robot Manipulation | SimplerEnv-Google Robot | Visual Matching-Open/Close Drawer        | 0.268 | RoboVLM
Robot Manipulation | SimplerEnv-Google Robot | Visual Matching-Pick Coke Can            | 0.727 | RoboVLM
Robot Manipulation | SimplerEnv-Widow X      | Average                                  | 0.135 | RoboVLM
Robot Manipulation | SimplerEnv-Widow X      | Put Carrot on Plate                      | 0.25  | RoboVLM
Robot Manipulation | SimplerEnv-Widow X      | Put Spoon on Towel                       | 0.208 | RoboVLM
Robot Manipulation | SimplerEnv-Widow X      | Stack Green Block on Yellow Block        | 0.083 | RoboVLM
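For context on the CALVIN row: the benchmark evaluates chains of five consecutive language instructions and reports the average number of subtasks completed before the first failure ("D to D" denotes training and evaluating on the D split). A hedged sketch of how such a score is typically computed, with illustrative names, is shown below.

```python
# Sketch (assumed, not from the paper) of CALVIN's avg. sequence length:
# each rollout is a 5-instruction chain; the score is the mean count of
# consecutive subtasks completed before the first failure.
def avg_sequence_length(rollouts: list[list[bool]]) -> float:
    """rollouts: per-chain success flags for the 5 subtasks, in order."""
    lengths = []
    for chain in rollouts:
        n = 0
        for ok in chain:
            if not ok:
                break  # chain ends at the first failed subtask
            n += 1
        lengths.append(n)
    return sum(lengths) / len(lengths)

# Example: one chain completes 5/5, another fails at the third subtask.
print(avg_sequence_length([[True] * 5, [True, True, False, False, False]]))  # 3.5
```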

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
AnyPos: Automated Task-Agnostic Actions for Bimanual Manipulation (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)