Ricardo Garcia, ShiZhe Chen, Cordelia Schmid
Generalizing language-conditioned robotic policies to new tasks remains a significant challenge, hampered by the lack of suitable simulation benchmarks. In this paper, we address this gap by introducing GemBench, a novel benchmark for assessing the generalization capabilities of vision-language robotic manipulation policies. GemBench incorporates seven general action primitives and four levels of generalization, spanning novel placements, rigid and articulated objects, and complex long-horizon tasks. We evaluate state-of-the-art approaches on GemBench and also introduce a new method, 3D-LOTUS, which leverages rich 3D information for action prediction conditioned on language. While 3D-LOTUS excels in both efficiency and performance on seen tasks, it struggles with novel tasks. To address this, we present 3D-LOTUS++, a framework that integrates the motion planning capabilities of 3D-LOTUS with the task planning capabilities of LLMs and the object grounding accuracy of VLMs. 3D-LOTUS++ achieves state-of-the-art performance on the novel tasks of GemBench, setting a new standard for generalization in robotic manipulation. The benchmark, code, and trained models are available at https://www.di.ens.fr/willow/research/gembench/.
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Robot Manipulation | RLBench | Inference Speed (fps) | 9.5 | 3D-LOTUS |
| Robot Manipulation | RLBench | Input Image Size (px) | 256 | 3D-LOTUS |
| Robot Manipulation | RLBench | Success Rate (%, 18 tasks, 100 demos/task) | 83.1 | 3D-LOTUS |
| Robot Manipulation | RLBench | Training Time (A100 GPU-hours) | 40 | 3D-LOTUS |
| Robot Manipulation | RLBench | Training Time (8× V100, days) | 0.28 | 3D-LOTUS |
| Robot Manipulation | GemBench | Average Success Rate (%) | 48 | 3D-LOTUS++ |
| Robot Manipulation | GemBench | Average Success Rate (%) | 45.7 | 3D-LOTUS |