Runsheng Xu, Hao Xiang, Zhengzhong Tu, Xin Xia, Ming-Hsuan Yang, Jiaqi Ma
In this paper, we investigate the application of Vehicle-to-Everything (V2X) communication to improve the perception performance of autonomous vehicles. We present a robust cooperative perception framework with V2X communication using a novel vision Transformer. Specifically, we build a holistic attention model, namely V2X-ViT, to effectively fuse information across on-road agents (i.e., vehicles and infrastructure). V2X-ViT consists of alternating layers of heterogeneous multi-agent self-attention and multi-scale window self-attention, which capture inter-agent interactions and per-agent spatial relationships. These key modules are designed in a unified Transformer architecture to handle common V2X challenges, including asynchronous information sharing, pose errors, and the heterogeneity of V2X components. To validate our approach, we create a large-scale V2X perception dataset using CARLA and OpenCDA. Extensive experimental results demonstrate that V2X-ViT sets new state-of-the-art performance for 3D object detection and achieves robust performance even under harsh, noisy environments. The code is available at https://github.com/DerrickXuNu/v2x-vit.
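The alternating attention scheme described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it uses single-head attention with hypothetical weight names, a fixed window size, and omits the heterogeneity embeddings, delay compensation, multi-scale windows, and residual/normalization layers of the real V2X-ViT. It only shows the two token groupings: agents attending to each other at every spatial cell, then each agent's cells attending within local windows.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over tokens x: (N, d)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def v2xvit_block(features, params):
    """One alternating block (sketch): (1) multi-agent self-attention,
    where the tokens at each spatial cell are the A agents' features;
    (2) window self-attention, where the tokens are the cells inside
    each local window of a single agent's feature map."""
    A, H, W, d = features.shape  # agents, height, width, channels
    out = features.copy()
    # (1) fuse across agents at every (h, w) location
    for h in range(H):
        for w in range(W):
            out[:, h, w] = self_attention(out[:, h, w], *params["agent"])
    # (2) per-agent spatial attention inside non-overlapping windows
    win = params["win"]
    for a in range(A):
        for h0 in range(0, H, win):
            for w0 in range(0, W, win):
                patch = out[a, h0:h0 + win, w0:w0 + win].reshape(-1, d)
                out[a, h0:h0 + win, w0:w0 + win] = self_attention(
                    patch, *params["window"]).reshape(win, win, d)
    return out

# Toy demo: 3 agents sharing 4x4 BEV feature maps with 8 channels.
rng = np.random.default_rng(0)
d = 8
params = {
    "agent": tuple(0.1 * rng.normal(size=(d, d)) for _ in range(3)),
    "window": tuple(0.1 * rng.normal(size=(d, d)) for _ in range(3)),
    "win": 2,
}
feats = rng.normal(size=(3, 4, 4, d))
out = v2xvit_block(feats, params)
```

In the full model, several such blocks are stacked so that inter-agent fusion and spatial reasoning refine each other layer by layer.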
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| 3D Object Detection | V2XSet | AP0.5 (Noisy) | 0.836 | V2X-ViT |
| 3D Object Detection | V2XSet | AP0.5 (Perfect) | 0.882 | V2X-ViT |
| 3D Object Detection | V2XSet | AP0.7 (Noisy) | 0.614 | V2X-ViT |
| 3D Object Detection | V2XSet | AP0.7 (Perfect) | 0.712 | V2X-ViT |
| 3D Object Detection | V2X-SIM | mAOE | 0.383 | V2X-ViT |
| 3D Object Detection | V2X-SIM | mAP | 22.4 | V2X-ViT |
| 3D Object Detection | V2X-SIM | mASE | 0.25 | V2X-ViT |
| 3D Object Detection | V2X-SIM | mATE | 0.848 | V2X-ViT |