Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


V2X-SIM

Modalities: Point cloud, RGB Video, Videos, Custom. Introduced: 2022-02-17.

V2X-Sim, short for vehicle-to-everything simulation, is a synthetic collaborative perception dataset for autonomous driving, developed by the AI4CE Lab at NYU and the MediaBrain Group at SJTU to facilitate collaborative perception between multiple vehicles and roadside infrastructure. Data is collected from both roadside sensors and vehicles when they are present near the same intersection. With information from both the roadside infrastructure and the vehicles, the dataset aims to encourage research on collaborative perception tasks.

Although the data is not collected from the real world, highly realistic traffic simulation software is used to keep the dataset representative of real-world driving scenarios. Specifically, the traffic flow in the recordings is managed by CARLA-SUMO co-simulation, and three town maps from CARLA are currently used to increase the diversity of the dataset.

Here is a tutorial showing how to load the dataset: https://ai4ce.github.io/V2X-Sim/tutorial.html
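If the point clouds are stored in the common nuScenes-style flat float32 binary layout (an assumption, not stated on this page; the linked tutorial is the authoritative loading guide), a single LiDAR sweep can be read with a few lines of NumPy. The file path and the number of fields per point below are illustrative:

```python
# Minimal sketch of loading one LiDAR sweep, ASSUMING a nuScenes-like
# flat float32 binary layout (e.g. x, y, z, intensity per point).
# Check the official V2X-Sim tutorial for the actual file format.
import numpy as np

def load_point_cloud(path, fields_per_point=4):
    """Read a flat float32 binary into an (N, fields_per_point) array."""
    raw = np.fromfile(path, dtype=np.float32)
    return raw.reshape(-1, fields_per_point)

# Example usage (the path is hypothetical, not part of the dataset spec):
# points = load_point_cloud("sweeps/LIDAR_TOP/sample.bin")
# xyz = points[:, :3]  # 3D coordinates in the sensor frame
```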

Benchmarks

Each benchmark task reports the metrics mAP, mATE, mASE, and mAOE:

- 16k: mAP, mATE, mASE, mAOE
- 2D Classification: mAP, mATE, mASE, mAOE
- 2D Object Detection: mAP, mATE, mASE, mAOE
- 3D: mAP, mATE, mASE, mAOE
- 3D Object Detection: mAP, mATE, mASE, mAOE
- Object Detection: mAP, mATE, mASE, mAOE
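The metric names mAP, mATE, mASE, and mAOE match the nuScenes detection metrics. Assuming that convention holds here (an assumption, not confirmed on this page), mATE is the average Euclidean distance between the centres of matched predicted and ground-truth boxes. A minimal sketch:

```python
# Hedged sketch of mATE (mean Average Translation Error), ASSUMING the
# nuScenes-style definition: the average ground-plane centre distance,
# in metres, over matched prediction/ground-truth pairs. Matching the
# boxes to each other is assumed to have been done already.
import numpy as np

def translation_errors(pred_centers, gt_centers):
    """Ground-plane centre distance for each matched pred/gt pair."""
    pred = np.asarray(pred_centers, dtype=float)[:, :2]
    gt = np.asarray(gt_centers, dtype=float)[:, :2]
    return np.linalg.norm(pred - gt, axis=1)

def mate(pred_centers, gt_centers):
    """Mean translation error over all matched pairs."""
    return float(translation_errors(pred_centers, gt_centers).mean())

# e.g. mate([[1.0, 0.0, 0.0]], [[0.0, 0.0, 0.0]]) == 1.0
```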

Statistics

Papers: 32
Benchmarks: 24

Links

Homepage

Tasks

16k, 2D Classification, 2D Object Detection, 3D, 3D Object Detection, Object Detection