# Cam2BEV
The dataset contains two subsets of synthetic, semantically segmented road-scene images, created for developing and applying the methodology described in the paper "A Sim2Real Deep Learning Approach for the Transformation of Images from Multiple Vehicle-Mounted Cameras to a Semantically Segmented Image in Bird's Eye View" (IEEE Xplore, arXiv, YouTube).
The dataset can be used with the official code implementation of the Cam2BEV methodology, available on GitHub.
| Dataset | # Training Samples | # Validation Samples | # Vehicle Cameras | # Semantic Classes | Contained Images (examples) |
| --- | --- | --- | --- | --- | --- |
| Dataset 1: 360° Surround | 33199 | 3731 | 4 (front, rear, left, right) | 30 (CityScapes) | front camera, rear camera, left camera, right camera, bird's eye view, [bird's eye view incl. occlusion](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/1_FRLR/examples%20bev+occlusion.png), homography view |
| Dataset 2: Front Camera only | 32246 | 3172 | 1 (front) | 30 (CityScapes) | front camera, bird's eye view, bird's eye view incl. occlusion, homography view |
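Since both subsets store labels as color-coded segmentation images, a typical preprocessing step is converting the RGB class colors into a one-hot tensor before training. The following is a minimal sketch of that conversion; the three-color palette and the helper name `one_hot_from_rgb` are illustrative assumptions, not part of the dataset or the official code (the real data uses the full 30-class CityScapes palette).

```python
import numpy as np

# Hypothetical 3-class palette (RGB) in the CityScapes style; the actual
# dataset uses 30 classes with the standard CityScapes colors.
PALETTE = np.array([
    [128,  64, 128],  # road
    [  0,   0, 142],  # car
    [  0,   0,   0],  # unlabeled
], dtype=np.uint8)

def one_hot_from_rgb(img, palette=PALETTE):
    """Convert an (H, W, 3) RGB label image to an (H, W, C) one-hot tensor."""
    # Broadcast-compare every pixel against every palette color at once:
    # (H, W, 1, 3) == (1, 1, C, 3) -> (H, W, C, 3) -> all over RGB -> (H, W, C)
    matches = (img[:, :, None, :] == palette[None, None, :, :]).all(axis=-1)
    return matches.astype(np.float32)

# Tiny dummy label image: left half road, right half car.
img = np.zeros((2, 4, 3), dtype=np.uint8)
img[:, :2] = PALETTE[0]
img[:, 2:] = PALETTE[1]

oh = one_hot_from_rgb(img)
print(oh.shape)   # (2, 4, 3)
print(oh[0, 0])   # [1. 0. 0.] -> road
```

Pixels whose color is not in the palette simply get an all-zero vector, which makes palette mismatches easy to detect by checking that each pixel's one-hot vector sums to 1.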