OUTBACK: A Multimodal Synthetic Dataset for Rural Australian Off-road Robot Navigation
Semantic segmentation of external environments is one of the most important aspects of robot scene understanding. Semantic segmentation of urban environments has been investigated extensively, and many real-world and synthetic datasets have been utilised to achieve highly accurate segmentation. However, the number of off-road datasets available for robot navigation research remains limited. To address this, we introduce a novel framework [1] for generating varied, photorealistic synthetic off-road datasets that support multiple sensor modalities.
Using this framework, we created a synthetic multimodal dataset for off-road ground robot navigation in typical Western Australian outback conditions. The robot simulations for dataset generation were conducted on the NVIDIA Isaac Sim robotics simulator platform, and camera, LiDAR, and IMU sensor data were collected across four synthetic off-road environment scenarios. After revising some of the semantic classes introduced in our DICTA 2024 conference paper, we published the OUTBACK dataset, which consists of 7,817 image and LiDAR frames with corresponding annotations covering 16 semantic classes.
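To illustrate how a multimodal frame from such a dataset might be consumed, the sketch below loads one RGB image, its per-pixel semantic mask, and the matching LiDAR point cloud. The directory layout, file names, and array encodings here are hypothetical assumptions chosen for illustration, not the published OUTBACK structure.

```python
# Minimal sketch of loading one multimodal frame; the directory layout,
# file names, and encodings below are assumed, not the published
# OUTBACK structure.
from pathlib import Path

import numpy as np
from PIL import Image

DATASET_ROOT = Path("OUTBACK")  # hypothetical dataset root directory


def load_frame(scenario: str, frame_id: int):
    """Load the RGB image, semantic mask, and LiDAR points for one frame."""
    stem = f"{frame_id:06d}"
    rgb = np.asarray(Image.open(DATASET_ROOT / scenario / "rgb" / f"{stem}.png"))
    # Semantic mask: one uint8 class index per pixel (assumed encoding).
    mask = np.asarray(Image.open(DATASET_ROOT / scenario / "semantic" / f"{stem}.png"))
    # LiDAR: float32 (N, 4) array of x, y, z, intensity (assumed layout).
    points = np.fromfile(
        DATASET_ROOT / scenario / "lidar" / f"{stem}.bin", dtype=np.float32
    ).reshape(-1, 4)
    return rgb, mask, points


if __name__ == "__main__":
    rgb, mask, points = load_frame("scenario_01", 0)
    print(rgb.shape, mask.shape, points.shape)
    # With 16 semantic classes, mask values should fall in [0, 15].
    print("classes present:", np.unique(mask))
```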