Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


HS3-Bench: A Benchmark and Strong Baseline for Hyperspectral Semantic Segmentation in Driving Scenarios

Nick Theisen, Robin Bartsch, Dietrich Paulus, Peer Neubert

2024-09-17 · Hyperspectral Image Segmentation · Hyperspectral Semantic Segmentation · Segmentation · Autonomous Driving · Semantic Segmentation
Paper · PDF · Code (official)

Abstract

Semantic segmentation is an essential step for many vision applications in order to understand a scene and the objects within it. Recent progress in hyperspectral imaging technology enables its application in driving scenarios, and the hope is that the device's perceptive abilities provide an advantage over RGB cameras. Even though some datasets exist, no standard benchmark is available to systematically measure progress on this task and evaluate the benefit of hyperspectral data. In this paper, we work towards closing this gap by providing the HyperSpectral Semantic Segmentation benchmark (HS3-Bench). It combines annotated hyperspectral images from three driving-scenario datasets and provides standardized metrics, implementations, and evaluation protocols. We use the benchmark to derive two strong baseline models that surpass the previous state-of-the-art performance, with and without pre-training, on the individual datasets. Further, our results indicate that existing learning-based methods benefit more from leveraging additional RGB training data than from the additional hyperspectral channels. This poses important questions for future research on hyperspectral imaging for semantic segmentation in driving scenarios. Code to run the benchmark and the strong baseline approaches is available at https://github.com/nickstheisen/hyperseg.
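The four metrics reported in the results below (overall accuracy, average per-class accuracy, average F1, and mean Jaccard index / mIoU) can all be derived from a pixel-level confusion matrix. A minimal sketch, assuming standard definitions of these metrics; the function name and exact conventions here are illustrative, not taken from the hyperseg code:

```python
import numpy as np

def segmentation_metrics(conf: np.ndarray) -> dict:
    """conf[i, j] = number of pixels with true class i predicted as class j."""
    tp = np.diag(conf).astype(float)            # correctly classified pixels per class
    true_total = conf.sum(axis=1).astype(float)  # pixels of each true class
    pred_total = conf.sum(axis=0).astype(float)  # pixels predicted as each class

    accuracy = tp.sum() / conf.sum()             # overall pixel accuracy
    recall = tp / np.maximum(true_total, 1)      # per-class accuracy (recall)
    precision = tp / np.maximum(pred_total, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    iou = tp / np.maximum(true_total + pred_total - tp, 1)  # Jaccard per class

    return {
        "Accuracy": accuracy,
        "Average Accuracy": recall.mean(),       # mean of per-class recalls
        "Avg. F1": f1.mean(),
        "Jaccard (Mean)": iou.mean(),            # a.k.a. mIoU
    }
```

For example, a balanced two-class confusion matrix `[[3, 1], [1, 3]]` yields an overall accuracy of 0.75 and a mean Jaccard of 0.6. How the benchmark itself accumulates the confusion matrix (per image vs. over the whole test set) follows the evaluation protocol in the repository.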

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Segmentation | Hyperspectral City | Accuracy | 87.63 | RU-Net |
| Semantic Segmentation | Hyperspectral City | Average Accuracy | 54.14 | RU-Net |
| Semantic Segmentation | Hyperspectral City | Avg. F1 | 53.26 | RU-Net |
| Semantic Segmentation | Hyperspectral City | Jaccard (Mean) | 43.33 | RU-Net |
| Semantic Segmentation | Hyperspectral City | Accuracy | 86.6 | DeepLabV3+ |
| Semantic Segmentation | Hyperspectral City | Average Accuracy | 53.15 | DeepLabV3+ |
| Semantic Segmentation | Hyperspectral City | Avg. F1 | 51.83 | DeepLabV3+ |
| Semantic Segmentation | Hyperspectral City | Jaccard (Mean) | 40.79 | DeepLabV3+ |
| Semantic Segmentation | Hyperspectral City | Accuracy | 85.25 | U-Net |
| Semantic Segmentation | Hyperspectral City | Average Accuracy | 48.62 | U-Net |
| Semantic Segmentation | Hyperspectral City | Avg. F1 | 48.18 | U-Net |
| Semantic Segmentation | Hyperspectral City | Jaccard (Mean) | 37.73 | U-Net |
| Semantic Segmentation | HSI-Drive v2.0 | Accuracy | 96.08 | RU-Net |
| Semantic Segmentation | HSI-Drive v2.0 | Average Accuracy | 79.82 | RU-Net |
| Semantic Segmentation | HSI-Drive v2.0 | Avg. F1 | 82.34 | RU-Net |
| Semantic Segmentation | HSI-Drive v2.0 | Jaccard (Mean) | 72.18 | RU-Net |
| Semantic Segmentation | HSI-Drive v2.0 | Accuracy | 94.95 | U-Net |
| Semantic Segmentation | HSI-Drive v2.0 | Average Accuracy | 74.74 | U-Net |
| Semantic Segmentation | HSI-Drive v2.0 | Avg. F1 | 76.08 | U-Net |
| Semantic Segmentation | HSI-Drive v2.0 | Jaccard (Mean) | 64.95 | U-Net |
| Semantic Segmentation | HSI-Drive v2.0 | Accuracy | 92.51 | DeepLabV3+ |
| Semantic Segmentation | HSI-Drive v2.0 | Average Accuracy | 65.58 | DeepLabV3+ |
| Semantic Segmentation | HSI-Drive v2.0 | Avg. F1 | 67.86 | DeepLabV3+ |
| Semantic Segmentation | HSI-Drive v2.0 | Jaccard (Mean) | 56.63 | DeepLabV3+ |
| Semantic Segmentation | HyKo2-VIS | Accuracy | 86.72 | RU-Net |
| Semantic Segmentation | HyKo2-VIS | Average Accuracy | 68.79 | RU-Net |
| Semantic Segmentation | HyKo2-VIS | Average Jaccard | 58.64 | RU-Net |
| Semantic Segmentation | HyKo2-VIS | Avg. F1 | 69.19 | RU-Net |
| Semantic Segmentation | HyKo2-VIS | Accuracy | 85.36 | U-Net |
| Semantic Segmentation | HyKo2-VIS | Average Accuracy | 68.15 | U-Net |
| Semantic Segmentation | HyKo2-VIS | Average Jaccard | 57.39 | U-Net |
| Semantic Segmentation | HyKo2-VIS | Avg. F1 | 68.55 | U-Net |
| Semantic Segmentation | HyKo2-VIS | Accuracy | 84.1 | DeepLabV3+ |
| Semantic Segmentation | HyKo2-VIS | Average Accuracy | 63.01 | DeepLabV3+ |
| Semantic Segmentation | HyKo2-VIS | Average Jaccard | 53.22 | DeepLabV3+ |
| Semantic Segmentation | HyKo2-VIS | Avg. F1 | 64.9 | DeepLabV3+ |

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)