Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Seeing the roads through the trees: A benchmark for modeling spatial dependencies with aerial imagery

Caleb Robinson, Isaac Corley, Anthony Ortiz, Rahul Dodhia, Juan M. Lavista Ferres, Peyman Najafirad

2024-01-12 · Spatial Reasoning · Road Segmentation · Object Recognition · Semantic Segmentation

Paper · PDF · Code (official) · Code

Abstract

Fully understanding a complex high-resolution satellite or aerial imagery scene often requires spatial reasoning over a broad relevant context. The human object recognition system can understand objects in a scene over a long-range relevant context. For example, if a human observes an aerial scene showing sections of road broken up by tree canopy, they are unlikely to conclude that the road has actually been broken into disjoint pieces by trees; instead, they infer that the canopy of nearby trees is occluding the road. However, there has been limited research on the long-range context understanding of modern machine learning models. In this work we propose a road segmentation benchmark dataset, Chesapeake Roads Spatial Context (RSC), for evaluating the spatial long-range context understanding of geospatial machine learning models, and show how commonly used semantic segmentation models can fail at this task. For example, we show that a U-Net trained to segment roads from background in aerial imagery achieves 84% recall on unoccluded roads, but just 63.5% recall on roads covered by tree canopy, despite being trained to model both the same way. We further analyze how model performance changes as the relevant context for a decision (unoccluded roads in our case) varies in distance. We release the code to reproduce our experiments and the dataset of imagery and masks to encourage future research in this direction: https://github.com/isaaccorley/ChesapeakeRSC.
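The occluded-vs-unoccluded recall gap described above can be measured by conditioning pixel recall on the ground-truth category. A minimal sketch, assuming a hypothetical label encoding in which the ground truth distinguishes unoccluded road from road under tree canopy (the class IDs and function name here are illustrative, not the paper's official evaluation code):

```python
import numpy as np

def per_class_recall(pred: np.ndarray, target: np.ndarray, positive: int = 1) -> dict:
    """Recall of the predicted 'road' class, computed separately for each
    non-background ground-truth category (e.g. unoccluded road vs. road
    occluded by tree canopy)."""
    recalls = {}
    for cls in np.unique(target):
        if cls == 0:  # skip background
            continue
        mask = target == cls
        # a pixel of this category is recalled if the model predicted "road" there
        recalls[int(cls)] = float((pred[mask] == positive).mean())
    return recalls

# toy example: 1 = unoccluded road, 2 = road occluded by tree canopy
target = np.array([[0, 1, 1, 2, 2]])
pred   = np.array([[0, 1, 1, 1, 0]])
print(per_class_recall(pred, target))  # {1: 1.0, 2: 0.5}
```

Reporting recall per category, rather than a single pooled recall, is what exposes the gap the paper highlights (84% on unoccluded roads vs. 63.5% under canopy).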

Results

Task | Dataset | Metric | Value | Model
Semantic Segmentation | ChesapeakeRSC | DWR | 46.5 | U-Net (ResNet-18)
Semantic Segmentation | ChesapeakeRSC | DWR | 46.1 | DeepLabV3+ (ResNet-18)
Semantic Segmentation | ChesapeakeRSC | DWR | 45.7 | U-Net (ResNet-50)
Semantic Segmentation | ChesapeakeRSC | DWR | 10.7 | FCN
Road Segmentation | ChesapeakeRSC | DWR | 46.5 | U-Net (ResNet-18)
Road Segmentation | ChesapeakeRSC | DWR | 46.1 | DeepLabV3+ (ResNet-18)
Road Segmentation | ChesapeakeRSC | DWR | 45.7 | U-Net (ResNet-50)
Road Segmentation | ChesapeakeRSC | DWR | 10.7 | FCN

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
MindJourney: Test-Time Scaling with World Models for Spatial Reasoning (2025-07-16)
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)