Agent Journey Beyond RGB: Unveiling Hybrid Semantic-Spatial Environmental Representations for Vision-and-Language Navigation

Xuesong Zhang, Yunbo Xu, Jia Li, Zhenzhen Hu, Richang Hong

2024-12-09 · Vision-Language Navigation · Visual Navigation · Vision and Language Navigation · Object Localization

Abstract

Navigating unseen environments from natural language instructions remains difficult for egocentric agents in Vision-and-Language Navigation (VLN). Existing approaches rely primarily on RGB images for environmental representation, underutilizing latent textual semantic and spatial cues and leaving the modality gap between instructions and these scarce environmental representations unresolved. Intuitively, humans inherently ground semantic knowledge within spatial layouts during indoor navigation. Inspired by this, we propose a versatile Semantic Understanding and Spatial Awareness (SUSA) architecture that encourages agents to ground the environment from diverse perspectives. SUSA includes a Textual Semantic Understanding (TSU) module, which narrows the modality gap between instructions and environments by generating and associating descriptions of environmental landmarks in the agent's immediate surroundings. Additionally, a Depth-enhanced Spatial Perception (DSP) module incrementally constructs a depth exploration map, enabling a more nuanced comprehension of environmental layouts. Experiments demonstrate that SUSA's hybrid semantic-spatial representations effectively enhance navigation performance, setting a new state of the art across three VLN benchmarks (REVERIE, R2R, and SOON). The source code will be publicly available.
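
This page carries only the abstract, so the following PyTorch sketch is purely illustrative of how the two modules might fit together. Everything in it is an assumption made for this example: the module interfaces, the cross-attention fusion in TSU, the convolutional encoder over the depth exploration map in DSP, the concatenation-based fusion, and all dimensions are hypothetical, not the authors' implementation (refer to the official code for that).

```python
import torch
import torch.nn as nn

class TextualSemanticUnderstanding(nn.Module):
    """Hypothetical TSU sketch: fuse the instruction with generated landmark
    descriptions of the surrounding views via cross-attention, so the agent
    grounds the instruction in textual environment semantics."""
    def __init__(self, dim=768, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, instruction_tokens, landmark_desc_tokens):
        # instruction_tokens:   (B, L_i, D) encoded instruction
        # landmark_desc_tokens: (B, L_d, D) encoded view-wise landmark captions
        fused, _ = self.cross_attn(instruction_tokens,
                                   landmark_desc_tokens,
                                   landmark_desc_tokens)
        return fused

class DepthSpatialPerception(nn.Module):
    """Hypothetical DSP sketch: encode an incrementally accumulated top-down
    depth exploration map into a single spatial feature vector."""
    def __init__(self, dim=768):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, depth_map):
        # depth_map: (B, 1, H, W) accumulated depth exploration map
        return self.encoder(depth_map)

class SUSA(nn.Module):
    """Toy hybrid semantic-spatial representation: concatenate the pooled TSU
    feature with the DSP feature and predict an action distribution."""
    def __init__(self, dim=768, num_actions=4):
        super().__init__()
        self.tsu = TextualSemanticUnderstanding(dim)
        self.dsp = DepthSpatialPerception(dim)
        self.policy = nn.Linear(2 * dim, num_actions)

    def forward(self, instruction_tokens, landmark_desc_tokens, depth_map):
        semantic = self.tsu(instruction_tokens, landmark_desc_tokens).mean(dim=1)
        spatial = self.dsp(depth_map)
        return self.policy(torch.cat([semantic, spatial], dim=-1))

# Smoke test with random tensors standing in for real encoder outputs.
model = SUSA()
logits = model(torch.randn(2, 20, 768), torch.randn(2, 36, 768),
               torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```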

Results

Task                 Dataset    Metric      Value   Model
Object Localization  REVERIE    Nav-Length  17.86   SUSA
Object Localization  REVERIE    Nav-SPL     41.54   SUSA
Object Localization  REVERIE    Nav-Succ    54.39   SUSA
Object Localization  REVERIE    RGS         36.11   SUSA
Object Localization  REVERIE    RGSPL       27.31   SUSA
Visual Navigation    R2R        SPL         0.6383  SUSA
Visual Navigation    SOON Test  Nav-SPL     25.47   SUSA
Visual Navigation    SOON Test  SR          36.87   SUSA
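
Most of the metrics above are standard VLN measures; the R2R SPL value appears to be on a 0-1 scale while the other entries use a 0-100 scale. For reference, SPL (Success weighted by Path Length, Anderson et al., 2018) combines success with path efficiency. A minimal implementation, with illustrative names:

```python
def spl(successes, shortest_path_lengths, agent_path_lengths):
    """Success weighted by Path Length (Anderson et al., 2018):
    SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i),
    where S_i is a 0/1 success flag, l_i the shortest-path length from
    start to goal, and p_i the length of the path the agent actually took.
    """
    n = len(successes)
    return sum(s * l / max(p, l) for s, l, p in
               zip(successes, shortest_path_lengths, agent_path_lengths)) / n

# Example: two episodes, one successful along a near-optimal path.
print(spl([1, 0], [10.0, 8.0], [12.0, 15.0]))  # ~0.4167
```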

Related Papers

2025-07-17  SE-VLN: A Self-Evolving Vision-Language Navigation Framework Based on Multimodal Large Language Models
2025-07-17  Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities
2025-06-30  NavMorph: A Self-Evolving World Model for Vision-and-Language Navigation in Continuous Environments
2025-06-28  Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval
2025-06-28  VoteSplat: Hough Voting Gaussian Splatting for 3D Scene Understanding
2025-06-23  RAG-6DPose: Retrieval-Augmented 6D Pose Estimation via Leveraging CAD as Knowledge Base
2025-06-20  VLN-R1: Vision-Language Navigation via Reinforcement Fine-Tuning
2025-06-17  CDP: Towards Robust Autoregressive Visuomotor Policy Learning via Causal Diffusion